Search Results

Search found 6346 results on 254 pages for 'wordpress members'.

Page 252/254 | < Previous Page | 248 249 250 251 252 253 254  | Next Page >

  • Need to align part of list item to right of li - using CSS3 jQuery column layout

    - by Brad
    Using this jQuery script to achieve CSS3 3-columns, to display a list of members alphabetically. I need it to display this way, which it does:

        A D
        B E
        C F

    Here is what I am using: http://www.csscripting.com/css-multi-column/example6.php? (using this js file: http://www.csscripting.com/js/v1.0beta/css3-multi-column.js). To the right of each member it has their phone extension, which I want to float to the right so it is easy to read. I tried putting the phone extension within a div and span, and when I do that it tends to screw up at the last item in each column: the person's name is placed correctly, but their extension becomes the very first item in the next column. Screenshot of what it is doing: http://cl.ly/fq4

    HTML code:

        <div class="Article3Col">
          <ul>
            <li>Doe, John <div style="float:right;">8317</div></li>
            <li>Doe, Sally <div style="float:right;">8729</div></li>
            <li>Doe, John <div style="float:right;">8317</div></li>
            <li>Doe, Sally <div style="float:right;">8729</div></li>
            <li>Doe, John <div style="float:right;">8317</div></li>
            <li>Doe, Sally <div style="float:right;">8729</div></li>
            <li>Doe, John <div style="float:right;">8317</div></li>
            <li>Doe, Sally <div style="float:right;">8729</div></li>
            <li>Doe, John <div style="float:right;">8317</div></li>
            <li>Doe, Sally <div style="float:right;">8729</div></li>
            <li>Doe, John <div style="float:right;">8317</div></li>
            <li>Doe, Sally <div style="float:right;">8729</div></li>
            <li>Doe, John <div style="float:right;">8317</div></li>
            <li>Doe, Sally <div style="float:right;">8729</div></li>
            <li>Doe, John <div style="float:right;">8317</div></li>
            <li>Doe, Sally <div style="float:right;">8729</div></li>
            <li>Doe, John <div style="float:right;">8317</div></li>
            <li>Doe, Sally <div style="float:right;">8729</div></li>
          </ul>
        </div>

    CSS:

        .Article3Col { column-count:3; }

    Any help is appreciated.

    Read the article

  • Prefer extension methods for encapsulation and reusability?

    - by tzaman
    edit4: wikified, since this seems to have morphed more into a discussion than a specific question.

    In C++ programming, it's generally considered good practice to "prefer non-member non-friend functions" instead of instance methods. This has been recommended by Scott Meyers in this classic Dr. Dobbs article, and repeated by Herb Sutter and Andrei Alexandrescu in C++ Coding Standards (item 44); the general argument being that if a function can do its job solely by relying on the public interface exposed by the class, it actually increases encapsulation to have it be external. While this confuses the "packaging" of the class to some extent, the benefits are generally considered worth it.

    Now, ever since I've started programming in C#, I've had a feeling that here is the ultimate expression of the concept that they're trying to achieve with "non-member, non-friend functions that are part of a class interface". C# adds two crucial components to the mix - the first being interfaces, and the second extension methods:

    Interfaces allow a class to formally specify their public contract, the methods and properties that they're exposing to the world. Any other class can choose to implement the same interface and fulfill that same contract.

    Extension methods can be defined on an interface, providing any functionality that can be implemented via the interface to all implementers automatically. And best of all, because of the "instance syntax" sugar and IDE support, they can be called the same way as any other instance method, eliminating the cognitive overhead!

    So you get the encapsulation benefits of "non-member, non-friend" functions with the convenience of members. Seems like the best of both worlds to me; the .NET library itself provides a shining example in LINQ. However, everywhere I look I see people warning against extension method overuse; even the MSDN page itself states:

        In general, we recommend that you implement extension methods sparingly and only when you have to.

    (edit: Even in the current .NET library, I can see places where it would've been useful to have extensions instead of instance methods - for example, all of the utility functions of List<T> (Sort, BinarySearch, FindIndex, etc.) would be incredibly useful if they were lifted up to IList<T> - getting free bonus functionality like that adds a lot more benefit to implementing the interface.)

    So what's the verdict? Are extension methods the acme of encapsulation and code reuse, or am I just deluding myself?

    (edit2: In response to Tomas - while C# did start out with Java's (overly, imo) OO mentality, it seems to be embracing more multi-paradigm programming with every new release; the main thrust of this question is whether using extension methods to drive a style change (towards more generic / functional C#) is useful or worthwhile.)

    edit3: overridable extension methods

    The only real problem identified so far with this approach is that you can't specialize extension methods if you need to. I've been thinking about the issue, and I think I've come up with a solution. Suppose I have an interface MyInterface, which I want to extend - I define my extension methods in a MyExtension static class, and pair it with another interface, call it MyExtensionOverrider.

    MyExtension methods are defined according to this pattern:

        public static int MyMethod(this MyInterface obj, int arg, bool attemptCast = true)
        {
            if (attemptCast && obj is MyExtensionOverrider)
            {
                return ((MyExtensionOverrider)obj).MyMethod(arg);
            }
            // regular implementation here
        }

    The override interface mirrors all of the methods defined in MyExtension, except without the this or attemptCast parameters:

        public interface MyExtensionOverrider
        {
            int MyMethod(int arg);
            string MyOtherMethod();
        }

    Now, any class can implement the interface and get the default extension functionality:

        public class MyClass : MyInterface { ... }

    Anyone that wants to override it with specific implementations can additionally implement the override interface:

        public class MySpecializedClass : MyInterface, MyExtensionOverrider
        {
            public int MyMethod(int arg)
            {
                // specialized implementation for one method
            }
            public string MyOtherMethod()
            {
                // fall back to the default for others
                return MyExtension.MyOtherMethod(this, attemptCast: false);
            }
        }

    And there we go: extension methods provided on an interface, with the option of complete extensibility if needed. Fully general too: the interface itself doesn't need to know about the extension / override, and multiple extension / override pairs can be implemented without interfering with each other.

    I can see three problems with this approach:

    - It's a little bit fragile - the extension methods and override interface have to be kept synchronized manually.
    - It's a little bit ugly - implementing the override interface involves boilerplate for every function you don't want to specialize.
    - It's a little bit slow - there's an extra bool comparison and cast attempt added to the mainline of every method.

    Still, all those notwithstanding, I think this is the best we can get until there's language support for interface functions. Thoughts?
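    For readers unfamiliar with the mechanics being discussed, here is a minimal self-contained sketch of an extension method defined against an interface; the names (IShape, Describe, Circle) are illustrative only and not from the post:

        using System;

        public interface IShape
        {
            double Area();
        }

        public static class ShapeExtensions
        {
            // Defined once against the interface, this becomes callable as
            // shape.Describe() on every IShape implementation.
            public static string Describe(this IShape shape)
            {
                return "Shape with area " + shape.Area();
            }
        }

        public class Circle : IShape
        {
            public double Radius { get; set; }
            public double Area() { return Math.PI * Radius * Radius; }
        }

        // Usage: new Circle { Radius = 2 }.Describe() resolves to the
        // extension method through the interface.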

    Read the article

  • Fragment shaders on a texture

    - by Snowangelic
    Hello Stack Overflow. I am trying to add some post-processing capabilities to a program. The rendering is done using OpenGL. I just want to allow the program to load some home-made fragment shaders and use them on the video stream. I wrote a little piece of shader using "OpenGL Shader Builder" that just turns a texture to grayscale. The shader works well in the shader builder but I can't make it work in the main program: the screen stays all black. Here is the setup:

        @implementation PluginGLView

        - (id) initWithCoder: (NSCoder *) coder
        {
            const GLubyte * strExt;
            if ((self = [super initWithCoder:coder]) == nil)
                return nil;
            glLock = [[NSLock alloc] init];
            if (nil == glLock) {
                [self release];
                return nil;
            }
            // Init pixel format attribs
            NSOpenGLPixelFormatAttribute attrs[] = {
                NSOpenGLPFAAccelerated,
                NSOpenGLPFANoRecovery,
                NSOpenGLPFADoubleBuffer,
                0
            };
            // Get pixel format from OpenGL
            NSOpenGLPixelFormat* pixFmt = [[NSOpenGLPixelFormat alloc] initWithAttributes:attrs];
            if (!pixFmt) {
                NSLog(@"No Accelerated OpenGL pixel format found\n");
                NSOpenGLPixelFormatAttribute attrs2[] = {
                    NSOpenGLPFANoRecovery,
                    0
                };
                // Get pixel format from OpenGL
                pixFmt = [[NSOpenGLPixelFormat alloc] initWithAttributes:attrs2];
                if (!pixFmt) {
                    NSLog(@"No OpenGL pixel format found!\n");
                    [self release];
                    return nil;
                }
            }
            [self setPixelFormat:[pixFmt autorelease]];
            /*
            long swapInterval = 1;
            [[self openGLContext] setValues:&swapInterval forParameter:NSOpenGLCPSwapInterval];
            */
            [glLock lock];
            [[self openGLContext] makeCurrentContext];
            // Init object members
            strExt = glGetString(GL_EXTENSIONS);
            texture_range = gluCheckExtension((const unsigned char *)"GL_APPLE_texture_range", strExt) ? GL_TRUE : GL_FALSE;
            texture_hint = GL_STORAGE_SHARED_APPLE;
            client_storage = gluCheckExtension((const unsigned char *)"GL_APPLE_client_storage", strExt) ? GL_TRUE : GL_FALSE;
            rect_texture = gluCheckExtension((const unsigned char *)"GL_EXT_texture_rectangle", strExt) ? GL_TRUE : GL_FALSE;
            // Setup some basic OpenGL stuff
            glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
            glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);
            glColor4f(1.0f, 1.0f, 1.0f, 1.0f);
            glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
            glClear(GL_COLOR_BUFFER_BIT);
            // Loads the shaders
            shader = LoadShader(GL_FRAGMENT_SHADER, "/Users/alexandremathieu/fragment.fs");
            program = glCreateProgram();
            glAttachShader(program, shader);
            glLinkProgram(program);
            glUseProgram(program);
            [NSOpenGLContext clearCurrentContext];
            [glLock unlock];
            image_width = 1024;
            image_height = 512;
            image_depth = 16;
            image_type = GL_UNSIGNED_SHORT_1_5_5_5_REV;
            image_base = (GLubyte *) calloc(((IMAGE_COUNT * image_width * image_height) / 3) * 4, image_depth >> 3);
            if (image_base == nil) {
                [self release];
                return nil;
            }
            // Create and load textures for the first time
            [self loadTextures:GL_TRUE];
            // Init fps timer
            //gettimeofday(&cycle_time, NULL);
            drawBG = YES;
            // Call for a redisplay
            noDisplay = YES;
            PSXDisplay.Disabled = 1;
            [self setNeedsDisplay:true];
            return self;
        }

    And here is the "render screen" function, which basically renders the screen:

        - (void)renderScreen
        {
            int bufferIndex = whichImage;
            glBindTexture(GL_TEXTURE_RECTANGLE_EXT, bufferIndex+1);
            glUseProgram(program);
            int loc = glGetUniformLocation(program, "texture");
            glUniform1i(loc, bufferIndex+1);
            glTexSubImage2D(GL_TEXTURE_RECTANGLE_EXT, 0, 0, 0, image_width, image_height, GL_BGRA, image_type, image[bufferIndex]);
            glBegin(GL_QUADS);
            glTexCoord2f(0.0f, 0.0f);
            glVertex2f(-1.0f, 1.0f);
            glTexCoord2f(0.0f, image_height);
            glVertex2f(-1.0f, -1.0f);
            glTexCoord2f(image_width, image_height);
            glVertex2f(1.0f, -1.0f);
            glTexCoord2f(image_width, 0.0f);
            glVertex2f(1.0f, 1.0f);
            glEnd();
            [[self openGLContext] flushBuffer];
            [NSOpenGLContext clearCurrentContext];
            //[glLock unlock];
        }

    And finally here's the shader:

        uniform sampler2DRect texture;

        void main()
        {
            vec4 color, texel;
            color = gl_Color;
            texel = texture2DRect(texture, gl_TexCoord[0].xy);
            color *= texel;
            // Begin Shader
            float gray = 0.0;
            gray += (color.r + color.g + color.b) / 3.0;
            color = vec4(gray, gray, gray, color.a);
            // End Shader
            gl_FragColor = color;
        }

    The loading and using of shaders works, since I am able to turn the screen all red with this shader:

        void main() {
            gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0);
        }

    If the shader contains a syntax error I get an error message from the LoadShader function, etc. If I remove the use of the shader, everything works normally. I think the problem comes from the "passing the texture as a uniform parameter" thing. But these are my very first steps with OpenGL and I can't be sure of anything. Don't hesitate to ask for more info. Thank you Stack O.

    Read the article

  • Mutating the expression tree of a predicate to target another type

    - by Jon
    Intro

    In the application I'm currently working on, there are two kinds of each business object: the "ActiveRecord" type and the "DataContract" type. So for example, we have:

        namespace ActiveRecord {
            class Widget {
                public int Id { get; set; }
            }
        }

        namespace DataContracts {
            class Widget {
                public int Id { get; set; }
            }
        }

    The database access layer takes care of "translating" between hierarchies: you can tell it to update a DataContracts.Widget, and it will magically create an ActiveRecord.Widget with the same property values and save that. The problem I have surfaced when attempting to refactor this database access layer.

    The Problem

    I want to add methods like the following to the database access layer:

        // Widget is DataContract.Widget
        interface DbAccessLayer {
            IEnumerable<Widget> GetMany(Expression<Func<Widget, bool>> predicate);
        }

    The above is a simple general-use "get" method with a custom predicate. The only point of interest is that I'm not passing in an anonymous function but rather an expression tree. This is done because inside DbAccessLayer we have to query ActiveRecord.Widget efficiently (LINQ to SQL) and not have the database return all ActiveRecord.Widget instances and then filter the enumerable collection. We need to pass in an expression tree, so we ask for one as the parameter for GetMany.

    The snag: the parameter we have needs to be magically transformed from an Expression<Func<DataContract.Widget, bool>> to an Expression<Func<ActiveRecord.Widget, bool>>. This is where I haven't managed to pull it off...

    Attempted Solution

    What we'd like to do inside GetMany is:

        IEnumerable<DataContract.Widget> GetMany(
            Expression<Func<DataContract.Widget, bool>> predicate)
        {
            var lambda = Expression.Lambda<Func<ActiveRecord.Widget, bool>>(
                predicate.Body, predicate.Parameters);
            // use lambda to query ActiveRecord.Widget and return some value
        }

    This won't work because in a typical scenario, for example if:

        predicate == w => w.Id == 0;

    ...the expression tree contains a MemberAccessExpression instance which has a MemberInfo property (named Member) that points to members of DataContract.Widget. There are also ParameterExpression instances both in the expression tree and in its parameter expression collection (predicate.Parameters).

    After searching a bit, I found System.Linq.Expressions.ExpressionVisitor (its source can be found here in the context of a how-to, very helpful), which is a convenient way to modify an expression tree. Armed with this, I implemented a visitor. This simple visitor only takes care of changing the types in member access and parameter expressions. It may not be complete, but it's fine for the expression w => w.Id == 0.

        internal class Visitor : ExpressionVisitor
        {
            private readonly Func<Type, Type> dataContractToActiveRecordTypeConverter;

            public Visitor(Func<Type, Type> dataContractToActiveRecordTypeConverter)
            {
                this.dataContractToActiveRecordTypeConverter = dataContractToActiveRecordTypeConverter;
            }

            protected override Expression VisitMember(MemberExpression node)
            {
                var dataContractType = node.Member.ReflectedType;
                var activeRecordType = this.dataContractToActiveRecordTypeConverter(dataContractType);
                var converted = Expression.MakeMemberAccess(
                    base.Visit(node.Expression),
                    activeRecordType.GetProperty(node.Member.Name));
                return converted;
            }

            protected override Expression VisitParameter(ParameterExpression node)
            {
                var dataContractType = node.Type;
                var activeRecordType = this.dataContractToActiveRecordTypeConverter(dataContractType);
                return Expression.Parameter(activeRecordType, node.Name);
            }
        }

    With this visitor, GetMany becomes:

        IEnumerable<DataContract.Widget> GetMany(
            Expression<Func<DataContract.Widget, bool>> predicate)
        {
            var visitor = new Visitor(...);
            var lambda = Expression.Lambda<Func<ActiveRecord.Widget, bool>>(
                visitor.Visit(predicate.Body),
                predicate.Parameters.Select(p => visitor.Visit(p)));
            var widgets = ActiveRecord.Widget.Repository().Where(lambda);

            // This is just for reference, see below
            Expression<Func<ActiveRecord.Widget, bool>> referenceLambda = w => w.Id == 0;

            // Here we'd convert the widgets to instances of DataContract.Widget and
            // return them -- this has nothing to do with the question though.
        }

    Results

    The good news is that lambda is constructed just fine. The bad news is that it isn't working; it's blowing up on me when I try to use it (the exception messages are really not helpful at all). I have examined the lambda my code produces and a hardcoded lambda with the same expression; they look exactly the same. I spent hours in the debugger trying to find some difference, but I can't.

    When predicate is w => w.Id == 0, lambda looks exactly like referenceLambda. But the latter works with e.g. IQueryable<T>.Where, while the former does not (I have tried this in the immediate window of the debugger). I should also mention that when predicate is w => true, it all works just fine. Therefore I am assuming that I'm not doing enough work in Visitor, but I can't find any more leads to follow on.

    Can someone point me in the right direction? Thanks in advance for your help!
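    For reference, one common pitfall with visitors of this shape is that VisitParameter returns a brand-new ParameterExpression on every call, so the parameter referenced inside the rewritten body is a different object from the one the lambda is built with. A minimal hedged sketch of a visitor that caches the mapping so each original parameter maps to exactly one replacement instance (names are illustrative, not from the post):

        using System;
        using System.Collections.Generic;
        using System.Linq.Expressions;

        // Sketch: maps each original parameter to a single replacement instance,
        // so references in the body and the lambda's parameter list coincide.
        internal class ParameterMappingVisitor : ExpressionVisitor
        {
            private readonly Dictionary<ParameterExpression, ParameterExpression> map
                = new Dictionary<ParameterExpression, ParameterExpression>();

            private readonly Func<Type, Type> typeConverter;

            public ParameterMappingVisitor(Func<Type, Type> typeConverter)
            {
                this.typeConverter = typeConverter;
            }

            protected override Expression VisitParameter(ParameterExpression node)
            {
                ParameterExpression replacement;
                if (!map.TryGetValue(node, out replacement))
                {
                    replacement = Expression.Parameter(typeConverter(node.Type), node.Name);
                    map.Add(node, replacement);
                }
                return replacement; // always the same instance for a given original
            }
        }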

    Read the article

  • LevelToVisibilityConverter in Silverlight 4

    - by prince23
        <UserControl x:Class="SLGridImage.MainPage"
            xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
            xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
            xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
            xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
            mc:Ignorable="d" d:DesignHeight="300" d:DesignWidth="400"
            xmlns:sdk="http://schemas.microsoft.com/winfx/2006/xaml/presentation/sdk">
            <UserControl.Resources>
                <local:LevelToVisibilityConverter x:Key="LevelToVisibility" />
            </UserControl.Resources>
            <Grid x:Name="LayoutRoot" Background="White">
                <sdk:DataGrid x:Name="dgMarks" CanUserResizeColumns="False" SelectionMode="Single"
                    AutoGenerateColumns="False" VerticalAlignment="Top"
                    ItemsSource="{Binding MarkCollection}" IsReadOnly="True" Margin="13,44,0,0"
                    RowDetailsVisibilityMode="Collapsed" Height="391" HorizontalAlignment="Left"
                    Width="965" VerticalScrollBarVisibility="Visible">
                    <sdk:DataGrid.Columns>
                        <sdk:DataGridTemplateColumn>
                            <sdk:DataGridTemplateColumn.CellTemplate>
                                <DataTemplate>
                                    <Button x:Name="myButton" Click="myButton_Click">
                                        <StackPanel Orientation="Horizontal">
                                            <Image Margin="2, 2, 2, 2" x:Name="imgMarks" Stretch="Fill" Width="12" Height="12"
                                                Source="Images/test.png" VerticalAlignment="Center" HorizontalAlignment="Center"
                                                Visibility="{Binding Level, Converter={StaticResource LevelToVisibility}}" />
                                            <TextBlock Text="{Binding Level}" TextWrapping="NoWrap"></TextBlock>
                                        </StackPanel>
                                    </Button>
                                </DataTemplate>
                            </sdk:DataGridTemplateColumn.CellTemplate>
                        </sdk:DataGridTemplateColumn>
                        <sdk:DataGridTemplateColumn Header="Name">
                            <sdk:DataGridTemplateColumn.CellTemplate>
                                <DataTemplate>
                                    <Border>
                                        <TextBlock Text="{Binding Name}" />
                                    </Border>
                                </DataTemplate>
                            </sdk:DataGridTemplateColumn.CellTemplate>
                        </sdk:DataGridTemplateColumn>
                        <sdk:DataGridTemplateColumn Header="Marks" Width="80">
                            <sdk:DataGridTemplateColumn.CellTemplate>
                                <DataTemplate>
                                    <Border>
                                        <TextBlock Text="{Binding Marks}" />
                                    </Border>
                                </DataTemplate>
                            </sdk:DataGridTemplateColumn.CellTemplate>
                        </sdk:DataGridTemplateColumn>
                    </sdk:DataGrid.Columns>
                </sdk:DataGrid>
            </Grid>
        </UserControl>

    In the .cs code-behind:

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Net;
        using System.Windows;
        using System.Windows.Controls;
        using System.Windows.Documents;
        using System.Windows.Input;
        using System.Windows.Media;
        using System.Windows.Media.Animation;
        using System.Windows.Shapes;
        using System.Collections.ObjectModel;
        using System.ComponentModel;

        namespace SLGridImage
        {
            public partial class MainPage : UserControl
            {
                private MarksViewModel model = new MarksViewModel();

                public MainPage()
                {
                    InitializeComponent();
                    this.DataContext = model;
                }

                private void myButton_Click(object sender, RoutedEventArgs e) { }
            }

            public class MarksViewModel : INotifyPropertyChanged
            {
                public MarksViewModel()
                {
                    markCollection.Add(new Mark() { Name = "ABC", Marks = 23, Level = 0 });
                    markCollection.Add(new Mark() { Name = "XYZ", Marks = 67, Level = 1 });
                    markCollection.Add(new Mark() { Name = "YU", Marks = 56, Level = 0 });
                    markCollection.Add(new Mark() { Name = "AAA", Marks = 89, Level = 1 });
                }

                private ObservableCollection<Mark> markCollection = new ObservableCollection<Mark>();

                public ObservableCollection<Mark> MarkCollection
                {
                    get { return this.markCollection; }
                    set
                    {
                        this.markCollection = value;
                        OnPropertyChanged("MarkCollection");
                    }
                }

                public event PropertyChangedEventHandler PropertyChanged;

                public void OnPropertyChanged(string propName)
                {
                    if (PropertyChanged != null)
                        this.PropertyChanged(this, new PropertyChangedEventArgs(propName));
                }
            }

            public class Mark
            {
                public string Name { get; set; }
                public int Marks { get; set; }
                public int Level { get; set; }
            }

            public class LevelToVisibilityConverter : System.Windows.Data.IValueConverter
            {
                #region IValueConverter Members

                public object Convert(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture)
                {
                    Visibility isVisible = Visibility.Collapsed;
                    if ((value == null))
                        return isVisible;
                    int condition = (int)value;
                    isVisible = condition == 1 ? Visibility.Visible : Visibility.Collapsed;
                    return isVisible;
                }

                public object ConvertBack(object value, Type targetType, object parameter, System.Globalization.CultureInfo culture)
                {
                    throw new NotImplementedException();
                }

                #endregion
            }
        }

    When I run I get the error: "The type 'local:LevelToVisibilityConverter' was not found. Verify that you are not missing an assembly reference and that all referenced assemblies have been built." What am I missing here? Looking forward to a solution. Thank you.

    Read the article

  • C# Memoization of functions with arbitrary number of arguments

    - by Lirik
    I'm trying to create a memoization interface for functions with an arbitrary number of arguments, but I'm failing miserably. The first thing I tried is to define an interface for a function which gets memoized automatically upon execution:

        class EMAFunction : IFunction
        {
            Dictionary<List<object>, List<object>> map;

            class EMAComparer : IEqualityComparer<List<object>>
            {
                private int _multiplier = 97;

                public bool Equals(List<object> a, List<object> b)
                {
                    List<object> aVals = (List<object>)a[0];
                    int aPeriod = (int)a[1];
                    List<object> bVals = (List<object>)b[0];
                    int bPeriod = (int)b[1];
                    return (aVals.Count == bVals.Count) && (aPeriod == bPeriod);
                }

                public int GetHashCode(List<object> obj)
                {
                    // Don't compute hash code on null object.
                    if (obj == null)
                    {
                        return 0;
                    }
                    // Get length.
                    int length = obj.Count;
                    List<object> vals = (List<object>)obj[0];
                    int period = (int)obj[1];
                    return (_multiplier * vals.GetHashCode() * period.GetHashCode()) + length;
                }
            }

            public EMAFunction()
            {
                NumParams = 2;
                Name = "EMA";
                map = new Dictionary<List<object>, List<object>>(new EMAComparer());
            }

            #region IFunction Members

            public int NumParams { get; set; }
            public string Name { get; set; }

            public object Execute(List<object> parameters)
            {
                if (parameters.Count != NumParams)
                    throw new ArgumentException("The num params doesn't match!");

                if (!map.ContainsKey(parameters))
                {
                    //map.Add(parameters,
                    List<double> values = new List<double>();
                    List<object> asObj = (List<object>)parameters[0];
                    foreach (object val in asObj)
                    {
                        values.Add((double)val);
                    }
                    int period = (int)parameters[1];
                    asObj.Clear();
                    List<double> ema = TechFunctions.ExponentialMovingAverage(values, period);
                    foreach (double val in ema)
                    {
                        asObj.Add(val);
                    }
                    map.Add(parameters, asObj);
                }
                return map[parameters];
            }

            public void ClearMap()
            {
                map.Clear();
            }

            #endregion
        }

    Here are my tests of the function:

        private void MemoizeTest()
        {
            DataSet dataSet = DataLoader.LoadData(DataLoader.DataSource.FROM_WEB, 1024);
            List<String> labels = dataSet.DataLabels;
            Stopwatch sw = new Stopwatch();
            IFunction emaFunc = new EMAFunction();

            List<object> parameters = new List<object>();
            int numRuns = 1000;
            long sumTicks = 0;
            parameters.Add(dataSet.GetValues("open"));
            parameters.Add(12);

            // First call
            for (int i = 0; i < numRuns; ++i)
            {
                emaFunc.ClearMap(); // remove any memoization mappings
                sw.Start();
                emaFunc.Execute(parameters);
                sw.Stop();
                sumTicks += sw.ElapsedTicks;
            }
            Console.WriteLine("Average ticks not-memoized " + (sumTicks / numRuns));

            sumTicks = 0;
            // Repeat call
            for (int i = 0; i < numRuns; ++i)
            {
                sw.Start();
                emaFunc.Execute(parameters);
                sw.Stop();
                sumTicks += sw.ElapsedTicks;
            }
            Console.WriteLine("Average ticks memoized " + (sumTicks / numRuns));
        }

    The performance is confusing me... I expected the memoized function to be faster, but it didn't work out that way:

        Average ticks not-memoized 106,182
        Average ticks memoized 198,854

    I tried doubling the data instances to 2048, but the results were about the same:

        Average ticks not-memoized 232,579
        Average ticks memoized 446,280

    I did notice that it was correctly finding the parameters in the map and going directly to the map, but the performance was still slow... I'm open to troubleshooting help with this example, or if you have a better solution to the problem then please let me know what it is.
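    For contrast, here is a minimal generic single-argument memoizer in C#; it is an illustrative sketch of the basic technique, not the poster's IFunction design, and the names are made up:

        using System;
        using System.Collections.Generic;

        public static class Memoizer
        {
            // Wraps f so repeated calls with the same argument hit a cache.
            // TKey needs value-based equality (or a suitable comparer) for
            // the cache to actually produce hits.
            public static Func<TKey, TResult> Memoize<TKey, TResult>(Func<TKey, TResult> f)
            {
                var cache = new Dictionary<TKey, TResult>();
                return key =>
                {
                    TResult value;
                    if (!cache.TryGetValue(key, out value))
                    {
                        value = f(key);
                        cache[key] = value;
                    }
                    return value;
                };
            }
        }

        // Usage: var slowSquare = Memoizer.Memoize<int, int>(n => n * n);
        // The second call with the same n returns the cached result.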

    Read the article

  • C++: Declaration of template class member specialization (+ Doxygen bonus question!)

    - by Ziv
    When I specialize a (static) member function/constant in a template class, I'm confused as to where the declaration is meant to go. Here's an example of what I want to do - yoinked directly from IBM's reference on template specialization:

        template<class T> class X {
        public:
            static T v;
            static void f(T);
        };

        template<class T> T X<T>::v = 0;
        template<class T> void X<T>::f(T arg) { v = arg; }

        template<> char* X<char*>::v = "Hello";
        template<> void X<float>::f(float arg) { v = arg * 2; }

        int main() {
            X<char*> a, b;
            X<float> c;
            c.f(10); // X<float>::v now set to 20
        }

    The question is, how do I divide this into header/cpp files? The generic implementation is obviously in the header, but what about the specialization? It can't go in the header file, because it's concrete, leading to multiple definition. But if it goes into the .cpp file, is code which calls X::f() aware of the specialization, or might it rely on the generic X::f()?

    So far I've got the specialization in the .cpp only, with no declaration in the header. I'm not having trouble compiling or even running my code (on gcc, don't remember the version at the moment), and it behaves as expected - recognizing the specialization. But A) I'm not sure this is correct, and I'd like to know what is, and B) my Doxygen documentation comes out wonky and very misleading (more on that in a moment).

    What seems most natural to me would be something like this, declaring the specialization in the header and defining it in the .cpp:

        === XClass.hpp ===

        #ifndef XCLASS_HPP
        #define XCLASS_HPP

        template<class T> class X {
        public:
            static T v;
            static void f(T);
        };

        template<class T> T X<T>::v = 0;
        template<class T> void X<T>::f(T arg) { v = arg; }

        /* declaration of specialized functions */
        template<> char* X<char*>::v;
        template<> void X<float>::f(float arg);

        #endif

        === XClass.cpp ===

        #include <XClass.hpp>

        /* concrete implementation of specialized functions */
        template<> char* X<char*>::v = "Hello";
        template<> void X<float>::f(float arg) { v = arg * 2; }

    ...but I have no idea if this is correct. The most immediate consequence of this issue, as I mentioned, is my Doxygen documentation, which doesn't seem to warm to the idea of member specialization, at least the way I'm defining it at the moment. It will always present only the first definition it finds of a function/constant, and I really need to be able to present the specializations as well. If I go so far as to re-declare the entire class, i.e. in the header:

        /* template declaration */
        template<class T> class X {
        public:
            static T v;
            static void f(T);
        };

        /* template member definition */
        template<class T> T X<T>::v = 0;
        template<class T> void X<T>::f(T arg) { v = arg; }

        /* declaration of specialized CLASS (with definitions in .cpp) */
        template<> class X<float> {
        public:
            static float v;
            static void f(float);
        };

    then it will display the different variations of X as different classes (which is fine by me), but I don't know how to get the same effect when specializing only a few select members of the class. I don't know if this is a mistake of mine, or a limitation of Doxygen - any ideas?

    Thanks much,
    Ziv

    Read the article

  • Using a boost::fusion::map in boost::spirit::karma

    - by user1097105
    I am using Boost Spirit to parse some text files into a data structure and now I am beginning to generate text from this data structure (using Spirit Karma). One attempt at a data structure is a boost::fusion::map (as suggested in an answer to this question). But although I can use boost::spirit::qi::parse() and get data into it easily, when I tried to generate text from it using Karma, I failed. Below is my attempt (look especially at the "map_data" type).

    After some reading and playing around with other fusion types, I found boost::fusion::vector and BOOST_FUSION_DEFINE_ASSOC_STRUCT. I succeeded in generating output with both of them, but they don't seem ideal: in vector you cannot access a member using a name (it is like a tuple) -- and in the other solution, I don't think I need both ways (member name and key type) to access the members.

        #include <iostream>
        #include <string>

        #include <boost/spirit/include/karma.hpp>

        #include <boost/fusion/include/map.hpp>
        #include <boost/fusion/include/make_map.hpp>
        #include <boost/fusion/include/vector.hpp>
        #include <boost/fusion/include/as_vector.hpp>
        #include <boost/fusion/include/transform.hpp>

        struct sb_key;
        struct id_key;

        using boost::fusion::pair;

        typedef boost::fusion::map
        <
            pair<sb_key, int>,
            pair<id_key, unsigned long>
        > map_data;

        typedef boost::fusion::vector
        <
            int,
            unsigned long
        > vector_data;

        #include <boost/fusion/include/define_assoc_struct.hpp>
        BOOST_FUSION_DEFINE_ASSOC_STRUCT(
            (), assocstruct_data,
            (int, a, sb_key)
            (unsigned long, b, id_key))

        namespace karma = boost::spirit::karma;

        template <typename X>
        std::string to_string(const X& data)
        {
            std::string generated;
            std::back_insert_iterator<std::string> sink(generated);
            karma::generate_delimited(sink, karma::int_ << karma::ulong_, karma::space, data);
            return generated;
        }

        int main()
        {
            map_data d1(boost::fusion::make_map<sb_key, id_key>(234, 35314988526ul));
            vector_data d2(boost::fusion::make_vector(234, 35314988526ul));
            assocstruct_data d3(234, 35314988526ul);

            std::cout << "map_data as_vector: " << boost::fusion::as_vector(d1) << std::endl;
            //std::cout << "map_data to_string: " << to_string(d1) << std::endl; //*FAIL No 1*
            std::cout << "at_key (sb_key): " << boost::fusion::at_key<sb_key>(d1)
                      << boost::fusion::at_c<0>(d1) << std::endl << std::endl;

            std::cout << "vector_data: " << d2 << std::endl;
            std::cout << "vector_data to_string: " << to_string(d2) << std::endl << std::endl;

            std::cout << "assoc_struct as_vector: " << boost::fusion::as_vector(d3) << std::endl;
            std::cout << "assoc_struct to_string: " << to_string(d3) << std::endl;
            std::cout << "at_key (sb_key): " << boost::fusion::at_key<sb_key>(d3)
                      << d3.a << boost::fusion::at_c<0>(d3) << std::endl;

            return 0;
        }

    Including the commented line gives many pages of compilation errors, among which notably something like:

        no known conversion for argument 1 from ‘boost::fusion::pair’ to ‘double’
        no known conversion for argument 1 from ‘boost::fusion::pair’ to ‘float’

    Might it be that to_string needs the values of the map_data, and not the pairs? Though I am not good with templates, I tried to get a vector from a map using transform in the following way:

        template <typename P>
        struct take_second
        {
            typename P::second_type operator() (P p)
            {
                return p.second;
            }
        };

        // ... inside main()
        pair<char, int> ff(32);
        std::cout << "take_second (expect 32): " << take_second<pair<char,int>>()(ff) << std::endl;
        std::cout << "transform map_data and to_string: "
                  << to_string(boost::fusion::transform(d1, take_second<>())); //*FAIL No 2*

    But I don't know what types I am supposed to give when instantiating take_second, and anyway I think there must be an easier way to get (iterate over) the values of a map (is there?). If you answer this question, please also give your opinion on whether using an ASSOC_STRUCT or a map is better.

    Read the article

  • Anyone succeeded at injecting Interfaces into Entity Framework 4 Entities, using T4?

    - by Ciel
    Hello: POCO sort of leaves me wanting (how can I say I use DI/IoC, if the Repository is not the only place that is creating the entities?)... hence my desire to lock it down, get rid of the temptation of newing up POCOs or EntityObjects anywhere in the code, and just allow entity interfaces above the Repository/Factory layer.

    For a second there, I nearly thought I had it... I was editing EF4's T4 in order to inject in an interface definition. It was going swimmingly, compiled and worked, until I got to the Associations... I wrapped them with an ICollection, and renamed the underlying original collection with a prefix of Wrapped. Unfortunately, when run, it throws an error:

        //The Member 'WrappedSubExamples' in the CLR type 'XAct.App.Data.Model.EF4.Example' is not present in the conceptual model type 'XAct.App.Data.Model.Entity.Example'.
        var examples = context2.CreateObjectSet();

    My T4 segment I used was (this may not work, as it's the longest code snippet I've ever posted here... sorry):

        #region Generic Property Abstraction
        <# if (navProperty.ToEndMember.RelationshipMultiplicity == RelationshipMultiplicity.Many) {#>
        //XAct.App Generic Wrapper:
        <#=code.SpaceAfter(NewModifier(navProperty))#><#=Accessibility.ForProperty(navProperty)#> ICollection<I<#=MultiSchemaEscape(navProperty.ToEndMember.GetEntityType(), code)#>> <#=code.Escape(navProperty)#>
        {
            get {
                if (_X<#=code.Escape(navProperty)#> == null){
                    _X<#=code.Escape(navProperty)#> = new WrappedCollection,<#=MultiSchemaEscape(navProperty.ToEndMember.GetEntityType(), code)#>(this.<#=(navProperty.ToEndMember.RelationshipMultiplicity == RelationshipMultiplicity.Many)?"Wrapped":""#><#=code.Escape(navProperty)#>);
                }
                return _X<#=code.Escape(navProperty)#>;
            }
        }
        private ICollection _X<#=code.Escape(navProperty)#>;
        <# } else { #>
        <#=code.SpaceAfter(NewModifier(navProperty))#><#=Accessibility.ForProperty(navProperty)#> I<#=MultiSchemaEscape(navProperty.ToEndMember.GetEntityType(), code)#> <#=code.Escape(navProperty)#>
        {
            get {
                return (I<#=code.Escape(navProperty)#>)this.Wrapped<#=code.Escape(navProperty)#>;
            }
            set {
                this.Wrapped<#=code.Escape(navProperty)#> = value as <#=code.Escape(navProperty)#>;
            }
        }
        <# } #>
        #endregion

    which then wraps the original collection, renamed with the prefix 'Wrapped':

        /// <summary>
        /// <#=SummaryComment(navProperty)#>
        /// </summary><#=LongDescriptionCommentElement(navProperty, region.CurrentIndentLevel)#>
        [XmlIgnoreAttribute()]
        [SoapIgnoreAttribute()]
        [DataMemberAttribute()]
        [EdmRelationshipNavigationPropertyAttribute("<#=navProperty.RelationshipType.NamespaceName#>", "<#=navProperty.RelationshipType.Name#>", "<#=navProperty.ToEndMember.Name#>")]
        <# if (navProperty.ToEndMember.RelationshipMultiplicity == RelationshipMultiplicity.Many) { #>
        <#=code.SpaceAfter(NewModifier(navProperty))#><#=Accessibility.ForProperty(navProperty)#> EntityCollection<<#=MultiSchemaEscape(navProperty.ToEndMember.GetEntityType(), code)#>> Wrapped<#=code.Escape(navProperty)#>
        {
            <#=code.SpaceAfter(Accessibility.ForGetter(navProperty))#>get
            {
                return ((IEntityWithRelationships)this).RelationshipManager.GetRelatedCollection<<#=MultiSchemaEscape(navProperty.ToEndMember.GetEntityType(), code)#>>("<#=navProperty.RelationshipType.FullName#>", "<#=navProperty.ToEndMember.Name#>");
            }
            <#=code.SpaceAfter(Accessibility.ForSetter(navProperty))#>set
            {
                if ((value != null))
                {
                    ((IEntityWithRelationships)this).RelationshipManager.InitializeRelatedCollection<<#=MultiSchemaEscape(navProperty.ToEndMember.GetEntityType(), code)#>>("<#=navProperty.RelationshipType.FullName#>", "<#=navProperty.ToEndMember.Name#>", value);
                }
            }
        }
        <# } else { #>
        <#=code.SpaceAfter(NewModifier(navProperty))#><#=Accessibility.ForProperty(navProperty)#> <#=MultiSchemaEscape(navProperty.ToEndMember.GetEntityType(), code)#> Wrapped<#=code.Escape(navProperty)#>
        {
            <#=code.SpaceAfter(Accessibility.ForGetter(navProperty))#>get
            {
                return ((IEntityWithRelationships)this).RelationshipManager.GetRelatedReference<<#=MultiSchemaEscape(navProperty.ToEndMember.GetEntityType(), code)#>>("<#=navProperty.RelationshipType.FullName#>", "<#=navProperty.ToEndMember.Name#>").Value;
            }
            <#=code.SpaceAfter(Accessibility.ForSetter(navProperty))#>set
            {
                ((IEntityWithRelationships)this).RelationshipManager.GetRelatedReference<<#=MultiSchemaEscape(navProperty.ToEndMember.GetEntityType(), code)#>>("<#=navProperty.RelationshipType.FullName#>", "<#=navProperty.ToEndMember.Name#>").Value = value;
            }
        }
        <#
            string refPropertyName = navProperty.Name + "Reference";
            if (entity.Members.Any(m => m.Name == refPropertyName))
            {
                // 6017 is the same error number that EntityClassGenerator uses.
                Errors.Add(new System.CodeDom.Compiler.CompilerError(SourceCsdlPath, -1, -1, "6017",
                    String.Format(CultureInfo.CurrentCulture, GetResourceString("Template_ConflictingGeneratedNavPropName"),
                    navProperty.Name, entity.FullName, refPropertyName)));
            }
        #>
        /// <summary>
        /// <#=SummaryComment(navProperty)#>
        /// </summary><#=LongDescriptionCommentElement(navProperty, region.CurrentIndentLevel)#>
        [BrowsableAttribute(false)]
        [DataMemberAttribute()]
        <#=Accessibility.ForProperty(navProperty)#> EntityReference<<#=MultiSchemaEscape(navProperty.ToEndMember.GetEntityType(), code)#>> <#=refPropertyName#>
        {
            <#=code.SpaceAfter(Accessibility.ForGetter(navProperty))#>get
            {
                return ((IEntityWithRelationships)this).RelationshipManager.GetRelatedReference<<#=MultiSchemaEscape(navProperty.ToEndMember.GetEntityType(), code)#>>("<#=navProperty.RelationshipType.FullName#>", "<#=navProperty.ToEndMember.Name#>");
            }
            <#=code.SpaceAfter(Accessibility.ForSetter(navProperty))#>set
            {
                if ((value != null))
                {
                    ((IEntityWithRelationships)this).RelationshipManager.InitializeRelatedReference<<#=MultiSchemaEscape(navProperty.ToEndMember.GetEntityType(), code)#>>("<#=navProperty.RelationshipType.FullName#>", "<#=navProperty.ToEndMember.Name#>", value);
                }
            }
        }
        <# } #>

    The point is... it bugs out. I've tried various solutions; none worked. Any ideas -- or is this just a wild goose chase, and time to give it up?

    Read the article

  • Animation issue caused by C# parameters passed by reference rather than value, but where?

    - by Jordan Roher
    I'm having trouble with sprite animation in XNA that appears to be caused by a struct passed as a reference value. But I'm not using the ref keyword anywhere. I am, admittedly, a C# noob, so there may be some shallow bonehead error in here, but I can't see it.

    I'm creating 10 ants or bees and animating them as they move across the screen. I have an array of animation structs, and each time I create an ant or bee, I send it the animation array value it requires (just [0] or [1] at this time). Deep inside the animation struct is a timer that is used to change frames. The ant/bee class stores the animation struct as a private variable.

    What I'm seeing is that each ant or bee uses the same animation struct, the one I thought I was passing in and copying by value. So during Update(), when I advance the animation timer for each ant/bee, the next ant/bee has its animation timer advanced by that small amount. If there's 1 ant on screen, it animates properly. 2 ants, it runs twice as fast, and so on. Obviously, not what I want.

    Here's an abridged version of the code. How is BerryPicking's ActorAnimationGroupData[] getting shared between the BerryCreatures?

        class BerryPicking
        {
            private ActorAnimationGroupData[] animations;
            private BerryCreature[] creatures;
            private Dictionary<string, Texture2D> creatureTextures;
            private const int maxCreatures = 5;

            public BerryPickingExample()
            {
                this.creatures = new BerryCreature[maxCreatures];
                this.creatureTextures = new Dictionary<string, Texture2D>();
            }

            public void LoadContent()
            {
                // Returns data from an XML file
                Reader reader = new Reader();
                animations = reader.LoadAnimations();
                CreateCreatures();
            }

            // This is called from another function I'm not including because it's not relevant to the problem.
            // In it, I remove any creature that passes outside the viewport by setting its creatures[] spot to null.
            // Hence the if(creatures[i] == null) test is used to recreate "dead" creatures. Inelegant, I know.
            private void CreateCreatures()
            {
                for (int i = 0; i < creatures.Length; i++)
                {
                    if (creatures[i] == null)
                    {
                        // In reality, the name selection is randomized
                        creatures[i] = new BerryCreature("ant");

                        // Load content and texture (which I create elsewhere)
                        creatures[i].LoadContent(
                            FindAnimation(creatures[i].Name),
                            creatureTextures[creatures[i].Name]);
                    }
                }
            }

            private ActorAnimationGroupData FindAnimation(string animationName)
            {
                int yourAnimation = -1;
                for (int i = 0; i < animations.Length; i++)
                {
                    if (animations[i].name == animationName)
                    {
                        yourAnimation = i;
                        break;
                    }
                }
                return animations[yourAnimation];
            }

            public void Update(GameTime gameTime)
            {
                for (int i = 0; i < creatures.Length; i++)
                {
                    creatures[i].Update(gameTime);
                }
            }
        }

        class Reader
        {
            public ActorAnimationGroupData[] LoadAnimations()
            {
                ActorAnimationGroupData[] animationGroup;
                XmlReader file = new XmlTextReader(filename);
                // Do loading...
                // Then later
                file.Close();
                return animationGroup;
            }
        }

        class BerryCreature
        {
            private ActorAnimation animation;
            private string name;

            public BerryCreature(string name)
            {
                this.name = name;
            }

            public void LoadContent(ActorAnimationGroupData animationData, Texture2D sprite)
            {
                animation = new ActorAnimation(animationData);
                animation.LoadContent(sprite);
            }

            public void Update(GameTime gameTime)
            {
                animation.Update(gameTime);
            }
        }

        class ActorAnimation
        {
            private ActorAnimationGroupData animation;

            public ActorAnimation(ActorAnimationGroupData animation)
            {
                this.animation = animation;
            }

            public void LoadContent(Texture2D sprite)
            {
                this.sprite = sprite;
            }

            public void Update(GameTime gameTime)
            {
                animation.Update(gameTime);
            }
        }

        struct ActorAnimationGroupData
        {
            // There are lots of other members of this struct, but the timer is the only one I'm worried about.
            // TimerData is another struct
            private TimerData timer;

            public ActorAnimationGroupData()
            {
                timer = new TimerData(2);
            }

            public void Update(GameTime gameTime)
            {
                timer.Update(gameTime);
            }
        }

        struct TimerData
        {
            public float currentTime;
            public float maxTime;

            public TimerData(float maxTime)
            {
                this.currentTime = 0;
                this.maxTime = maxTime;
            }

            public void Update(GameTime gameTime)
            {
                currentTime += (float)gameTime.ElapsedGameTime.TotalSeconds;
                if (currentTime >= maxTime)
                {
                    currentTime = maxTime;
                }
            }
        }
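    To illustrate the value-vs-reference distinction at the heart of this question, here is a tiny self-contained C# sketch; the types (TimerStruct, TimerClass) are illustrative and not from the post:

        using System;

        struct TimerStruct { public float Current; }
        class TimerClass { public float Current; }

        class Demo
        {
            static void Main()
            {
                // Structs are copied on assignment: advancing the copy
                // leaves the original untouched.
                var s1 = new TimerStruct();
                var s2 = s1;                   // full copy
                s2.Current += 1f;
                Console.WriteLine(s1.Current); // prints 0

                // Classes are reference types: both variables see one object.
                var c1 = new TimerClass();
                var c2 = c1;                   // same instance
                c2.Current += 1f;
                Console.WriteLine(c1.Current); // prints 1
            }
        }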

    Read the article

  • Scrollbar still is painted after it should be removed

    - by Walter Williams
    I have the following custom control and can place it on a form (with AutoScroll set to true and the control anchored left, top and right). If the form is too short for the control, the form correctly resizes the control (to make room for the scroll) and displays the scroll bar. When the control is closed using the close glyph, the control is resized and the scroll bar is removed, but occasionally the scroll bar appears to remain painted. If the form is minimized or moved off-screen, the leftover paint is removed. I've tried Parent.Invalidate and have toyed with it in many ways but to no avail. Any suggestions? (Using VS 2008 Standard)

        using System;
        using System.ComponentModel;
        using System.Drawing;
        using System.Drawing.Drawing2D;
        using System.Windows.Forms;

        namespace GroupPanelTest
        {
            public class GroupPanel : GroupBox
            {
                #region Members
                private const Int32 iHeaderHeight = 20;
                private Int32 iFullHeight = 200;
                private Boolean bClosed = false;
                private Rectangle rectCloseGlyphBounds = Rectangle.Empty;
                private Boolean bIsMoveOverCloseGlyph = false;
                #endregion

                #region Properties
                [DefaultValue(false)]
                public Boolean Closed
                {
                    get { return (this.bClosed); }
                    set
                    {
                        if (this.bClosed != value)
                        {
                            this.bClosed = value;
                            if (this.bClosed)
                            {
                                this.iFullHeight = base.Height;
                                base.Height = GroupPanel.iHeaderHeight;
                            }
                            else
                            {
                                base.Height = this.iFullHeight;
                            }
                            foreach (Control con in base.Controls)
                                con.Visible = !this.bClosed;
                            this.Invalidate();
                        }
                    }
                }

                public new Int32 Height
                {
                    get { return (base.Height); }
                    set
                    {
                        if (value != base.Height)
                        {
                            if (this.Closed)
                            {
                                this.iFullHeight = value;
                            }
                            else
                            {
                                Int32 iOldHeight = base.Height;
                                base.Height = value;
                            }
                        }
                    }
                }

                [DefaultValue(typeof(Size), "350,200")]
                public new Size Size
                {
                    get { return (base.Size); }
                    set
                    {
                        if (base.Size != value)
                        {
                            base.Size = value;
                            if (!this.Closed)
                                this.iFullHeight = value.Height;
                        }
                    }
                }

                [DefaultValue(typeof(Padding), "0,7,0,0")]
                public new Padding Padding
                {
                    get { return (base.Padding); }
                    set { base.Padding = value; }
                }
                #endregion

                #region Construction
                public GroupPanel()
                {
                    SetStyle(ControlStyles.UserPaint, true);
                    SetStyle(ControlStyles.ResizeRedraw, true);
                    SetStyle(ControlStyles.AllPaintingInWmPaint, true);
                    SetStyle(ControlStyles.OptimizedDoubleBuffer, true);
                    SetStyle(ControlStyles.Selectable, true);
                    this.Size = new Size(350, 200);
                    this.Padding = new Padding(0, 7, 0, 0); // the groupbox will add to that
                    this.rectCloseGlyphBounds = new Rectangle(base.ClientSize.Width - 24, 2, 16, 16);
                }
                #endregion

                #region Overrides
                protected override void OnSizeChanged(EventArgs e)
                {
                    this.rectCloseGlyphBounds = new Rectangle(base.ClientSize.Width - 24, 2, 16, 16);
                    base.OnSizeChanged(e);
                }

                protected override void OnPaint(PaintEventArgs e)
                {
                    // we want all the delegates to receive the events, but we do this first so we can paint over it
                    base.OnPaint(e);
                    Graphics g = e.Graphics;
                    g.FillRectangle(SystemBrushes.Window, this.ClientRectangle);
                    Rectangle rectTitle = new Rectangle(0, 0, this.ClientRectangle.Width, GroupPanel.iHeaderHeight);
                    g.FillRectangle(SystemBrushes.Control, rectTitle);
                    g.DrawString(this.Text, this.Font, SystemBrushes.ControlText, new PointF(5.0f, 3.0f));
                    if (this.bIsMoveOverCloseGlyph)
                    {
                        g.FillRectangle(SystemBrushes.ButtonHighlight, this.rectCloseGlyphBounds);
                        Rectangle rectBorder = this.rectCloseGlyphBounds;
                        rectBorder.Inflate(-1, -1);
                        g.DrawRectangle(SystemPens.Highlight, rectBorder);
                    }
                    using (Pen pen = new Pen(SystemColors.ControlText, 1.6f))
                    {
                        if (this.Closed)
                        {
                            g.DrawLine(pen, this.rectCloseGlyphBounds.Left + 3, this.rectCloseGlyphBounds.Top + 3, this.rectCloseGlyphBounds.Left + 8, this.rectCloseGlyphBounds.Top + 8);
                            g.DrawLine(pen, this.rectCloseGlyphBounds.Left + 13, this.rectCloseGlyphBounds.Top + 3, this.rectCloseGlyphBounds.Left + 8, this.rectCloseGlyphBounds.Top + 8);
                            g.DrawLine(pen, this.rectCloseGlyphBounds.Left + 3, this.rectCloseGlyphBounds.Top + 7, this.rectCloseGlyphBounds.Left + 8, this.rectCloseGlyphBounds.Top + 12);
                            g.DrawLine(pen, this.rectCloseGlyphBounds.Left + 13, this.rectCloseGlyphBounds.Top + 7, this.rectCloseGlyphBounds.Left + 8, this.rectCloseGlyphBounds.Top + 12);
                        }
                        else
                        {
                            g.DrawLine(pen, this.rectCloseGlyphBounds.Left + 3, this.rectCloseGlyphBounds.Top + 8, this.rectCloseGlyphBounds.Left + 8, this.rectCloseGlyphBounds.Top + 3);
                            g.DrawLine(pen, this.rectCloseGlyphBounds.Left + 13, this.rectCloseGlyphBounds.Top + 8, this.rectCloseGlyphBounds.Left + 8, this.rectCloseGlyphBounds.Top + 3);
                            g.DrawLine(pen, this.rectCloseGlyphBounds.Left + 3, this.rectCloseGlyphBounds.Top + 12, this.rectCloseGlyphBounds.Left + 8, this.rectCloseGlyphBounds.Top + 7);
                            g.DrawLine(pen, this.rectCloseGlyphBounds.Left + 13, this.rectCloseGlyphBounds.Top + 12, this.rectCloseGlyphBounds.Left + 8, this.rectCloseGlyphBounds.Top + 7);
                        }
                    }
                }

                protected override void OnMouseDown(MouseEventArgs e)
                {
                    if (e.Button == MouseButtons.Left && this.rectCloseGlyphBounds.Contains(e.Location))
                        this.Closed = !this.Closed; // Closed will call Invalidate
                    base.OnMouseDown(e);
                }

                protected override void OnMouseMove(MouseEventArgs e)
                {
                    this.bIsMoveOverCloseGlyph = this.rectCloseGlyphBounds.Contains(e.Location);
                    this.Invalidate(this.rectCloseGlyphBounds);
                    base.OnMouseMove(e);
                }
                #endregion
            }
        }
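    For what it's worth, one hedged way to attack leftover paint like this (a sketch under assumptions, not a confirmed fix) is to invalidate the parent's region around the control whenever its size changes, since ControlStyles.ResizeRedraw only repaints the control itself, not the pixels it used to cover:

        // Sketch: force the parent to repaint the area around this control
        // after a resize, so pixels the control no longer covers get cleared.
        protected override void OnSizeChanged(EventArgs e)
        {
            this.rectCloseGlyphBounds = new Rectangle(base.ClientSize.Width - 24, 2, 16, 16);
            base.OnSizeChanged(e);
            if (this.Parent != null)
            {
                // Bounds is expressed in parent coordinates; inflating it a bit
                // also catches the scroll bar strip at the form's edge.
                Rectangle dirty = this.Bounds;
                dirty.Inflate(25, 25);
                this.Parent.Invalidate(dirty, true);
            }
        }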

    Read the article

  • Adding two different Objects by overloading operator+ C++

    - by lampshade
    Hello, I've been trying to figure out how to add a private member from object A to a private member from object B. Both the Cat and Dog classes inherit from the base class Animal. I have a third class, MyClass, that I want to inherit the private members of the Cat and Dog classes. So in MyClass, I have a friend function to overload the + operator. The friend function is declared as follows:

        MyClass operator+(const Dog &dObj, const Cat &cObj);

    I want to access dObj.age and cObj.age within the above function, invoked by this statement in main:

        mObj = dObj + cObj;

    Here is the entire source for a complete reference into the class objects:

        #include <iostream>
        #include <vld.h>
        using namespace std;

        class Animal
        {
        public:
            Animal() {};
            virtual void eat() = 0 {};
            virtual void walk() = 0 {};
        };

        class Dog : public Animal
        {
        public:
            Dog(const char * name, const char * gender, int age);
            Dog() : name(NULL), gender(NULL), age(0) {};
            virtual ~Dog();
            void eat();
            void bark();
            void walk();
        private:
            char * name;
            char * gender;
            int age;
        };

        class Cat : public Animal
        {
        public:
            Cat(const char * name, const char * gender, int age);
            Cat() : name(NULL), gender(NULL), age(0) {};
            virtual ~Cat();
            void eat();
            void meow();
            void walk();
        private:
            char * name;
            char * gender;
            int age;
        };

        class MyClass : private Cat, private Dog
        {
        public:
            MyClass() : action(NULL) {};
            void setInstance(Animal &newInstance);
            void doSomething();
            friend MyClass operator+(const Dog &dObj, const Cat &cObj);
        private:
            Animal * action;
        };

        Cat::Cat(const char * name, const char * gender, int age)
            : name(new char[strlen(name)+1]), gender(new char[strlen(gender)+1]), age(age)
        {
            if (name)
            {
                size_t length = strlen(name) + 1;
                strcpy_s(this->name, length, name);
            }
            else
                name = NULL;
            if (gender)
            {
                size_t length = strlen(gender) + 1;
                strcpy_s(this->gender, length, gender);
            }
            else
                gender = NULL;
            if (age)
            {
                this->age = age;
            }
        }

        Cat::~Cat()
        {
            delete name;
            delete gender;
            age = 0;
        }

        void Cat::walk() { cout << name << " is walking now.. " << endl; }
        void Cat::eat() { cout << name << " is eating now.. " << endl; }
        void Cat::meow() { cout << name << " says meow.. " << endl; }

        Dog::Dog(const char * name, const char * gender, int age)
            : name(new char[strlen(name)+1]), gender(new char[strlen(gender)+1]), age(age)
        {
            if (name)
            {
                size_t length = strlen(name) + 1;
                strcpy_s(this->name, length, name);
            }
            else
                name = NULL;
            if (gender)
            {
                size_t length = strlen(gender) + 1;
                strcpy_s(this->gender, length, gender);
            }
            else
                gender = NULL;
            if (age)
            {
                this->age = age;
            }
        }

        Dog::~Dog()
        {
            delete name;
            delete gender;
            age = 0;
        }

        void Dog::eat() { cout << name << " is eating now.. " << endl; }
        void Dog::bark() { cout << name << " says woof.. " << endl; }
        void Dog::walk() { cout << name << " is walking now.." << endl; }

        void MyClass::setInstance(Animal &newInstance)
        {
            action = &newInstance;
        }

        void MyClass::doSomething()
        {
            action->walk();
            action->eat();
        }

        MyClass operator+(const Dog &dObj, const Cat &cObj)
        {
            MyClass A;
            //dObj.age;
            //cObj.age;
            return A;
        }

        int main()
        {
            MyClass mObj;
            Dog dObj("B", "Male", 4);
            Cat cObj("C", "Female", 5);

            mObj.setInstance(dObj); // set the instance specific to the object.
            mObj.doSomething();     // something happens based on which object is passed in
            dObj.bark();

            mObj.setInstance(cObj);
            mObj.doSomething();
            cObj.meow();

            mObj = dObj + cObj;
            return 0;
        }

    Read the article

  • Login form with Java/SQLite

    - by tuxou
    Hi, I would like to create a login form for my application with the ability to add or remove users from an SQLite database. I have created the table users(nam, pass), but I can't include it in my login form. If someone could help me, this is my login code:

        import java.awt.*;
        import java.awt.event.*;
        import javax.swing.*;

        public class login extends JFrame
        {
            // Variables declaration
            private JLabel jLabel1;
            private JLabel jLabel2;
            private JTextField jTextField1;
            private JPasswordField jPasswordField1;
            private JButton jButton1;
            private JPanel contentPane;
            // End of variables declaration

            public login()
            {
                super();
                create();
                this.setVisible(true);
            }

            private void create()
            {
                jLabel1 = new JLabel();
                jLabel2 = new JLabel();
                jTextField1 = new JTextField();
                jPasswordField1 = new JPasswordField();
                jButton1 = new JButton();
                contentPane = (JPanel)this.getContentPane();
                //
                // jLabel1
                //
                jLabel1.setHorizontalAlignment(SwingConstants.LEFT);
                jLabel1.setForeground(new Color(0, 0, 255));
                jLabel1.setText("username:");
                //
                // jLabel2
                //
                jLabel2.setHorizontalAlignment(SwingConstants.LEFT);
                jLabel2.setForeground(new Color(0, 0, 255));
                jLabel2.setText("password:");
                //
                // jTextField1
                //
                jTextField1.setForeground(new Color(0, 0, 255));
                jTextField1.setSelectedTextColor(new Color(0, 0, 255));
                jTextField1.setToolTipText("Enter your username");
                jTextField1.addActionListener(new ActionListener() {
                    public void actionPerformed(ActionEvent e)
                    {
                        jTextField1_actionPerformed(e);
                    }
                });
                //
                // jPasswordField1
                //
                jPasswordField1.setForeground(new Color(0, 0, 255));
                jPasswordField1.setToolTipText("Enter your password");
                jPasswordField1.addActionListener(new ActionListener() {
                    public void actionPerformed(ActionEvent e)
                    {
                        jPasswordField1_actionPerformed(e);
                    }
                });
                //
                // jButton1
                //
                jButton1.setBackground(new Color(204, 204, 204));
                jButton1.setForeground(new Color(0, 0, 255));
                jButton1.setText("Login");
                jButton1.addActionListener(new ActionListener() {
                    public void actionPerformed(ActionEvent e)
                    {
                        jButton1_actionPerformed(e);
                    }
                });
                //
                // contentPane
                //
                contentPane.setLayout(null);
                contentPane.setBorder(BorderFactory.createEtchedBorder());
                contentPane.setBackground(new Color(204, 204, 204));
                addComponent(contentPane, jLabel1, 5, 10, 106, 18);
                addComponent(contentPane, jLabel2, 5, 47, 97, 18);
                addComponent(contentPane, jTextField1, 110, 10, 183, 22);
                addComponent(contentPane, jPasswordField1, 110, 45, 183, 22);
                addComponent(contentPane, jButton1, 150, 75, 83, 28);
                //
                // login
                //
                this.setTitle("Login To Members Area");
                this.setLocation(new Point(76, 182));
                this.setSize(new Dimension(335, 141));
                this.setDefaultCloseOperation(WindowConstants.EXIT_ON_CLOSE);
                this.setResizable(false);
            }

            /** Add Component Without a Layout Manager (Absolute Positioning) */
            private void addComponent(Container container, Component c, int x, int y, int width, int height)
            {
                c.setBounds(x, y, width, height);
                container.add(c);
            }

            private void jTextField1_actionPerformed(ActionEvent e) { }

            private void jPasswordField1_actionPerformed(ActionEvent e) { }

            private void jButton1_actionPerformed(ActionEvent e)
            {
                System.out.println("\njButton1_actionPerformed(ActionEvent e) called.");
                String username = new String(jTextField1.getText());
                String password = new String(jPasswordField1.getText());

                if (username.equals("") || password.equals("")) // If password and username is empty Do this
                {
                    jButton1.setEnabled(false);
                    JLabel errorFields = new JLabel("You must enter a username and password to login.");
                    JOptionPane.showMessageDialog(null, errorFields);
                    jTextField1.setText("");
                    jPasswordField1.setText("");
                    jButton1.setEnabled(true);
                    this.setVisible(true);
                }
                else
                {
                    JLabel optionLabel = new JLabel("You entered " + username + " as your username. Is this correct?");
                    int confirm = JOptionPane.showConfirmDialog(null, optionLabel);
                    switch (confirm) { // Switch Case
                        case JOptionPane.YES_OPTION:
                            // Attempt to Login user
                            jButton1.setEnabled(false); // Set button enable to false to prevent 2 login attempts
                            break;
                        case JOptionPane.NO_OPTION:
                            // No Case. (Go back. Set text to 0)
                            jButton1.setEnabled(false);
                            jTextField1.setText("");
                            jPasswordField1.setText("");
                            jButton1.setEnabled(true);
                            break;
                        case JOptionPane.CANCEL_OPTION:
                            // Cancel Case. (Go back. Set text to 0)
                            jButton1.setEnabled(false);
                            jTextField1.setText("");
                            jPasswordField1.setText("");
                            jButton1.setEnabled(true);
                            break;
                    } // End Switch Case
                }
            }

            public static void main(String[] args)
            {
                JFrame.setDefaultLookAndFeelDecorated(true);
                JDialog.setDefaultLookAndFeelDecorated(true);
                try
                {
                    UIManager.setLookAndFeel("com.sun.java.swing.plaf.windows.WindowsLookAndFeel");
                }
                catch (Exception ex)
                {
                    System.out.println("Failed loading L&F: ");
                    System.out.println(ex);
                }
                new login();
            }
        }

    Read the article

  • PHP - My array returns NULL values when placed in a function, but works fine outside of the function

    - by orbit82
    Okay, let me see if I can explain this. I am making a newspaper WordPress theme. The theme pulls posts from categories. The front page shows multiple categories, organized as "newsboxes". Each post should show up only ONCE on the front page, even if said post is in two or more categories. To prevent posts from duplicating on the front page, I've created an array that keeps track of the individual post IDs. When a post FIRST shows up on the front page, its ID gets added to the array. Before looping through the posts for each category, the code first checks the array to see which posts have ALREADY been displayed. OK, so now remember how I said earlier that the front page shows multiple categories organized as "newsboxes"? Well, these newsboxes are called onto the front page using PHP includes. I have 6 newsboxes appearing on the front page, and the code to call them is EXACTLY the same. I didn't want to repeat the same code 6 times, so I put all of the inclusion code into a function. The function works, but the only problem is that it screws up the duplicate posts code I mentioned earlier. The posts all repeat. Running a var_dump on the $do_not_duplicate variable returns an array with null indices. Everything works PERFECTLY if I don't put the code inside a function, but once I do put them in a function it's like the arrays aren't even connecting with the posts. Here is the code with the arrays. The key variables in question here include $do_not_duplicate[] = $post-ID, $do_not_duplicate and 'post__not_in' = $do_not_duplicate <?php query_posts('cat='.$settings['cpress_top_story_category'].'&posts_per_page='.$settings['cpress_number_of_top_stories'].'');?> <?php if (have_posts()) : ?> <!--TOP STORY--> <div id="topStory"> <?php while ( have_posts() ) : the_post(); $do_not_duplicate[] = $post->ID; ?> <a href="<?php the_permalink() ?>" rel="bookmark" title="Permanent Link to <?php the_title_attribute(); ?>"><?php the_post_thumbnail('top-story-thumbnail'); ?></a> <h2 class="extraLargeHeadline"><a href="<?php the_permalink() ?>" rel="bookmark" title="Permanent Link to <?php the_title_attribute(); ?>"><?php the_title(); ?></a></h2> <div class="topStory_author"><?php cpress_show_post_author_byline(); ?></div> <div <?php post_class('topStory_entry') ?> id="post-<?php the_ID(); ?>"> <?php if($settings['cpress_excerpt_or_content_top_story_newsbox'] == "content") { the_content(); ?><a href="<?php the_permalink(); ?>" title="<?php the_title_attribute(); ?>"><span class="read_more"><?php echo $settings['cpress_more_text']; ?></span></a> <?php } else { the_excerpt(); ?><a href="<?php the_permalink(); ?>" title="<?php the_title_attribute(); ?>"><span class="read_more"><?php echo $settings['cpress_more_text']; ?></span></a> <?php }?> </div><!--/topStoryentry--> <div class="topStory_meta"><?php cpress_show_post_meta(); ?></div> <?php endwhile; wp_reset_query(); ?> <?php if(!$settings['cpress_hide_top_story_more_stories']) { ?> <!--More Top Stories--><div id="moreTopStories"> <?php $category_link = get_category_link(''.$settings['cpress_top_story_category'].''); ?> <?php if (have_posts()) : ?> <?php query_posts( array( 'cat' => ''.$settings['cpress_top_story_category'].'', 'posts_per_page' => ''.$settings['cpress_number_of_more_top_stories'].'', 'post__not_in' => $do_not_duplicate ) ); ?> <h4 class="moreStories"> <?php if($settings['cpress_make_top_story_more_stories_link']) { ?> <a href="<?php echo $category_link; ?>" title="<?php echo strip_tags($settings['cpress_top_story_more_stories_text']);?>"><?php 
echo strip_tags($settings['cpress_top_story_more_stories_text']);?></a><?php } else { echo strip_tags($settings['cpress_top_story_more_stories_text']); } ?> </h4> <ul> <?php while( have_posts() ) : the_post(); $do_not_duplicate[] = $post->ID; ?> <li><h2 class="mediumHeadline"><a href="<?php the_permalink() ?>" rel="bookmark" title="Permanent Link to <?php the_title_attribute(); ?>"><?php the_title(); ?></a></h2> <?php if(!$settings['cpress_hide_more_top_stories_excerpt']) { ?> <div <?php post_class('moreTopStory_postExcerpt') ?> id="post-<?php the_ID(); ?>"><?php if($settings['cpress_excerpt_or_content_top_story_newsbox'] == "content") { the_content(); ?><a href="<?php the_permalink(); ?>" title="<?php the_title_attribute(); ?>"><span class="read_more"><?php echo $settings['cpress_more_text']; ?></span></a> <?php } else { the_excerpt(); ?> <a href="<?php the_permalink(); ?>" title="<?php the_title_attribute(); ?>"><span class="read_more"><?php echo $settings['cpress_more_text']; ?></span></a> <?php }?> </div><?php } ?> <div class="moreTopStory_postMeta"><?php cpress_show_post_meta(); ?></div> </li> <?php endwhile; wp_reset_query(); ?> </ul> <?php endif;?> </div><!--/moreTopStories--> <?php } ?> <?php echo(var_dump($do_not_duplicate)); ?> </div><!--/TOP STORY--> <?php endif; ?> And here is the code that includes the newsboxes onto the front page. This is the code I'm trying to put into a function to avoid duplicating it 6 times on one page. function cpress_show_templatebitsf($tbit_num, $tbit_option) { global $tbit_path; global $shortname; $settings = get_option($shortname.'_options'); //display the templatebits (usually these will be sidebars) for ($i=1; $i<=$tbit_num; $i++) { $tbit = strip_tags($settings[$tbit_option .$i]); if($tbit !="") { include_once(TEMPLATEPATH . $tbit_path. $tbit.'.php'); } //if }//for loop unset($tbit_option); } I hope this makes sense. It's kind of a complex thing to explain but I've tried many things to fix it and have had no luck. I'm stumped. I'm hoping it's just some little thing I'm overlooking because it seems like it shouldn't be such a problem.
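One likely culprit is PHP variable scope: when the includes run inside cpress_show_templatebitsf(), any $do_not_duplicate array the included newsbox templates build lives in the function's local scope and is discarded when the function returns, which would match the var_dump showing an empty array. A minimal sketch of one possible fix, assuming the newsbox templates reference $do_not_duplicate directly:

function cpress_show_templatebitsf($tbit_num, $tbit_option) {
    global $tbit_path;
    global $shortname;
    global $do_not_duplicate; // share one tracker array across all six newsboxes
    if (!is_array($do_not_duplicate)) {
        $do_not_duplicate = array(); // initialize once, before the first newsbox renders
    }
    $settings = get_option($shortname.'_options');
    // display the templatebits (usually these will be sidebars)
    for ($i = 1; $i <= $tbit_num; $i++) {
        $tbit = strip_tags($settings[$tbit_option.$i]);
        if ($tbit != "") {
            // include, not include_once, so the same template file can be
            // rendered by more than one newsbox in a single page load
            include(TEMPLATEPATH . $tbit_path . $tbit.'.php');
        }
    }
    unset($tbit_option);
}

Since files included inside a function inherit that function's variable scope, pulling the global in before the includes lets every newsbox read and extend the same array.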

    Read the article

  • View bound to paged collection view not updating all of the time.

    - by Thomas
I'm new to Silverlight and am trying to build a business application using the MVVM pattern and RIA Services. I have a view model class that contains a PagedCollectionView, and it is set as the ItemsSource of a DataGrid. When I update the PagedCollectionView, the DataGrid is only updated the first time; after that, subsequent changes to the data do not show up in the view until after another edit. Things seem to be delayed by one edit. Below is a summarized example of my XAML and code-behind.

This is the code for my view model:

public class CustomerContactLinks : INotifyPropertyChanged { private ObservableCollection<CustomerContactLink> _CustomerContact; public ObservableCollection<CustomerContactLink> CustomerContact { get { if (_CustomerContact == null) _CustomerContact = new ObservableCollection<CustomerContactLink>(); return _CustomerContact; } set { _CustomerContact = value; } } private PagedCollectionView _CustomerContactPaged; public PagedCollectionView CustomerContactPaged { get { if (_CustomerContactPaged == null) _CustomerContactPaged = new PagedCollectionView(CustomerContact); return _CustomerContactPaged; } } private TicketSystemDataContext _ctx; public TicketSystemDataContext ctx { get { if (_ctx == null) _ctx = new TicketSystemDataContext(); return _ctx; } } public void GetAll() { ctx.Load(ctx.GetCustomerContactInfoQuery(), LoadCustomerContactsComplete, null); } private void LoadCustomerContactsComplete(LoadOperation<CustomerContactLink> lo) { foreach (var entity in lo.Entities) { CustomerContact.Add(entity as CustomerContactLink); } } #region INotifyPropertyChanged Members public event PropertyChangedEventHandler PropertyChanged; private void RaisePropertyChanged(string propertyName) { if (PropertyChanged != null) { this.PropertyChanged(this, new PropertyChangedEventArgs(propertyName)); } } #endregion }

Here are the basics of my XAML:

<Data:DataGrid x:Name="GridCustomers" MinHeight="100" MaxWidth="1000" IsReadOnly="True" AutoGenerateColumns="False"> <Data:DataGrid.Columns> <Data:DataGridTextColumn Header="First Name" Binding="{Binding Customer.FirstName}" Width="105" /> <Data:DataGridTextColumn Header="MI" Binding="{Binding Customer.MiddleName}" Width="35" /> <Data:DataGridTextColumn Header="Last Name" Binding="{Binding Customer.LastName}" Width="105"/> <Data:DataGridTextColumn Header="Address1" Binding="{Binding Contact.Address1}" Width="130"/> <Data:DataGridTextColumn Header="Address2" Binding="{Binding Contact.Address2}" Width="130"/> <Data:DataGridTextColumn Header="City" Binding="{Binding Contact.City}" Width="110"/> <Data:DataGridTextColumn Header="State" Binding="{Binding Contact.State}" Width="50"/> <Data:DataGridTextColumn Header="Zip" Binding="{Binding Contact.Zip}" Width="45"/> <Data:DataGridTextColumn Header="Home" Binding="{Binding Contact.PhoneHome}" Width="85"/> <Data:DataGridTextColumn Header="Cell" Binding="{Binding Contact.PhoneCell}" Width="85"/> <Data:DataGridTextColumn Header="Email" Binding="{Binding Contact.Email}" Width="118"/> </Data:DataGrid.Columns> </Data:DataGrid> <DataForm:DataForm x:Name="CustomerDetails" Header="Customer Details" AutoGenerateFields="False" AutoEdit="False" AutoCommit="False" CommandButtonsVisibility="Edit" Width="1000" Margin="0,5,0,0"> <DataForm:DataForm.EditTemplate> </DataForm:DataForm.EditTemplate> </DataForm:DataForm>

And here is my code-behind:

public Customers() { InitializeComponent(); BusyDialogIndicator.IsBusy = true; Loaded += new RoutedEventHandler(Customers_Loaded); CustomerDetails.BeginningEdit += new
EventHandler(CustomerDetails_BeginningEdit); } void CustomerDetails_BeginningEdit(object sender, System.ComponentModel.CancelEventArgs e) { CustomerContacts.CustomerContactPaged.EditItem(CustomerDetails.CurrentItem); } private void Customers_Loaded(object sender, RoutedEventArgs e) { CustomerContacts = new CustomerContactLinks(); CustomerContacts.GetAll(); GridCustomers.ItemsSource = CustomerContacts.CustomerContactPaged; GridCustomerPager.Source = CustomerContacts.CustomerContactPaged; GridCustomers.SelectionChanged += new SelectionChangedEventHandler(GridCustomers_SelectionChanged); BusyDialogIndicator.IsBusy = false; } void GridCustomers_SelectionChanged(object sender, SelectionChangedEventArgs e) { CustomerDetails.CurrentItem = GridCustomers.SelectedItem as CustomerContactLink; } private void SaveChanges_Click(object sender, RoutedEventArgs e) { if (WebContext.Current.User.IsAuthenticated) { bool commited = CustomerDetails.CommitEdit(); if (commited && (!CustomerDetails.IsItemChanged && CustomerDetails.IsItemValid)) { CustomerContacts.Update(CustomerDetails.CurrentItem as CustomerContactLink); CustomerContacts.ctx.SubmitChanges(); CustomerContacts.CustomerContactPaged.CommitEdit(); CustomerContacts.CustomerContactPaged.Refresh(); (GridCustomers.ItemsSource as PagedCollectionView).Refresh(); } } }
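One hedged possibility, sketched under the assumption that the one-edit lag comes from the PagedCollectionView edit transaction (opened by EditItem in CustomerDetails_BeginningEdit) still being pending when the save runs: the collection view defers its change notifications between EditItem() and CommitEdit(), so closing that transaction before submitting, and refreshing only after the asynchronous submit actually completes, may bring the grid back in sync:

private void SaveChanges_Click(object sender, RoutedEventArgs e)
{
    if (!WebContext.Current.User.IsAuthenticated) return;
    if (CustomerDetails.CommitEdit() && CustomerDetails.IsItemValid)
    {
        // close the transaction opened by EditItem() in CustomerDetails_BeginningEdit
        CustomerContacts.CustomerContactPaged.CommitEdit();
        CustomerContacts.Update(CustomerDetails.CurrentItem as CustomerContactLink);
        // SubmitChanges is asynchronous; refresh in its callback rather than
        // immediately after the call, or the view refreshes against stale data
        CustomerContacts.ctx.SubmitChanges(submitOp =>
        {
            if (!submitOp.HasError)
                CustomerContacts.CustomerContactPaged.Refresh();
        }, null);
    }
}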

    Read the article

  • std::vector optimisation required

    - by marcp
I've written a routine that uses std::vector<double> rather heavily. It runs rather slowly and AQTime seems to imply that I am constructing mountains of vectors, but I'm not sure why I would be. For some context, my sample run iterates 10 times. Each iteration copies 3 C arrays of ~400 points into vectors and creates 3 new same-sized vectors for output. Each output point might be the result of summing up to 20 points from 2 of the input vectors, which works out to a worst case of 10*400*3*2*20 = 480,000 dereferences. Incredibly, the profiler indicates that some of the std:: methods are being called 46 MILLION times. I suspect I'm doing something wrong! Some code:

vector<double>gdbChannel::GetVector() { if (fHaveDoubleData & (fLength > 0)) { double * pD = getDoublePointer(); vector<double>v(pD, pD + fLength); return v; } else { throw(Exception("attempt to retrieve vector on empty line")); ; } }

void gdbChannel::SaveVector(GX_HANDLE _hLine, const vector<double> & V) { if (hLine != _hLine) { GetLine(_hLine, V.size(), true); } GX_DOUBLE * pData = getDoublePointer(); memcpy(pData, &V[0], V.size()*sizeof(V[0])); ReplaceData(); }

///This routine gets called 10 times
bool SpecRatio::DoWork(GX_HANDLE_PTR pLine) { if (!(hKin.GetLine(*pLine, true) && hUin.GetLine(*pLine, true) && hTHin.GetLine(*pLine, true))) { return true; } vector<double>vK = hKin.GetVector(); vector<double>vU = hUin.GetVector(); vector<double>vTh = hTHin.GetVector(); if ((vK.size() == 0) || (vU.size() == 0) || (vTh.size() == 0)) { return true; } ///TODO: confirm all vectors the same length len = vK.size(); vUK.clear(); // these 3 vectors are declared as private class members vUTh.clear(); vThK.clear(); vUK.reserve(len); vUTh.reserve(len); vThK.reserve(len); // TODO: ensure everything is same fidincr, fidstart and length for (int i = 0; i < len; i++) { if (vK.at(i) < MinK) { vUK.push_back(rDUMMY); vUTh.push_back(rDUMMY); vThK.push_back(rDUMMY); } else { vUK.push_back(RatioPoint(vU, vK, i, UMin, KMin)); vUTh.push_back(RatioPoint(vU, vTh, i, UMin, ThMin)); vThK.push_back(RatioPoint(vTh, vK, i, ThMin, KMin)); } } hUKout.setFidParams(hKin); hUKout.SaveVector(*pLine, vUK); hUTHout.setFidParams(hKin); hUTHout.SaveVector(*pLine, vUTh); hTHKout.setFidParams(hKin); hTHKout.SaveVector(*pLine, vThK); return TestError(); }

double SpecRatio::VValue(vector<double>V, int Index) { double result; if ((Index < 0) || (Index >= len)) { result = 0; } else { try { result = V.at(Index); if (OasisUtils::isDummy(result)) { result = 0; } } catch (out_of_range) { result = 0; } } return result; }

double SpecRatio::RatioPoint(vector<double>Num, vector<double>Denom, int Index, double NumMin, double DenomMin) { double num = VValue(Num, Index); double denom = VValue(Denom, Index); int s = 0; // Search equalled 10 in this case while (((num < NumMin) || (denom < DenomMin)) && (s < Search)) { num += VValue(Num, Index - s) + VValue(Num, Index + s); denom += VValue(Denom, Index - s) + VValue(Denom, Index + s); s++; } if ((num < NumMin) || (denom < DenomMin)) { return rDUMMY; } else { return num / denom; } }

The top AQTime offenders are:
std::_Uninit_copy<double *, double *, std::allocator<double> > - 3.65 secs and 115731 Hits
std::_Construct - 1.69 secs and 46450637 Hits
std::_Vector_const_iterator<double, std::allocator<double> >::operator != - 1.66 secs and 46566395 Hits
and so on...
std::allocator<double>::construct, operator new, std::_Vector_const_iterator<double, std::allocator<double> >::operator ++, std::_Vector_const_iterator<double, std::allocator<double> >::operator * std::_Vector_const_iterator<double, std::allocator<double> >::operator == each get called over 46 million times. I'm obviously doing something wrong to cause all these objects to be created. Can anyone see my error(s)?
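Those call counts line up with vectors being copied by value: VValue() and RatioPoint() both take their vectors by value, so every call copies a ~400-element vector, and RatioPoint() can call VValue() dozens of times per output point. A hedged sketch of the same two routines taking const references instead (len, Search, rDUMMY and OasisUtils::isDummy are the class members and helpers from the code above):

#include <vector>

double SpecRatio::VValue(const std::vector<double>& V, int Index) {
    // bounds are checked explicitly, so operator[] suffices -- no at(), no try/catch
    if (Index < 0 || Index >= len) return 0;
    double result = V[Index];
    return OasisUtils::isDummy(result) ? 0 : result;
}

double SpecRatio::RatioPoint(const std::vector<double>& Num,
                             const std::vector<double>& Denom,
                             int Index, double NumMin, double DenomMin) {
    double num = VValue(Num, Index);
    double denom = VValue(Denom, Index);
    int s = 0;
    while (((num < NumMin) || (denom < DenomMin)) && (s < Search)) {
        num += VValue(Num, Index - s) + VValue(Num, Index + s);
        denom += VValue(Denom, Index - s) + VValue(Denom, Index + s);
        s++;
    }
    return ((num < NumMin) || (denom < DenomMin)) ? rDUMMY : num / denom;
}

GetVector() returning by value is usually fine (the compiler can elide that copy), but the by-value parameters above are the plausible source of the 46 million element constructions.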

    Read the article

  • jQuery works in FF but not in Safari

    - by Hristo
    I have some event handlers that work in FF and not in Safari. Simply put, I have a list of friends, some hard-coded, some pulled in from a database. Clicking on a buddy opens a chat window... this is much like the Facebook chat system. So in Firefox, everything works normally and as expected. In Safari, clicking on buddies that are hard-coded works fine, but clicking on buddies that are pulled in from the database doesn't pull up the chat window. <script type="text/javascript" src="js/jQuery.js"></script> <script type="text/javascript" src="js/chat.js"></script> <script type="text/javascript" src="js/ChatBar.js"></script> <script type="text/javascript" src="js/settings.js"></script> <script type="text/javascript"> var chat = new Chat(); var from = <?php echo "'" .$_SESSION['userid'] . "'"; ?>; chat.getUsers(<?php echo "'" .$_SESSION['userid'] . "'"; ?>); </script> So I load all my buddies with chat.getUsers. That function is: // get list of friends function getBuddyList(userName) { userNameID = userName; $.ajax({ type: "GET", url: "buddyList.php", data: { 'userName': userName, 'current': numOfUsers }, dataType: "json", cache: false, success: function(data) { if (numOfUsers != data.numOfUsers) { numOfUsers = data.numOfUsers; var list = "<li><span>Agents</span></li>"; for (var i = 0; i < data.friendlist.length; i++) { list += "<li><a class=\"buddy\" href=\"#\"><img alt=\"\" src=\"images/chat-thumb.gif\">"+ data.friendlist[i] +"</a></li>"; } $('#friend-list ul').append($(list)); } setTimeout('getBuddyList(userNameID)', 1000); } }); } buddyList.php just pulls in the Users from the database and returns an array with the user names. So the jQuery for clicking a buddy is: // click on buddy in #friends-panel $('#friends-panel a.buddy').click(function() { alert("Loaded"); // close #friends-panel $('.subpanel').hide(); $('#friends-panel a.chat').removeClass('active'); // if a chat window is already active, close it and deactivate $('#mainpanel li[class="active-buddy-tab"] div').not('#chat-box').removeAttr('id'); $('#mainpanel li[class="active-buddy-tab"]').removeClass('active-buddy-tab').addClass('buddy-tab'); // create active buddy chat window $('#mainpanel').append('<li class="active-buddy-tab"><a class="buddy-tab" href="#"></a><div id="chat-window"><h3><p id="to"></p></h3></div></li>'); // create name and close/minimize buttons $('.active-buddy-tab div h3 p#to').text($(this).text()); $('.active-buddy-tab div h3').append('<span class="close"> X </span><span class="minimize"> &ndash; </span>'); $('.active-buddy-tab').append('<span class="close"> X </span>'); // create chat area $('.active-buddy-tab div').append('<div id="chat-box"></div><form id="chat-message"><textarea id="message" maxlength="100"></textarea></form>'); // put curser in chat window $('.active-buddy-tab #message').focus(); // create a chat relationship return false; }); ... and the basic structure of the HTML is: <div id="footpanel"> <ul id="mainpanel"> <li id="friends-panel"> <a href="#" class="chat">Friends (<strong>18</strong>) </a> <div id="friend-list" class="subpanel"> <h3><span> &ndash; </span>Friends Online</h3> <ul> <li><span>Family Members</span></li> <!-- Hard coded buddies --> <li><a href="#" class="buddy"><img src="images/chat-thumb.gif" alt="" /> Your Friend 1</a></li> <li><a href="#" class="buddy"><img src="images/chat-thumb.gif" alt="" /> Your Friend </a></li> <!-- buddies will be added in dynamically here --> </ul> </div> </li> </ul> </div> I'm not too sure where to begin solving this issue. 
I thought it might be a rendering bug or something with the DOM but I've been staring at this code all day and I'm stuck. Any ideas on why it works in FF and not in Safari? btw... I'm testing on Snow Leopard. Thanks, Hristo
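One hedged possibility, a sketch rather than a confirmed diagnosis: .click() only binds to elements that exist at the moment it runs, so buddies appended later by the getBuddyList() poll never receive the handler, and the browser difference may simply be timing. Delegating the handler, with the jQuery APIs of this era, covers the hard-coded and the AJAX-loaded anchors alike:

// let the event bubble and match any current or future a.buddy element
$('#friends-panel a.buddy').live('click', function() {
    // ... same chat-window code as in the handler above ...
    return false;
});

// or, scoped to the list container (jQuery 1.4.2+):
$('#friend-list').delegate('a.buddy', 'click', function() {
    // ... same chat-window code as in the handler above ...
    return false;
});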

    Read the article

  • Trying to add data to sql from link click and return results via jquery or ajax

    - by Jay Schires
I am not familiar with jQuery or AJAX, but I do know that's what's needed to perform the action I want. I have created a WordPress plugin that updates a database table based on the user's click. Right now it refreshes the page to return the results, but I want to stop the page refresh and return the data via AJAX instead. If anyone is interested in helping me figure this out I would be very appreciative, or even willing to pay. Thanks! Here is the plugin code:

function BoardLikeItGetDelim($postid) { global $wp_rewrite; if($wp_rewrite->using_permalinks()) { if(isset($_GET['mbpost'])) return "?mbpost=".$postid."&"; return "?"; } else { if(isset($_GET['mbpost'])) return "&mbpost=".$postid."&"; return "&"; } }

function AddBoardLikeItButton($postid) { global $user_ID; if(isset($_GET['board-like-it-action']) && $_GET['board-like-it-action'] == "like" && $_GET['bpid'] == $postid) BoardLikeItLike($user_ID, $_GET['bpid']); if(isset($_GET['board-like-it-action']) && $_GET['board-like-it-action'] == "unlike" && $_GET['bpid'] == $postid) BoardLikeItUnLike($user_ID, $_GET['bpid']); $num_likes = BoardLikeItGetNumLikes($postid); if(!BoardLikeItIsLiked($user_ID, $postid)) echo "<a href='".BoardLikeItGetDelim($postid)."board-like-it-action=like&bpid=".$postid."#mngl-board-post-message-".$postid."'>Like</a> ".$num_likes."" . "<br/>"; else echo "<a href='".BoardLikeItGetDelim($postid)."board-like-it-action=unlike&bpid=".$postid."#mngl-board-post-message-".$postid."'>Un-Like</a> " . "<br/><span style='display: inline-block; padding: 0px; bottom: -5px; position: relative; border: 0px;'><img src='". get_bloginfo('wpurl')."/wp-content/plugins/board-like-it/top-up.png' /></span><div style='-moz-border-radius: 4px; -khtml-border-radius: 4px; -webkit-border-radius: 4px; font-family: Verdana, Geneva, sans-serif; font-size: 10px; color: #000; background-color: #B8C9DB; width: 90%; margin: 0px; display: block; padding-top: 4px; padding-right: 5px; padding-bottom: 4px; padding-left: 6px;'>" . "<img src='". get_bloginfo('wpurl')."/wp-content/plugins/board-like-it/thumb_up.png'/> " .BoardLikeItShowLikers($postid). "like this." . "</div>"; }

function BoardLikeItShowLikers($postid) { global $wpdb; $result = $wpdb->get_var($wpdb->prepare("SELECT `likers` FROM ".BoardLikeItGetDBName()." WHERE `mngl_id` = {$postid}")); $results = explode(',', $result); $names = ""; if($results[0] != "") foreach($results as $r) { $userinfo = get_usermeta($r, 'user_login'); $names .= $userinfo.", "; } return $names; }

function BoardLikeItGetNumLikes($postid) { global $wpdb; $result = $wpdb->get_var($wpdb->prepare("SELECT `likers` FROM ".BoardLikeItGetDBName()." WHERE `mngl_id` = {$postid}")); $results = explode(',', $result); if($results[0] != '') return count($results)."<br/><span style='display: inline-block; padding: 0px; bottom: -5px; position: relative; border: 0px;'><img src='". get_bloginfo('wpurl')."/wp-content/plugins/board-like-it/top-up.png' /></span><div style='-moz-border-radius: 4px; -khtml-border-radius: 4px; -webkit-border-radius: 4px; font-family: Verdana, Geneva, sans-serif; font-size: 10px; color: #000; background-color: #B8C9DB; width: 90%; margin: 0px; display: inline-block; border: 0px; padding-top: 0px; padding-right: 5px; padding-bottom: 1px; padding-left: 6px;'>" . "<img src='". get_bloginfo('wpurl')."/wp-content/plugins/board-like-it/thumb_up.png'/> " .BoardLikeItShowLikers($postid). "likes this." .
"</div>"; else return ""; } function BoardLikeItLike($user_ID, $postid) { global $wpdb; $likers = array(); $likersnew = array(); $result = $wpdb->get_var($wpdb->prepare("SELECT `likers` FROM ".BoardLikeItGetDBName()." WHERE `mngl_id` = {$postid}")); $results = explode(',',$result); if($results[0] != "") { if(!in_array($user_ID, $results)) $results[] = $user_ID; $likers = implode(',',$results); $wpdb->query($wpdb->prepare("UPDATE ".BoardLikeItGetDBName()." SET `likers` = '{$likers}' WHERE `mngl_id` = {$postid}")); } else { $likersnew[] = $user_ID; $likersnew = implode(',',$likersnew); $wpdb->query($wpdb->prepare("INSERT INTO ".BoardLikeItGetDBName()." (`mngl_id`, `likers`) VALUES ('{$postid}', '{$likersnew}')")); } } function BoardLikeItUnLike($user_ID, $postid) { global $wpdb; $likers = array(); $result = $wpdb->get_var($wpdb->prepare("SELECT `likers` FROM ".BoardLikeItGetDBName()." WHERE `mngl_id` = {$postid}")); $results = explode(',', $result); if(in_array($user_ID, $results)) { $results = BoardLikeItRemoveFromArray($results, $user_ID); if(!empty($results)) { $likers = implode(',', $results); $wpdb->query($wpdb->prepare("UPDATE ".BoardLikeItGetDBName()." SET `likers` = '{$likers}' WHERE `mngl_id` = {$postid}")); } else { $wpdb->query($wpdb->prepare("DELETE FROM ".BoardLikeItGetDBName()." WHERE `mngl_id` = {$postid}")); } } } function BoardLikeItIsLiked($user_ID, $postid) { global $wpdb; $result = $wpdb->get_var($wpdb->prepare("SELECT `likers` FROM ".BoardLikeItGetDBName()." WHERE `mngl_id` = {$postid}")); $results = explode(',', $result); if(in_array($user_ID, $results)) return true; else return false; } function BoardLikeItActivate() { global $wpdb; $charset_collate = ''; if($wpdb->has_cap('collation')) { if(!empty($wpdb->charset)) $charset_collate = "DEFAULT CHARACTER SET $wpdb->charset"; if(!empty($wpdb->collate)) $charset_collate .= " COLLATE $wpdb->collate"; } $table_sql = "CREATE TABLE ".BoardLikeItGetDBName()."( `mngl_id` int(11) NOT NULL, `likers` longtext NOT NULL, PRIMARY KEY (`mngl_id`)) {$charset_collate};"; require_once(ABSPATH.'wp-admin/includes/upgrade.php'); dbDelta($table_sql); } function BoardLikeItGetDBName() { global $wpdb; return $wpdb->prefix."board_like_it"; } function BoardLikeItRemoveFromArray($arr, $key) { $new = array(); foreach($arr as $j => $i) { if($i != $key) $new[] = $i; } return $new; }

    Read the article

  • How do I update mysql database when posting form without using hidden inputs?

    - by user1322707
    I have a "members" table in mysql which has approximately 200 field names. Each user is given up to 7 website templates with 26 different values they can insert unique data into for each template. Each time they create a template, they post the form with the 26 associated values. These 26 field names are the same for each template, but are differentiated by an integer at the end, ie _1, _2, ... _7. In the form submitting the template, I have a variable called $pid_sum which is inserted at the end of each field name to identify which template they are creating. For instance: <form method='post' action='create.template.php'> <input type='hidden' name='address_1' value='address_1'> <input type='hidden' name='city_1' value='city_1'> <input type='hidden' name='state_1' value='state_1'> etc... <input type='hidden' name='address_1' value='address_2'> <input type='hidden' name='city_1' value='city_2'> <input type='hidden' name='state_1' value='state_2'> etc... <input type='hidden' name='address_2' value='address_3'> <input type='hidden' name='city_2' value='city_3'> <input type='hidden' name='state_2' value='state_3'> etc... <input type='hidden' name='address_2' value='address_4'> <input type='hidden' name='city_2' value='city_4'> <input type='hidden' name='state_2' value='state_4'> etc... <input type='hidden' name='address_2' value='address_5'> <input type='hidden' name='city_2' value='city_5'> <input type='hidden' name='state_2' value='state_5'> etc... <input type='hidden' name='address_2' value='address_6'> <input type='hidden' name='city_2' value='city_6'> <input type='hidden' name='state_2' value='state_6'> etc... <input type='hidden' name='address_2' value='address_7'> <input type='hidden' name='city_2' value='city_7'> <input type='hidden' name='state_2' value='state_7'> etc... // Visible form user fills out in creating their template ($pid_sum converts // into an integer 1-7, depending on what template they are filling out) <input type='' name='address_$pid_sum'> <input type='' name='city_$pid_sum'> <input type='' name='state_$pid_sum'> etc... <input type='submit' name='save_button' id='save_button' value='Save Settings'> <form> Each of these need updated in a hidden input tag with each form post, or the values in the database table (which aren't submitted with the form) get deleted. So I am forced to insert approximately 175 hidden input tags with every creation of 26 new values for one of the 7 templates. Is there a PHP function or command that would enable me to update all these values without inserting 175 hidden input tags within each form post? 
Here is the create.template.php file which the form action calls: <?php $q=new Cdb; $t->set_file("content", "create_template.html"); $q2=new CDB; $query="SELECT menu_category FROM menus WHERE link='create.template.ag.php'"; $q2->query($query); $toall=0; if ($q2->nf()<1) { $toall=1; } while ($q2->next_record()) { if ($q2->f('menu_category')=="main") { $toall=1; } } if ($toall==0) { get_logged_info(); $q2=new CDB; $query="SELECT id FROM menus WHERE link='create_template.php'"; $q2->query($query); $q2->next_record(); $query="SELECT membership_id FROM menu_permissions WHERE menu_item='".$q2->f("id")."'"; $q2->query($query); while ($q2->next_record()) { $permissions[]=$q2->f("membership_id"); } if (count($permissions)>0) { $error='<center><font color="red"><b>You do not have access to this area!<br><br>Upgrade your membership level!</b></font></center>'; foreach ($permissions as $value) { if ($value==$q->f("membership_id")) { $error=''; break; } } if ($error!="") { die("$error"); } } } $member_id=$q->f("id"); $pid=$q->f("pid"); $pid_sum = $pid +1; $first_name=$q->f("first_name"); $last_name=$q->f("last_name"); $email=$q->f("email"); echo " // THIS IS WHERE THE HTML FORM GOES "; replace_tags_t($q->f("id"), $t); ?>
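A hedged sketch of the usual fix (the column list is abbreviated, and the mysql_* calls stand in for whatever the site's CDB class wraps): build the UPDATE statement from only the 26 posted fields of the template being saved. MySQL leaves any column that isn't named in the SET clause untouched, so nothing needs to ride along in hidden inputs:

// the 26 base names, without the _N suffix (abbreviated here)
$fields = array('address', 'city', 'state' /* ...the other 23... */);
$set = array();
foreach ($fields as $f) {
    $col = $f . '_' . $pid_sum;  // e.g. city_3 when saving template 3
    $set[] = "`$col` = '" . mysql_real_escape_string($_POST[$col]) . "'";
}
$sql = "UPDATE members SET " . implode(', ', $set) .
       " WHERE id = " . (int)$member_id;
mysql_query($sql);

Since an UPDATE only overwrites the columns it names, the other six templates' values survive every save.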

    Read the article

  • nagios NRPE: Unable to read output

    - by user555854
I recently set up a script to restart my HTTP servers + php5-fpm but can't get it to work. I have googled and found that this error is usually caused by permissions, but I can't figure it out. I start my script using

/usr/lib/nagios/plugins/check_nrpe -H bart -c restart_http

This is the output in my syslog on the node I want to restart:

Jun 27 06:29:35 bart nrpe[8926]: Connection from 192.168.133.17 port 25028
Jun 27 06:29:35 bart nrpe[8926]: Host address is in allowed_hosts
Jun 27 06:29:35 bart nrpe[8926]: Handling the connection...
Jun 27 06:29:35 bart nrpe[8926]: Host is asking for command 'restart_http' to be run...
Jun 27 06:29:35 bart nrpe[8926]: Running command: /usr/bin/sudo /usr/lib/nagios/plugins/http-restart
Jun 27 06:29:35 bart nrpe[8926]: Command completed with return code 1 and output:
Jun 27 06:29:35 bart nrpe[8926]: Return Code: 1, Output: NRPE: Unable to read output
Jun 27 06:29:35 bart nrpe[8926]: Connection from 192.168.133.17 closed.

If I run the command myself as the nagios user it runs fine (but sudo asks for a password). These are the script permissions and the script contents:

-rwxrwxrwx 1 nagios nagios 142 Jun 26 21:41 /usr/lib/nagios/plugins/http-restart

#!/bin/bash
echo "ok"
/etc/init.d/nginx stop
/etc/init.d/nginx start
/etc/init.d/php5-fpm stop
/etc/init.d/php5-fpm start
echo "done"

I also added this line via visudo:

nagios ALL=(ALL) NOPASSWD: /usr/lib/nagios/plugins/

My local nagios nrpe.cfg:

#############################################################################
# Sample NRPE Config File
# Written by: Ethan Galstad ([email protected])
#
#
# NOTES:
# This is a sample configuration file for the NRPE daemon. It needs to be
# located on the remote host that is running the NRPE daemon, not the host
# from which the check_nrpe client is being executed.
#############################################################################
# LOG FACILITY
# The syslog facility that should be used for logging purposes.
log_facility=daemon
# PID FILE
# The name of the file in which the NRPE daemon should write it's process ID
# number. The file is only written if the NRPE daemon is started by the root
# user and is running in standalone mode.
pid_file=/var/run/nagios/nrpe.pid
# PORT NUMBER
# Port number we should wait for connections on.
# NOTE: This must be a non-priviledged port (i.e. > 1024).
# NOTE: This option is ignored if NRPE is running under either inetd or xinetd
server_port=5666
# SERVER ADDRESS
# Address that nrpe should bind to in case there are more than one interface
# and you do not want nrpe to bind on all interfaces.
# NOTE: This option is ignored if NRPE is running under either inetd or xinetd
#server_address=127.0.0.1
# NRPE USER
# This determines the effective user that the NRPE daemon should run as.
# You can either supply a username or a UID.
#
# NOTE: This option is ignored if NRPE is running under either inetd or xinetd
nrpe_user=nagios
# NRPE GROUP
# This determines the effective group that the NRPE daemon should run as.
# You can either supply a group name or a GID.
#
# NOTE: This option is ignored if NRPE is running under either inetd or xinetd
nrpe_group=nagios
# ALLOWED HOST ADDRESSES
# This is an optional comma-delimited list of IP address or hostnames
# that are allowed to talk to the NRPE daemon.
#
# Note: The daemon only does rudimentary checking of the client's IP
# address. I would highly recommend adding entries in your /etc/hosts.allow
# file to allow only the specified host to connect to the port
# you are running this daemon on.
# # NOTE: This option is ignored if NRPE is running under either inetd or xinetd allowed_hosts=127.0.0.1,192.168.133.17 # COMMAND ARGUMENT PROCESSING # This option determines whether or not the NRPE daemon will allow clients # to specify arguments to commands that are executed. This option only works # if the daemon was configured with the --enable-command-args configure script # option. # # *** ENABLING THIS OPTION IS A SECURITY RISK! *** # Read the SECURITY file for information on some of the security implications # of enabling this variable. # # Values: 0=do not allow arguments, 1=allow command arguments dont_blame_nrpe=0 # COMMAND PREFIX # This option allows you to prefix all commands with a user-defined string. # A space is automatically added between the specified prefix string and the # command line from the command definition. # # *** THIS EXAMPLE MAY POSE A POTENTIAL SECURITY RISK, SO USE WITH CAUTION! *** # Usage scenario: # Execute restricted commmands using sudo. For this to work, you need to add # the nagios user to your /etc/sudoers. An example entry for alllowing # execution of the plugins from might be: # # nagios ALL=(ALL) NOPASSWD: /usr/lib/nagios/plugins/ # # This lets the nagios user run all commands in that directory (and only them) # without asking for a password. If you do this, make sure you don't give # random users write access to that directory or its contents! command_prefix=/usr/bin/sudo # DEBUGGING OPTION # This option determines whether or not debugging messages are logged to the # syslog facility. # Values: 0=debugging off, 1=debugging on debug=1 # COMMAND TIMEOUT # This specifies the maximum number of seconds that the NRPE daemon will # allow plugins to finish executing before killing them off. command_timeout=60 # CONNECTION TIMEOUT # This specifies the maximum number of seconds that the NRPE daemon will # wait for a connection to be established before exiting. This is sometimes # seen where a network problem stops the SSL being established even though # all network sessions are connected. This causes the nrpe daemons to # accumulate, eating system resources. Do not set this too low. connection_timeout=300 # WEEK RANDOM SEED OPTION # This directive allows you to use SSL even if your system does not have # a /dev/random or /dev/urandom (on purpose or because the necessary patches # were not applied). The random number generator will be seeded from a file # which is either a file pointed to by the environment valiable $RANDFILE # or $HOME/.rnd. If neither exists, the pseudo random number generator will # be initialized and a warning will be issued. # Values: 0=only seed from /dev/[u]random, 1=also seed from weak randomness #allow_weak_random_seed=1 # INCLUDE CONFIG FILE # This directive allows you to include definitions from an external config file. #include=<somefile.cfg> # INCLUDE CONFIG DIRECTORY # This directive allows you to include definitions from config files (with a # .cfg extension) in one or more directories (with recursion). #include_dir=<somedirectory> #include_dir=<someotherdirectory> # COMMAND DEFINITIONS # Command definitions that this daemon will run. Definitions # are in the following format: # # command[<command_name>]=<command_line> # # When the daemon receives a request to return the results of <command_name> # it will execute the command specified by the <command_line> argument. # # Unlike Nagios, the command line cannot contain macros - it must be # typed exactly as it should be executed. 
# # Note: Any plugins that are used in the command lines must reside # on the machine that this daemon is running on! The examples below # assume that you have plugins installed in a /usr/local/nagios/libexec # directory. Also note that you will have to modify the definitions below # to match the argument format the plugins expect. Remember, these are # examples only! # The following examples use hardcoded command arguments... command[check_users]=/usr/lib/nagios/plugins/check_users -w 5 -c 10 command[check_load]=/usr/lib/nagios/plugins/check_load -w 15,10,5 -c 30,25,20 command[check_hda1]=/usr/lib/nagios/plugins/check_disk -w 20% -c 10% -p /dev/hda1 command[check_zombie_procs]=/usr/lib/nagios/plugins/check_procs -w 5 -c 10 -s Z command[check_total_procs]=/usr/lib/nagios/plugins/check_procs -w 150 -c 200 # The following examples allow user-supplied arguments and can # only be used if the NRPE daemon was compiled with support for # command arguments *AND* the dont_blame_nrpe directive in this # config file is set to '1'. This poses a potential security risk, so # make sure you read the SECURITY file before doing this. #command[check_users]=/usr/lib/nagios/plugins/check_users -w $ARG1$ -c $ARG2$ #command[check_load]=/usr/lib/nagios/plugins/check_load -w $ARG1$ -c $ARG2$ #command[check_disk]=/usr/lib/nagios/plugins/check_disk -w $ARG1$ -c $ARG2$ -p $ARG3$ #command[check_procs]=/usr/lib/nagios/plugins/check_procs -w $ARG1$ -c $ARG2$ -s $ARG3$ command[restart_http]=/usr/lib/nagios/plugins/http-restart # # local configuration: # if you'd prefer, you can instead place directives here include=/etc/nagios/nrpe_local.cfg # # you can place your config snipplets into nrpe.d/ include_dir=/etc/nagios/nrpe.d/ My Sudoers files # /etc/sudoers # # This file MUST be edited with the 'visudo' command as root. # # See the man page for details on how to write a sudoers file. # Defaults env_reset # Host alias specification # User alias specification # Cmnd alias specification # User privilege specification root ALL=(ALL) ALL nagios ALL=(ALL) NOPASSWD: /usr/lib/nagios/plugins/ # Allow members of group sudo to execute any command # (Note that later entries override this, so you might need to move # it further down) %sudo ALL=(ALL) ALL # #includedir /etc/sudoers.d Hopefully someone can help!
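A hedged checklist rather than a confirmed fix: CentOS 6 ships "Defaults requiretty" in its stock sudoers, which makes sudo fail for daemons like NRPE even when the same command works from an interactive shell, and the generic "%sudo ALL=(ALL) ALL" rule sits after the nagios rule, so a later match can re-introduce the password prompt:

# in visudo: exempt the nagios user from requiretty (standard sudoers syntax)
Defaults:nagios !requiretty

# match the exact command, and keep this rule *below* the %sudo line so it wins
nagios ALL=(root) NOPASSWD: /usr/lib/nagios/plugins/http-restart

# then test exactly what NRPE runs, as nagios and without a TTY:
su -s /bin/bash nagios -c "/usr/bin/sudo /usr/lib/nagios/plugins/http-restart"

If sudo still prompts, "sudo -l -U nagios" (run as root) shows which rule actually matches.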

    Read the article

  • ESX3.5 Cluster & MD3000i -- Both servers see iSCSI Targets, Only one server can use partition.

    - by GruffTech
Alright. First and foremost, a warning: this is a bigger-than-normal question. I like to be thorough and try to eliminate all possible "easymode" answers, as well as give everyone a feel for what I've tried. I've included several images of our setup and the problem it is having.

TLDR Version: So I've followed the guides located here: ESX Deployment Guide V1 - this is the guide Dell has sent me to set up two ESX3.5 servers mounting a Dell MD3000i. It doesn't work: the two servers can't both use the same storage partition on the MD3000. Both servers see it, but only one server can actually use it (that server being whatever server created the partition on the target). Both ESX servers are members of the Host Group.

Full Version: I have 2 ESX3.5 servers (10.0.7.102, also called EPI2, and 10.0.7.103, also called EPI3) connected to an iSCSI SAN device (Dell MD3000i). Both ESX servers can "scan" the SAN and see the LUNs.

Part One: MD3000i Storage. On the MD3000i, both servers are in my host group. I have two partitions, VM1 and VM2, both 1.6TB (VMware doesn't like anything past 2TB). And you can even see that the ESX servers are targeting the MD3000 just fine.

Part Two: The ESX Servers. Figure 1. So as you can see above, both ESX servers (10.0.7.102 and 10.0.7.103) are able to see and scan the MD3000i SAN. Figure 2. Above is the storage both servers see. I created the storage partition on EPI2 (102). I then extended the partition to include the second LUN for a grand total of 3.27 TB of storage.

When I "rescan" on 103 (the server not mounting the partition), I get the below log in log/messages, with

Mar 11 10:41:18 epi3 kernel: scsi1: remove-single-device 0 0 0 failed, device busy(4).

being the only line that grabs my attention. (EPI3 is the server name)

Mar 11 10:41:04 epi3 vmkiscsid[5436]: Connected to Discovery Address 192.168.130.101
Mar 11 10:41:04 epi3 vmkiscsid[5437]: Connected to Discovery Address 192.168.130.102
Mar 11 10:41:04 epi3 vmkiscsid[5438]: Connected to Discovery Address 192.168.131.101
Mar 11 10:41:04 epi3 vmkiscsid[5439]: Connected to Discovery Address 192.168.131.102
Mar 11 10:41:17 epi3 kernel: scsi singledevice 2 0 0 0
Mar 11 10:41:17 epi3 kernel: Vendor: DELL Model: MD3000i Rev: 0735
Mar 11 10:41:17 epi3 kernel: Type: Direct-Access ANSI SCSI revision: 05
Mar 11 10:41:17 epi3 kernel: VMWARE SCSI Id: Supported VPD pages for sdb : 0x0 0x80 0x83 0x85 0x86 0x87 0xc0 0xc1 0xc2 0xc3 0xc4 0xc8 0xc9 0xca 0xd0
Mar 11 10:41:17 epi3 kernel: VMWARE SCSI Id: Device id info for sdb: 0x1 0x3 0x0 0x10 0x60 0x1 0xe4 0xf0 0x0 0x1a 0x1a 0xa2 0x0 0x0 0x15 0xe2 0x4d 0x75 0xf6 0x99 0x53 0x98 0x0 0x54 0x69 0x71 0x6e 0x2e 0x31 0x39 0x38 0x34 0x2d 0x30 0x35 0x2e 0x63 0x6f 0x6d 0x2e 0x64 0x65 0x6c 0x6c 0x3a 0x70 0x6f 0x77 0x65 0x72 0x76 0x61 0x75 0x6c 0x74 0x2e 0x36 0x30 0x30 0x31 0x65 0x34 0x66 0x30 0x30 0x30 0x31 0x61 0x31 0x61 0x61 0x32 0x30 0x30 0x30 0x30 0x30 0x30 0x30 0x30 0x34 0x37 0x39 0x30 0x36 0x32 0x32 0x65 0x2c 0x74 0x2c 0x30 0x78 0x30 0x30 0x30 0x31 0x30 0x30 0x30 0x30 0x30 0x30 0x30 0x32 0x0 0x0 0x0 0x51 0x94 0x0 0x4 0x0 0x0 0x80 0x1 0x53 0xa8 0x0 0x44 0x69 0x71 0x6e 0x2e 0x31 0x39 0x38 0x34 0x2d 0x30 0x35 0x2e 0x63 0x6f 0x6d 0x2e 0x64 0x65 0x6c 0x6c 0x3a 0x70 0x6f 0x77 0x65 0x72 0x76 0x61 0x75 0x6c 0x74 0x2e 0x36 0x30 0x30 0x31 0x65 0x34 0x66 0x30 0x30 0x30 0x31 0x61 0x31 0x61 0x61 0x32 0x30 0x30 0x30 0x30 0x30 0x30 0x30 0x30 0x34 0x37 0x39 0x30 0x36 0x32 0x32 0x65 0x0 0x0 0x0 0x0
Mar 11 10:41:17 epi3 kernel: VMWARE SCSI Id: Id for sdb 0x60 0x01 0xe4 0xf0 0x00 0x1a 0x1a 0xa2 0x00 0x00 0x15 0xe2
0x4d 0x75 0xf6 0x99 0x4d 0x44 0x33 0x30 0x30 0x30
Mar 11 10:41:17 epi3 kernel: VMWARE: Unique Device attached as scsi disk sdb at scsi2, channel 0, id 0, lun 0
Mar 11 10:41:17 epi3 kernel: Attached scsi disk sdb at scsi2, channel 0, id 0, lun 0
Mar 11 10:41:17 epi3 kernel: scan_scsis starting finish
Mar 11 10:41:17 epi3 kernel: SCSI device sdb: 3509329920 512-byte hdwr sectors (1797751 MB)
Mar 11 10:41:17 epi3 kernel: sdb: sdb1
Mar 11 10:41:17 epi3 kernel: scan_scsis done with finish
Mar 11 10:41:17 epi3 kernel: scsi singledevice 2 0 0 1
Mar 11 10:41:17 epi3 kernel: Vendor: DELL Model: MD3000i Rev: 0735
Mar 11 10:41:17 epi3 kernel: Type: Direct-Access ANSI SCSI revision: 05
Mar 11 10:41:18 epi3 kernel: VMWARE SCSI Id: Supported VPD pages for sdc : 0x0 0x80 0x83 0x85 0x86 0x87 0xc0 0xc1 0xc2 0xc3 0xc4 0xc8 0xc9 0xca 0xd0
Mar 11 10:41:18 epi3 kernel: VMWARE SCSI Id: Device id info for sdc: 0x1 0x3 0x0 0x10 0x60 0x1 0xe4 0xf0 0x0 0x1a 0x1a 0x86 0x0 0x0 0xd 0xb7 0x4d 0x75 0xf2 0x77 0x53 0x98 0x0 0x54 0x69 0x71 0x6e 0x2e 0x31 0x39 0x38 0x34 0x2d 0x30 0x35 0x2e 0x63 0x6f 0x6d 0x2e 0x64 0x65 0x6c 0x6c 0x3a 0x70 0x6f 0x77 0x65 0x72 0x76 0x61 0x75 0x6c 0x74 0x2e 0x36 0x30 0x30 0x31 0x65 0x34 0x66 0x30 0x30 0x30 0x31 0x61 0x31 0x61 0x61 0x32 0x30 0x30 0x30 0x30 0x30 0x30 0x30 0x30 0x34 0x37 0x39 0x30 0x36 0x32 0x32 0x65 0x2c 0x74 0x2c 0x30 0x78 0x30 0x30 0x30 0x31 0x30 0x30 0x30 0x30 0x30 0x30 0x30 0x32 0x0 0x0 0x0 0x51 0x94 0x0 0x4 0x0 0x0 0x80 0x1 0x53 0xa8 0x0 0x44 0x69 0x71 0x6e 0x2e 0x31 0x39 0x38 0x34 0x2d 0x30 0x35 0x2e 0x63 0x6f 0x6d 0x2e 0x64 0x65 0x6c 0x6c 0x3a 0x70 0x6f 0x77 0x65 0x72 0x76 0x61 0x75 0x6c 0x74 0x2e 0x36 0x30 0x30 0x31 0x65 0x34 0x66 0x30 0x30 0x30 0x31 0x61 0x31 0x61 0x61 0x32 0x30 0x30 0x30 0x30 0x30 0x30 0x30 0x30 0x34 0x37 0x39 0x30 0x36 0x32 0x32 0x65 0x0 0x0 0x0 0x0
Mar 11 10:41:18 epi3 kernel: VMWARE SCSI Id: Id for sdc 0x60 0x01 0xe4 0xf0 0x00 0x1a 0x1a 0x86 0x00 0x00 0x0d 0xb7 0x4d 0x75 0xf2 0x77 0x4d 0x44 0x33 0x30 0x30 0x30
Mar 11 10:41:18 epi3 kernel: VMWARE: Unique Device attached as scsi disk sdc at scsi2, channel 0, id 0, lun 1
Mar 11 10:41:18 epi3 kernel: Attached scsi disk sdc at scsi2, channel 0, id 0, lun 1
Mar 11 10:41:18 epi3 kernel: scan_scsis starting finish
Mar 11 10:41:18 epi3 kernel: SCSI device sdc: 3509329920 512-byte hdwr sectors (1797751 MB)
Mar 11 10:41:18 epi3 kernel: sdc: sdc1
Mar 11 10:41:18 epi3 kernel: scan_scsis done with finish
Mar 11 10:41:18 epi3 kernel: scsi1: remove-single-device 0 0 0 failed, device busy(4).
Mar 11 10:41:18 epi3 kernel: scsi singledevice 1 0 0 0

Things I've Tried:
- Removing iSCSI targets from only 103, disabling iSCSI, rebooting, enabling iSCSI, re-adding targets, rescan. Same result.
- Removing the partition on 102, formatting the partition on 103 instead. Same result, except flipped: 103 can use the storage, 102 can not.
- Starting over. Removing all iSCSI targets on both ESX boxes, disabling iSCSI, turning off the firewall for iSCSI, rebooting ESX. Then on the MD3000, removed the Host Group, removed the Host-to-Virtual Mappings, restarted the SAN. Followed the documentation again, same result. Both servers see the storage, but only one server can use it.
- Disabling and re-enabling VMware DRS and HA. Same result.
- Flat-out turning off VMware DRS and HA, and doing the "start over" step to see if maybe that borked it. Same result.

I'm kinda losing my mind here. Everything I read online says "just partition it and if the ESX boxes can see the targets, it just works"... well, crap. Any ideas, any other things to try?
Can anyone at least point me in the right direction? I'm really tired of working from 1am till 4am (our maintenance hours).
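A hedged place to start (commands are from the ESX 3.x service console; the vmhba number is an assumption): when one host formats a VMFS volume that a second host can see but refuses to mount, the second host is often treating the LUN as a snapshot because it perceives different LUN numbering, which the LVM advanced settings govern:

# run on the host that can't mount the datastore
esxcfg-vmhbadevs -m                          # does the VMFS volume map to the LUN at all?
esxcfg-advcfg -g /LVM/DisallowSnapshotLun    # 1 = suspected snapshot LUNs stay hidden
esxcfg-advcfg -g /LVM/EnableResignature      # 0 = such volumes are never resignatured
esxcfg-rescan vmhba40                        # rescan the software iSCSI adapter (name assumed)
vmkfstools -V                                # re-read VMFS volumes without a reboot

If the snapshot logic is the problem, making the MD3000i present the LUNs with identical host-LUN IDs to both hosts is the cleaner fix than flipping those settings.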

    Read the article

  • Stop duplicate icmp echo replies when bridging to a dummy interface?

    - by mbrownnyc
I recently configured a bridge br0 with members as eth0 (real if) and dummy0 (dummy.ko if). When I ping this machine, I receive duplicate replies as:

# ping SERVERA
PING SERVERA.domain.local (192.168.100.115) 56(84) bytes of data.
64 bytes from SERVERA.domain.local (192.168.100.115): icmp_seq=1 ttl=62 time=113 ms
64 bytes from SERVERA.domain.local (192.168.100.115): icmp_seq=1 ttl=62 time=114 ms (DUP!)
64 bytes from SERVERA.domain.local (192.168.100.115): icmp_seq=2 ttl=62 time=113 ms
64 bytes from SERVERA.domain.local (192.168.100.115): icmp_seq=2 ttl=62 time=113 ms (DUP!)

Using tcpdump on SERVERA, I was able to see icmp echo replies being sent from eth0 and br0 itself as follows (oddly, two echo request packets arrive "from" my Windows box myhost):

23:19:05.324192 IP myhost.domain.local > SERVERA.domain.local: ICMP echo request, id 512, seq 43781, length 40
23:19:05.324212 IP SERVERA.domain.local > myhost.domain.local: ICMP echo reply, id 512, seq 43781, length 40
23:19:05.324217 IP myhost.domain.local > SERVERA.domain.local: ICMP echo request, id 512, seq 43781, length 40
23:19:05.324221 IP SERVERA.domain.local > myhost.domain.local: ICMP echo reply, id 512, seq 43781, length 40
23:19:05.324264 IP SERVERA.domain.local > myhost.domain.local: ICMP echo reply, id 512, seq 43781, length 40
23:19:05.324272 IP SERVERA.domain.local > myhost.domain.local: ICMP echo reply, id 512, seq 43781, length 40

It's worth noting that testing reveals hosts on the same physical switch do not see DUP icmp echo responses (a host on the same VLAN on another switch does see a dup icmp echo response). I've read that this could be due to the ARP table of a switch, but I can't find any info directly related to bridges, just bonds. I have a feeling my problem lies in the stack on Linux, not the switch, but am open to any suggestions. The system is running centos6/el6 kernel 2.6.32-71.29.1.el6.i686.

How do I stop ICMP echo replies from being sent in duplicate when dealing with a bridge interface/bridged interfaces?

Thanks, Matt

[edit] Quick note: It was recommended in #linux to:

[08:53] == mbrownnyc [gateway/web/freenode/] has joined ##linux
[08:57] <lkeijser> mbrownnyc: what happens if you set arp_ignore to 1 for the dummy interface?
[08:59] <lkeijser> also set arp_announce to 2 for that interface
[09:24] <mbrownnyc> lkeijser: I set arp_annouce to 2, arp_ignore to 2 in /etc/sysctl.conf and rebooted the machine... verifying that the bits are set after boot... the problem is still present

I did this and came up empty. Same dup problem. I will be moving away from including the dummy interface in the bridge as:

[09:31] == mbrownnyc [gateway/web/freenode/] has joined #Netfilter
[09:31] <mbrownnyc> Hello all... I'm wondering, is it correct that even with an interface in PROMISC that the kernel will drop /some/ packets before they reach applications?
[09:31] <whaffle> What would you make think so?
[09:32] <mbrownnyc> I ask because I am receiving ICMP echo replies after configuring a bridge with a dummy interface in order for ipt_netflow to see all packets, only as reported in it's documentation: http://ipt-netflow.git.sourceforge.net/git/gitweb.cgi?p=ipt-netflow/ipt-netflow;a=blob;f=README.promisc
[09:32] <mbrownnyc> but I do not know if PROMISC will do the same job
[09:33] <mbrownnyc> I was referred here from #linux.
any assistance is appreciated
[09:33] <whaffle> The following conditions need to be met: PROMISC is enabled (bridges and applications like tcpdump will do this automatically, otherwise they won't function).
[09:34] <whaffle> If an interface is part of a bridge, then all packets that enter the bridge should already be visible in the raw table.
[09:35] <mbrownnyc> thanks whaffle PROMISC must be set manually for ipt_netflow to function, but
[09:36] <whaffle> promisc does not need to be set manually, because the bridge will do it for you.
[09:36] <whaffle> When you do not have a bridge, you can easily create one, thereby rendering any kernel patches moot.
[09:36] <mbrownnyc> whaffle: I speak without the bridge
[09:36] <whaffle> It is perfectly valid to have a "half-bridge" with only a single interface in it.
[09:36] <mbrownnyc> whaffle: I am unfamiliar with the raw table, does this mean that PROMISC allows the raw table to be populated with packets the same as if the interface was part of a bridge?
[09:37] <whaffle> Promisc mode will cause packets with {a dst MAC address that does not equal the interface's MAC address} to be delivered from the NIC into the kernel nevertheless.
[09:37] <mbrownnyc> whaffle: I suppose I mean to clearly ask: what benefit would creating a bridge have over setting an interface PROMISC?
[09:38] <mbrownnyc> whaffle: from your last answer I feel that the answer to my question is "none," is this correct?
[09:39] <whaffle> Furthermore, the linux kernel itself has a check for {packets with a non-local MAC address}, so that packets that will not enter a bridge will be discarded as well, even in the face of PROMISC.
[09:46] <mbrownnyc> whaffle: so, this last bit of information is quite clearly why I would need and want a bridge in my situation
[09:46] <mbrownnyc> okay, the ICMP echo reply duplicate issue is likely out of the realm of this channel, but I sincerely appreciate the info on the kernels inner-workings
[09:52] <whaffle> mbrownnyc: either the kernel patch, or a bridge with an interface. Since the latter is quicker, yes
[09:54] <mbrownnyc> thanks whaffle

[edit2] After removing the bridge, and removing the dummy kernel module, I only had a single interface chilling out, lonely. I still received duplicate icmp echo replies... in fact I received a random amount: http://pastebin.com/2LNs0GM8 The same thing doesn't happen on a few other hosts on the same switch, so it has to do with the linux box itself. I'll likely end up rebuilding it next week. Then... you know... this same thing will occur again.

[edit3] Guess what? I rebuilt the box, and I'm still receiving duplicate ICMP echo replies. Must be the network infrastructure, although the ARP tables do not contain multiple entries.

[edit4] How ridiculous. The machine was a network probe, so I was (ingress and egress) mirroring an uplink port to a node that was the NIC. So, the flow (must have) gone like this:
1. ICMP echo request comes in through the mirrored uplink port.
2. (the real) ICMP echo request is received by the NIC
3. (the mirrored) ICMP echo request is received by the NIC
4. ICMP echo reply is sent for both.

I'm ashamed of myself, but now I know. It was suggested on #networking to either isolate the mirrored traffic to an interface that does not have IP enabled, or tag the mirrored packets with dot1q.
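For anyone replicating the probe setup, a hedged sketch (the interface name is an assumption) of the #networking suggestion: keep the mirror-fed NIC visible to capture tools but invisible to the IP stack, so the kernel never answers the copied echo requests:

ip link set dev eth1 up
ip link set dev eth1 promisc on   # capture tools still see every mirrored frame
ip addr flush dev eth1            # no IP address, so the stack sends no echo replies
ip link set dev eth1 arp off      # and answers no ARP on the probe port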

    Read the article

  • Microsoft and jQuery

    - by Rick Strahl
The jQuery JavaScript library has been steadily getting more popular and with recent developments from Microsoft, jQuery is also getting ever more exposure on the ASP.NET platform, including now directly from Microsoft. jQuery is a lightweight, open source DOM manipulation library for JavaScript that has changed how many developers think about JavaScript. You can download it and find more information on jQuery on www.jquery.com.

For me jQuery has had a huge impact on how I develop Web applications and was probably the main reason I went from dreading JavaScript development to actually looking forward to implementing client side JavaScript functionality. It has also had a profound impact on my JavaScript skill level, simply from seeing how the library accomplishes things (and often reviewing the terse but excellent source code). jQuery made an uncomfortable development platform (JavaScript + DOM) a joy to work on.

Although jQuery is by no means the only JavaScript library out there, its ease of use, small size, huge community of plug-ins and pure usefulness has made it easily the most popular JavaScript library available today. As a long time jQuery user, I've been excited to see the developments from Microsoft that are bringing jQuery to more ASP.NET developers and providing more integration with jQuery for ASP.NET's core features rather than relying on the ASP.NET AJAX library.

Microsoft and jQuery – making Friends

jQuery is an open source project but in the last couple of years Microsoft has really thrown its weight behind supporting this open source library as a supported component on the Microsoft platform. When I say supported I literally mean supported: Microsoft now offers actual tech support for jQuery as part of their Product Support Services (PSS), as jQuery integration has become part of several of the ASP.NET toolkits and ships in several of the default Web project templates in Visual Studio 2010. The ASP.NET MVC 3 framework (still in Beta) also uses jQuery for a variety of client side support features including client side validation, and we can look forward to more integration of client side functionality via jQuery in both MVC and WebForms in the future. In other words, jQuery is becoming an optional but included component of the ASP.NET platform. PSS support means that support staff will answer jQuery related support questions as part of any support incidents related to ASP.NET, which provides some peace of mind to corporate development shops that require end to end support from Microsoft.

In addition to including jQuery and supporting it, Microsoft has also been getting involved in providing development resources for extending jQuery's functionality via plug-ins. Microsoft's last version of the Microsoft Ajax Library – which is the successor to the native ASP.NET AJAX Library – included some really cool functionality for client templates, databinding and localization. As it turns out Microsoft has rebuilt most of that functionality using jQuery as the base API and provided jQuery plug-ins of these components. Very recently these three plug-ins were submitted and have been approved for inclusion in the official jQuery plug-in repository and been taken over by the jQuery team for further improvements and maintenance.
Even more surprising: The jQuery-templates component has actually been approved for inclusion in the next major update of the jQuery core in jQuery V1.5, which means it will become a native feature that doesn't require additional script files to be loaded. Imagine this – an open source contribution from Microsoft that has been accepted into a major open source project for a core feature improvement. Microsoft has come a long way indeed!

What the Microsoft Involvement with jQuery means to you

For Microsoft, jQuery support is a strategic decision that affects their direction in client side development, but nothing stopped you from using jQuery in your applications prior to Microsoft's official backing, and in fact a large chunk of developers did so readily prior to Microsoft's announcement. Official support from Microsoft brings a few benefits to developers however. jQuery support in Visual Studio 2010 means built-in support for jQuery IntelliSense, automatically added jQuery scripts in many project types and a common base for client side functionality that actually uses what most developers are already using. If you have already been using jQuery and were worried about straying from the Microsoft line and their internal Microsoft Ajax Library – worry no more. With official support and the change in direction towards jQuery, Microsoft is now following along with what most in the ASP.NET community had already been doing by using jQuery, which is likely the reason for Microsoft's shift in direction in the first place.

ASP.NET AJAX and the Microsoft AJAX Library weren't bad technology – there was tons of useful functionality buried in these libraries. However, these libraries never got off the ground, mainly because early incarnations were squarely aimed at control/component developers rather than application developers. For all the functionality that these controls provided for control developers, they lacked useful and easily usable application developer functionality that was easily accessible in day to day client side development. The result was that even though Microsoft shipped support for these tools in the box (in .NET 3.5 and 4.0), other than for the internal support in ASP.NET for things like the UpdatePanel and the ASP.NET AJAX Control Toolkit as well as some third party vendors, the Microsoft client libraries were largely ignored by the developer community, opening the door for other client side solutions.

Microsoft seems to be acknowledging developer choice in this case: Many more developers were going down the jQuery path rather than using the Microsoft built libraries, and there seems to be little sense in continuing development of a technology that largely goes unused by the majority of developers. Kudos to Microsoft for recognizing this and gracefully changing directions. Note that even though there will be no further development in the Microsoft client libraries, they will continue to be supported, so if you're using them in your applications there's no reason to start running for the exit in a panic and start re-writing everything with jQuery. Although that might be a reasonable choice in some cases, jQuery and the Microsoft libraries work well side by side, so that you can leave existing solutions untouched even as you enhance them with jQuery.
The Microsoft jQuery Plug-ins – Solid Core Features

One of the most interesting developments in Microsoft's embracing of jQuery is that Microsoft has started contributing to jQuery via the standard mechanism set up for jQuery developers: By submitting plug-ins. Microsoft took some of the nicest new features of the unpublished Microsoft Ajax Client Library, re-wrote these components for jQuery and then submitted them as plug-ins to the jQuery plug-in repository. Accepted plug-ins get taken over by the jQuery team, and that's exactly what happened with the three plug-ins submitted by Microsoft, with the templating plug-in even getting slated to be published as part of the jQuery core in the next major release (1.5).

The following plug-ins are provided by Microsoft:
- jQuery Templates – a client side template rendering engine
- jQuery Data Link – a client side databinder that can synchronize changes without code
- jQuery Globalization – provides formatting and conversion features for dates and numbers

The first two are ports of functionality that was slated for the Microsoft Ajax Library, while the globalization library provides functionality that was already found in the original ASP.NET AJAX library. To me all three plug-ins address a pressing need in client side applications and provide functionality I've previously used in other incarnations, but with more complete implementations. Let's take a close look at these plug-ins.

jQuery Templates
http://api.jquery.com/category/plugins/templates/

Client side templating is a key component for building rich JavaScript applications in the browser. Templating on the client lets you avoid manually creating markup by creating DOM nodes and injecting them individually into the document via code. Rather, you can create markup templates – similar to the way you create classic ASP server markup – and merge data into these templates to render HTML, which you can then inject into the document or replace existing content with. Output from templates is rendered as a jQuery matched set and can then be easily inserted into the document as needed.

Templating is key to minimizing client side code and reducing repeated rendering logic. Instead, a single template can be used in many places for updating and adding content to existing pages. Further, if you build pure AJAX interfaces that rely entirely on client rendering of the initial page content, templates allow you to use a single markup template to handle all rendering of each specific HTML section/element.

I've used a number of different client rendering template engines with jQuery in the past, including jTemplates (a PHP style templating engine) and a modified version of John Resig's MicroTemplating engine which I built into my own set of libraries because it's such a commonly used feature in my client side applications. jQuery Templates adds a much richer templating model that allows for sub-templates and access to the data items. Like John Resig's original Micro Template engine, the core basics of the templating engine generate JavaScript code, which means that templates can include JavaScript code.

To give you a basic idea of how templates work, imagine I have an application that downloads a set of stock quotes based on a symbol list then displays them in the document.
To do this you can create an ‘item’ template that describes how each of the quotes is rendered, as a template inside of the document:

<script id="stockTemplate" type="text/x-jquery-tmpl">
    <div id="divStockQuote" class="errordisplay" style="width: 500px;">
        <div class="label">Company:</div><div><b>${Company}(${Symbol})</b></div>
        <div class="label">Last Price:</div><div>${LastPrice}</div>
        <div class="label">Net Change:</div><div>
            {{if NetChange > 0}}
                <b style="color:green">${NetChange}</b>
            {{else}}
                <b style="color:red">${NetChange}</b>
            {{/if}}
        </div>
        <div class="label">Last Update:</div><div>${LastQuoteTimeString}</div>
    </div>
</script>

The ‘template’ is little more than HTML with some markup expressions inside of it that define the template language. Notice the embedded ${} expressions which reference data from the quote objects returned from an AJAX call to the server. You can embed any JavaScript or value expression in these template expressions. There are also a number of structural commands like {{if}} and {{each}} that provide rudimentary logic inside of your templates, as well as commands ({{tmpl}} and {{wrap}}) for nesting templates. You can find the full set of markup expressions in the documentation. To load up this data you can use code like the following:

<script type="text/javascript">
    //var Proxy = new ServiceProxy("../PageMethods/PageMethodsService.asmx/");
    $(document).ready(function () {
        $("#btnGetQuotes").click(GetQuotes);
    });

    function GetQuotes() {
        var symbols = $("#txtSymbols").val().split(",");
        $.ajax({
            url: "../PageMethods/PageMethodsService.asmx/GetStockQuotes",
            data: JSON.stringify({ symbols: symbols }), // parameter map
            type: "POST", // data has to be POSTed
            contentType: "application/json",
            timeout: 10000,
            dataType: "json",
            success: function (result) {
                var quotes = result.d;
                var jEl = $("#stockTemplate").tmpl(quotes);
                $("#quoteDisplay").empty().append(jEl);
            },
            error: function (xhr, status) {
                alert(status + "\r\n" + xhr.responseText);
            }
        });
    };
</script>

In this case an ASMX AJAX service is called to retrieve the stock quotes. The service returns an array of quote objects, wrapped in an object whose .d property (in Microsoft service style) holds the actual array. The template is applied with:

var jEl = $("#stockTemplate").tmpl(quotes);

which selects the template script tag and uses the .tmpl() function to apply the data to it. The result is a jQuery matched set of elements that can then be appended to the quote display element in the page. The template is merged against an array in this example. When the data is an array, the template is automatically applied to each array item; if you pass a single data item – like say a stock quote – the template works exactly the same way but is applied only once. Templates also have access to a $data item which provides the current data item and information about the template that is currently executing. This makes it possible to keep state within the template itself and also to pass context from a parent template to a child template, which is very powerful.

Templates can be evaluated by using the template selector and calling the .tmpl() function on the jQuery matched set as shown above, or you can use the static $.tmpl() function to provide a template as a string. This allows you to dynamically create templates in code or – more likely – to load templates from the server via AJAX calls.
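To show the string-based option in action, here’s a minimal sketch (my own, not from the plug-in docs) that builds a template from a markup string and applies it to the same quote data; the element ID and property names are assumed from the example above:

// Template markup built in code (or retrieved from the server via $.ajax)
var markup = "<div><b>${Company} (${Symbol})</b>: ${LastPrice}</div>";

function renderQuotesFromString(quotes) {
    // $.tmpl() compiles the markup string and applies it to the data,
    // returning a jQuery matched set just like the script-tag version
    var jEl = $.tmpl(markup, quotes);

    // Insert the rendered elements as before
    $("#quoteDisplay").empty().append(jEl);
}

The same call works with markup retrieved on demand from the server, which is how you would load templates dynamically via AJAX.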
In short, there are options. The above shows off some of the basics, but there’s much more functionality available in the template engine. Check the documentation link for more information and links to additional examples; the plug-in download also comes with a number of examples that demonstrate functionality. jQuery Templates will become a native component in jQuery core 1.5, so it’s definitely worthwhile checking out the engine today and getting familiar with this interface. As much as I’m stoked about templating becoming part of the jQuery core – because it’s such an integral part of many applications – there are also a couple of shortcomings in the current incarnation:

Lack of Error Handling – Currently if you embed an expression that is invalid it’s simply not rendered. No error is rendered into the template, nor do the various template functions throw errors, which leaves finding bugs as a runtime exercise. I would like some mechanism – optional if possible – to get information about what is failing in a template when it’s rendered.

No String Output – Templates are always rendered into a jQuery matched set and there’s no way that I can see to render directly to a string. String output can be useful for debugging as well as opening up templating for creating non-HTML string output. (A common workaround is sketched below.)

Limited JavaScript Access – Unlike John Resig’s original MicroTemplating engine, which was entirely based on JavaScript code generation, these templates are limited to a few structured commands that can ‘execute’. There’s no arbitrary code execution inside of script code, which means you’re limited to calling expressions available on global objects or the data item passed in. This may or may not be a big deal depending on the complexity of your template logic.

Error handling has been discussed quite a bit and it’s likely there will be some solution to that particular issue by the time jQuery Templates ship. The others are relatively minor issues, but something to think about anyway.
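As a workaround for the string output limitation, here’s a minimal sketch: render the template into a detached element and serialize it back to HTML. It assumes the stockTemplate and quote data from the earlier example:

function renderTemplateToString(quotes) {
    // Render into a detached <div>, then read the markup back as a string
    return $("<div/>")
        .append($("#stockTemplate").tmpl(quotes))
        .html();
}

// Handy for debugging what a template actually produced:
// console.log(renderTemplateToString(quotes));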
jQuery Data Link
http://api.jquery.com/category/plugins/data-link/

jQuery Data Link provides the ability to do two-way data binding between input controls and an underlying object’s properties. The typical scenario is linking a textbox to a property of an object, having the object updated when the text in the textbox changes, and having the textbox change when the value in the object – or the entire object – changes. The plug-in also supports converter functions that provide the conversion logic from string to some other value, typically necessary for mapping things like textbox string input to, say, a number property, and potentially applying additional formatting and calculations. In theory this sounds great; in reality the plug-in has some serious usability issues. Using the plug-in you can do things like the following to bind data:

person = { firstName: "rick", lastName: "strahl" };

$(document).ready(function () {
    // provide for two-way linking of inputs
    $("form").link(person);

    // bind to non-input elements explicitly
    $("#objFirst").link(person, {
        firstName: {
            name: "objFirst",
            convertBack: function (value, source, target) {
                $(target).text(value);
            }
        }
    });
    $("#objLast").link(person, {
        lastName: {
            name: "objLast",
            convertBack: function (value, source, target) {
                $(target).text(value);
            }
        }
    });
});

This code hooks up two-way linking between a couple of textboxes on the page and the person object. The first line in the .ready() handler maps object properties to form fields with the same names. Note that .link() does NOT bind existing values into the textboxes when you call it – changes are mapped only when values change and you move out of the field. Strike one. The two following commands allow manual binding of values to specific DOM elements, which is effectively a one-way bind: you specify the object and then an explicit mapping, where name is an ID in the document, and the converter is required to explicitly assign the value to the element. Strike two.

You can also detect changes to the underlying object and cause updates to the bound input elements. Unfortunately the syntax to do this is not very natural, as you have to rely on the jQuery data object. Updating an object’s properties with change notification looks like this:

function updateFirstName() {
    $(person).data("firstName", person.firstName + " (code updated)");
}

This works fine in causing any linked fields to be updated – in the bindings above both the firstName input field and the objFirst DOM element get updated. But the syntax requires you to use a jQuery .data() call for each property change to ensure the changes are tracked properly. Really? Sure, you’re binding through multiple layers of abstraction now, but how is that better than just manually assigning values? The code savings (if any) are going to be minimal.

As much as I would like to have a WPF/Silverlight/Observable-like binding mechanism in client script, this plug-in doesn’t help much towards that goal in its current incarnation. While you can bind values, the ‘binder’ is too limited to be really useful. If initial values can’t be assigned from the mappings, you’re going to end up duplicating work loading the data using some other mechanism. There’s no easy way to re-bind a different object altogether, since updates trigger only through the .data members. Finally, any non-input elements have to be bound via code that’s fairly verbose and frankly may be more voluminous than what you might write by hand for manual binding and unbinding.

Two-way binding can be very useful, but it has to be easy and, most importantly, natural. If it’s more work to hook up a binding than writing a couple of lines to bind and unbind manually, this sort of thing helps very little in most scenarios. In talking to some of the developers I learned that the feature set for Data Link is not complete and they are still soliciting input for features and functionality – if you have ideas on how to make this feature more useful, get involved and post your recommendations. As it stands, this component needs a lot of love to become useful. For it to really provide value, bindings need to be easily refreshable and need to work at the object level, not just the property level. It seems to me we would be much better served by a model binder object that can perform these binding/unbinding tasks in bulk, rather than a tool where each link has to be mapped first. I also find the choice of creating a jQuery plug-in questionable – a standalone object, albeit one that relies on the jQuery library, would provide a more intuitive interface than the current forcing of options onto a plug-in style interface. Out of the three Microsoft-created components this is by far the least useful and least polished implementation at this point.
jQuery Globalization
http://github.com/jquery/jquery-global

Globalization in JavaScript applications often gets short shrift, partly because JavaScript natively has little support for formatting and parsing numbers and dates. There are a number of JavaScript libraries out there that provide some globalization support, but most are limited to a particular portion of it. As .NET developers we’re fairly spoiled by the richness of the APIs in the framework, and when doing client development one really notices their absence. Even if you don’t need to localize your application, the globalization plug-in helps with basic tasks: dealing with formatting and parsing of dates and time values. Dates in particular are problematic in JavaScript, as there are no formatters whatsoever except the .toString() method, which outputs a verbose and next to useless long string. With the globalization plug-in you get a good chunk of the formatting and parsing functionality that the .NET framework provides on the server. For example, you can write code like the following to format numbers and dates:

var date = new Date();
var output = $.format(date, "MMM. dd, yy") + "\r\n" +
             $.format(date, "d") + "\r\n" +        // 10/25/2010
             $.format(1222.32213, "N2") + "\r\n" +
             $.format(1222.33, "c") + "\r\n";
alert(output);

This becomes even more useful if you combine it with templates, which can include any JavaScript expression. Assuming the globalization plug-in is loaded, you can create template expressions that use the $.format function. Here’s the template I used earlier for the stock quote, with a couple of formats applied:

<script id="stockTemplate" type="text/x-jquery-tmpl">
    <div id="divStockQuote" class="errordisplay" style="width: 500px;">
        <div class="label">Company:</div><div><b>${Company}(${Symbol})</b></div>
        <div class="label">Last Price:</div>
        <div>${$.format(LastPrice,"N2")}</div>
        <div class="label">Net Change:</div><div>
            {{if NetChange > 0}}
                <b style="color:green">${NetChange}</b>
            {{else}}
                <b style="color:red">${NetChange}</b>
            {{/if}}
        </div>
        <div class="label">Last Update:</div>
        <div>${$.format(LastQuoteTime,"MMM dd, yyyy")}</div>
    </div>
</script>

There are also parsing methods that can easily parse dates and numbers from strings:

alert($.parseDate("25.10.2010"));
alert($.parseInt("12.222"));  // de-DE uses . for thousands separators

As you can see, culture-specific options are taken into account when parsing. The globalization plug-in provides rich support for a variety of locales:

Get a list of all available cultures
Query cultures for culture items (like currency symbols, separators, etc.)
Localized string names for all calendar-related items (days of week, months)
Generated off of .NET’s supported locales

In short, you get much of the same functionality that you already might be using in .NET on the server side. The plug-in covers a huge number of locales: a Globalization.all.min.js file contains the text defaults for all of these locales, and small locale-specific script files define each individual locale’s settings. It’s highly recommended that you NOT use the huge globalization file that includes all locales, but rather add script references only for those languages you explicitly care about.

Overall this plug-in is a welcome helper. Even if you use it with a single locale (like en-US) and do no other localization, you’ll gain solid support for number and date formatting, which is a vital feature of many applications.
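Here’s a minimal sketch tying the pieces together. It assumes the de-DE locale script is loaded and uses $.preferCulture to select the active culture – that call is my assumption, so check the plug-in documentation for the exact culture-selection API in the version you’re using:

// Select the active culture (assumes the de-DE locale script is loaded;
// $.preferCulture is an assumption - verify against the plug-in docs)
$.preferCulture("de-DE");

// Parse culture-specific user input into real values
var date = $.parseDate("25.10.2010");   // day.month.year in de-DE
var num = $.parseInt("12.222");         // "." is the thousands separator -> 12222

// Format the parsed values back out using the active culture
alert($.format(date, "MMM dd, yyyy") + "\r\n" + $.format(num, "N0"));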
Changes for Microsoft

It’s good to see Microsoft coming out of its shell and away from the ‘not-built-here’ mentality that has been so pervasive in the past. It’s especially good to see it applied to jQuery – a technology that has stood in drastic contrast to Microsoft’s own internal efforts in terms of design, usage model and… popularity. It’s great to see Microsoft paying attention to what customers prefer to use and supporting that sentiment – even when it meant drastically changing course and moving into a more open and sharing environment in the process. The additional jQuery support introduced in the last two years has certainly made life easier for many developers on the ASP.NET platform. It’s also nice to see Microsoft submitting proposals through the standard jQuery plug-in process and getting accepted for several very useful projects. Certainly the jQuery Templates plug-in is going to be very useful to many, especially since it will be baked into the jQuery core in jQuery 1.5. I hope we see more of this type of involvement from Microsoft in the future. Kudos!

© Rick Strahl, West Wind Technologies, 2005-2010
Posted in jQuery  ASP.NET


  • Guidance: A Branching strategy for Scrum Teams

    - by Martin Hinshelwood
    Having a good branching strategy will save your bacon, or at least your code. Be careful when deviating from your branching strategy, because if you do, you may be worse off than when you started! This is one possible branching strategy for Scrum teams. I will not be going in depth with Scrum, but you can find out more about it by reading the Scrum Guide, and you can even assess your Scrum knowledge by having a go at the Scrum Open Assessment. You can also read SSW’s Rules to Better Scrum using TFS, which were developed during our own Scrum implementations.

    Acknowledgements

    Bill Heys – Bill offered some good feedback on this post and helped soften the language. Note: Bill is a VS ALM Ranger and co-wrote the Branching Guidance for TFS 2010.
    Willy-Peter Schaub – Willy-Peter is an ex Visual Studio ALM MVP turned blue badge and has been involved in most of the guidance, including the Branching Guidance for TFS 2010.
    Chris Birmele – Chris wrote some of the early TFS Branching and Merging Guidance.
    Dr Paul Neumeyer, Ph.D Parallel Processes, ScrumMaster and SSW Solution Architect – Paul wanted to have feature branches coming from the release branch as well. We agreed that this is really a spin-off that needs its own project, backlog, budget and Team.

    Scenario: A product is developed. RTM 1.0 is released and gets great sales. Extra features are demanded, but the new version will be double the price to recover costs. The work is approved by the guys with budget, and a few sprints later RTM 2.0 is released. Sales are very low due to the pricing strategy, and there are lots of clients on RTM 1.0 calling out for patches.

    As I keep getting Reverse Integration and Forward Integration mixed up, and Bill keeps slapping my wrists, I thought I should have a reminder:

    You still seemed to use reverse and/or forward integration in the wrong context. I would recommend reviewing your document at the end to ensure that it agrees with the common understanding of these terms: merge (forward integration) from parent to child (same direction as the branch), and merge (reverse integration) from child to parent (the reverse direction of the branch).
    - one of my many slaps on the wrist from Bill Heys.

    As I mentioned previously, we are using a single feature branching strategy in our current project. The single biggest mistake developers make is developing against the “Main” or “Trunk” line. This ultimately leads to messy code as things are added and never finished. Your only alternative is to NEVER check in unless your code is 100% done, but this does not work in practice, even with a single developer. Your ADD will kick in and your half-finished code will be finished just enough to pass the build and the tests. You do use builds, don’t you?

    Sadly, this is a very common scenario, and I have had people argue that branching merely adds complexity. Then again, I have seen the other side of the universe ... branching structures from he... We should somehow convince everyone that there is a happy medium between no-branching and too-much-branching.
    - Willy-Peter Schaub, VS ALM Ranger, Microsoft

    A key benefit of branching for development is to isolate changes from the stable Main branch. Branching adds sanity more than it adds complexity. We do try to stress in our guidance that it is important to justify a branch by doing a cost-benefit analysis. The primary cost is the effort to do merges and resolve conflicts. A key benefit is that you have a stable code base in Main and accept changes into Main only after they pass quality gates, etc.
    - Bill Heys, VS ALM Ranger & TFS Branching Lead, Microsoft

    The second biggest mistake developers make is branching anything other than the WHOLE “Main” line. If you branch parts of your code and not others, they get out of sync and can make integration a nightmare. You should have your Source, Assets, Build scripts, deployment scripts and dependencies inside the “Main” folder and branch the whole thing. Some departments within MSFT even go as far as to add the environments used to develop the product in there as well, although I would not recommend that unless you have a massive SQL cluster to house your source code.

    We tried the “add environment” back in South Africa and while it was “phenomenal”, especially when having to switch between environments, the disk storage and processing requirements killed us. We opted for virtualization to skin this cat of keeping a ready-to-go environment handy.
    - Willy-Peter Schaub, VS ALM Ranger, Microsoft

    I think people often think that you should have separate branches for separate environments (e.g. Dev, Test, Integration Test, QA, etc.). I prefer to think of deploying to environments (such as from Main to QA) rather than branching for QA.
    - Bill Heys, VS ALM Ranger & TFS Branching Lead, Microsoft

    You can read SSW’s Rules to Better Source Control for some additional information on what source control to use and how to use it. There are also a number of branching anti-patterns that should be avoided at all costs. You know you are on the wrong track if you experience one or more of the following symptoms in your development environment:

    Merge Paranoia – avoiding merging at all cost, usually because of a fear of the consequences.
    Merge Mania – spending too much time merging software assets instead of developing them.
    Big Bang Merge – deferring branch merging to the end of the development effort and attempting to merge all branches simultaneously.
    Never-Ending Merge – continuous merging activity because there is always more to merge.
    Wrong-Way Merge – merging a software asset version with an earlier version.
    Branch Mania – creating many branches for no apparent reason.
    Cascading Branches – branching but never merging back to the main line.
    Mysterious Branches – branching for no apparent reason.
    Temporary Branches – branching for changing reasons, so the branch becomes a permanent temporary workspace.
    Volatile Branches – branching with unstable software assets shared by other branches or merged into another branch. Note: branches are volatile most of the time while they exist as independent branches; that is the point of having them. The difference is that you should not share or merge branches while they are in an unstable state.
    Development Freeze – stopping all development activities while branching, merging, and building new base lines.
    Berlin Wall – using branches to divide the development team members, instead of dividing the work they are performing.
    - Branching and Merging Primer by Chris Birmele, Developer Tools Technical Specialist at Microsoft Pty Ltd in Australia

    In fact, this can result in a merge exercise no-one wants to be involved in: merging hundreds of thousands of change sets and trying to get a consolidated build. Again, we need to find a happy medium.
    - Willy-Peter Schaub on Merge Paranoia

    Merge conflicts are generally the result of making changes to the same file in both the target and source branch. If you create merge conflicts, you will eventually need to resolve them. Often the resolution is manual.
    Merging more frequently allows you to resolve these conflicts close to when they happen, making the resolution clearer. Waiting weeks or months to resolve them, the Big Bang approach, means you are more likely to resolve conflicts incorrectly.
    - Bill Heys, VS ALM Ranger & TFS Branching Lead, Microsoft

    Figure: Main line. This is where your stable code lives, where any build has known contents, always passes, and has a happy test suite that passes as well.

    Many development projects consist of a single “Main” line of source and artifacts. This is good; at least there is source control. There are however a couple of issues that need to be considered. What happens if:

    - you and your team are working on a new set of features and the customer wants a change to his current version?
    - you are working on two features and the customer decides to abandon one of them?
    - you have two teams working on different feature sets and their changes start interfering with each other?
    - I just use labels instead of branches?

    That’s a lot of “what if’s”, but there is a simple way of preventing this: branching.

    In TFS, labels are not immutable. This does not mean they are not useful. But labels do not provide a very good development isolation mechanism. Branching allows separate code sets to evolve separately (e.g. Current with hotfixes, and vNext with new development). I don’t see how labels work here.
    - Bill Heys, VS ALM Ranger & TFS Branching Lead, Microsoft

    Figure: Creating a single feature branch means you can isolate the development work on that branch.

    It is standard practice for large projects with lots of developers to use feature branching, and you can check the Branching Guidance for the latest recommendations from the Visual Studio ALM Rangers on other methods. In the diagram above you can see my recommendation for branching when using Scrum development with TFS 2010. It consists of a single Sprint branch to contain all the changes for the current sprint. The Main branch has its permissions changed so contributors to the project can only Branch and Merge with “Main”. This will prevent accidental check-ins or check-outs of the “Main” line that would contaminate the code. The developers continue to develop on Sprint 1 until the completion of the sprint.

    Note: In the real world, starting a new Greenfield project, this process starts at Sprint 2, as at the start of Sprint 1 you would have artifacts in version control and no need for isolation.

    Figure: Once the sprint is complete the Sprint 1 code can then be merged back into the Main line.

    There are always good practices to follow, and one is to always do a Forward Integration from Main into Sprint 1 before you do a Reverse Integration from Sprint 1 back into Main. In this case it may seem superfluous, but it builds good muscle memory into your developers’ work ethic and means that no bad habits are learned that would interfere with additional Scrum Teams being added to the Product.

    The process of completing your sprint development (a command-line sketch follows this list):

    1. The Team completes their work according to their definition of done.
    2. Merge from “Main” into “Sprint1” (Forward Integration).
    3. Stabilize your code with any changes coming from other Scrum Teams working on the same product. If you have one Scrum Team this should be quick, but there may have been bug fixes in the Release branches (we will talk about Release branches later).
    4. Merge from “Sprint1” into “Main” to commit your changes (Reverse Integration).
    5. Check-in.
    6. Delete the Sprint1 branch. Note: the Sprint 1 branch is no longer required, as its useful life has been concluded.
    7. Check-in.
    8. Done.
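    To make the Forward and Reverse Integration steps concrete, here is a minimal command-line sketch using the TFS tf.exe client. The $/Product/Main and $/Product/Sprint1 server paths are hypothetical, and the commands assume a mapped workspace; your paths and check-in policies will differ:

    REM Forward Integration: merge the parent (Main) down into the child (Sprint1)
    tf merge $/Product/Main $/Product/Sprint1 /recursive
    tf checkin /comment:"FI: Main -> Sprint1, stabilize before RI"

    REM Reverse Integration: merge the child (Sprint1) back up into Main
    tf merge $/Product/Sprint1 $/Product/Main /recursive
    tf checkin /comment:"RI: Sprint1 -> Main, sprint complete"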
    But you are not yet done with the Sprint. The goal in Scrum is to have a “potentially shippable product” at the end of every Sprint, and we do not have that yet; we only have finished code.

    Figure: With Sprint 1 merged you can create a Release branch and run your final packaging and testing.

    In 99% of all projects I have been involved in or watched, a “shippable product” only happens towards the end of the overall lifecycle, especially when sprints are short. The in-between releases are great demonstration releases, but not shippable. Perhaps it comes from my 80’s brainwashing that we only ship when we reach the agreed quality and business feature bar.
    - Willy-Peter Schaub, VS ALM Ranger, Microsoft

    Although you should have been testing and packaging your code all the way through your Sprint 1 development, preferably using an automated process, you still need to test and package with stable, unchanging code. This is where you do what at SSW we call a “Test Please”. This is first an internal test of the product to make sure it meets the needs of the customer, generally using a resource external to your Team. Then a “Test Please” is conducted with the Product Owner to make sure he is happy with the output. You can read about how to conduct a Test Please on our Rules to Successful Projects: Do you conduct an internal "test please" prior to releasing a version to a client?

    Figure: If you find a deviation from the expected result you fix it on the Release branch.

    If during your final testing or your “Test Please” you find there are issues or bugs, then you should fix them on the Release branch. If you can’t fix them within the time box of your Sprint, then you will need to create a Bug and put it onto the backlog for prioritization by the Product Owner. Make sure you leave plenty of time between your merge from the development branch to find and fix any problems that are uncovered. This process is commonly called Stabilization and should always be conducted once you have completed all of your User Stories and integrated all of your branches.

    Even once you have stabilized and released, you should not delete the Release branch as you would with the Sprint branch. It has a usefulness for servicing that may extend well beyond the limited life you expect of it.

    Note: Don't get forced by the business into adding features into a Release branch; that indicates the unspoken requirement is a product spin-off. In this case you can create a new Team Project and branch from the required Release branch to create a new Main branch for that product, and you create a whole new backlog to work from.

    Figure: When the Team decides it is happy with the product you can create an RTM branch.

    Once you have fixed all the bugs you can, added any you can’t to the Product Backlog, and your Team is happy with the result, you can create a Release. This consists of doing the final Build and packaging it up ready for your Sprint Review meeting. You would then create a read-only branch that represents the code you “shipped”. This is really an audit-trail branch that is optional, but good practice. You could use a Label, but Labels are not auditable, and if a dispute were raised by the customer you can produce a verifiable version of the source code for an independent party to check.
    Rare, I know, but you do not want to be at the wrong end of a legal battle. Like the Release branch, the RTM branch should never be deleted, or only deleted according to your company's legal policy, which in the UK is usually 7 years.

    Figure: If you have made any changes in the Release you will need to merge back up to Main in order to finalize the changes.

    Nothing is really ever done until it is in Main. The same rules apply when merging any fixes in the Release branch back into Main: you should do a reverse merge before a forward merge, again for the muscle memory more than necessity at this stage. Your Sprint is now nearly complete, and you can have a Sprint Review meeting knowing that you have made every effort and taken every precaution to protect your customer’s investment.

    Note: In order to really achieve protection for both you and your client you would add Automated Builds, Automated Tests, Automated Acceptance Tests, acceptance test tracking, Unit Tests, Load Tests, Web Tests and all the other good engineering practices that help produce reliable software.

    Figure: After the Sprint Planning meeting the process begins again.

    Where the Sprint Review and Retrospective meetings mark the end of the Sprint, the Sprint Planning meeting marks the beginning. After you have completed your Sprint Planning and you know what you are trying to achieve in Sprint 2, you can create your new branch to develop in.

    How do we handle a bug (or bugs) in production that can’t wait?

    Although in Scrum the only work done should be on the backlog, a little buffer should be added to the Sprint Planning for contingencies. One of these contingencies is a bug in the current release that can’t wait for the Sprint to finish. But how do you handle that? Willy-Peter Schaub asked an excellent question on the release activities:

    In reality Sprint 2 starts when Sprint 1 ends + weekend. Should we not cater for a possible parallelism between Sprint 2 and the release activities of Sprint 1? It would introduce FI’s from Main to Sprint 2, I guess. Your “Figure: Merging Sprint 2 back into Main.” covers what I tend to believe to be reality in most cases.
    - Willy-Peter Schaub, VS ALM Ranger, Microsoft

    I agree, and if you have a single Scrum Team then your resources are limited. The Scrum Team is responsible for packaging and release, so at least one run at stabilization, packaging and release should be included in the Sprint time box. If more are needed on the current production release during the Sprint 2 time box, then resource needs to be pulled from Sprint 2.

    The Product Owner and the Team have four choices (in order of disruption/cost):

    1. Backlog: Add the bug to the backlog and fix it in the next Sprint.
    2. Buffer Time: Use any buffer time included in the current Sprint to fix the bug quickly.
    3. Make Time: Remove a Story from the current Sprint that is of equal value to the time lost fixing the bug(s) and releasing. Note: the Team must agree that it can still meet the Sprint Goal.
    4. Cancel Sprint: Cancel the sprint and concentrate all resource on fixing the bug(s). Note: this can be very costly if the current sprint has already had a lot of work completed, as that work will be lost.

    The choice will depend on the complexity and severity of the bug(s), and both the Product Owner and the Team need to agree. In this case we will go with option #2 or #3, as the bugs are uncomplicated but severe.

    Figure: Real world issue where a bug needs fixed in the current release.
    If the bug(s) is urgent enough, then your only option is to fix it in place. You can edit the Release branch to find and fix the bug, hopefully creating a test so it can’t happen again. Follow the prior process and conduct an internal and customer “Test Please” before releasing. You can read about how to conduct a Test Please on our Rules to Successful Projects: Do you conduct an internal "test please" prior to releasing a version to a client?

    Figure: After you have fixed the bug you need to ship again.

    You then need to again create an RTM branch to hold the version of the code you released in escrow.

    Figure: Main is now out of sync with your Release.

    We now need to get these new changes back up into the Main branch. Do a reverse and then forward merge again to get the new code into Main. But what about the branch – are developers not working on Sprint 2? Does Sprint 2 now have changes that are not in Main, and does Main now have changes that are not in Sprint 2? Well, yes… and this is part of the hit you take when branching. But would this scenario even have been possible without branching?

    Figure: Getting the changes in Main into Sprint 2 is very important.

    The Team now needs to do a Forward Integration merge into their Sprint and resolve any conflicts that occur. Maybe the bug has already been fixed in Sprint 2; maybe the bug no longer exists! This needs to be identified and resolved by the developers before they continue, to avoid getting further out of sync with Main. Note: avoid the “Big Bang Merge” at all costs.

    Figure: Merging Sprint 2 back into Main, the Reverse Integration, and R0 terminates.

    Sprint 2 now merges (Reverse Integration) back into Main following the procedures we have already established.

    Figure: The logical conclusion.

    This then allows the creation of the next release. By now you should be getting the big picture, and hopefully you learned something useful from this post. I know I have enjoyed writing it, as I find these exploratory posts coupled with real-world experience really help harden my understanding. Branching is a tool; it is not a silver bullet. Don’t overuse it, and avoid “anti-patterns” where possible. Although the diagram above looks complicated, I hope showing you how it is formed simplifies it as much as possible.

    Technorati Tags: Branching, Scrum, VS ALM, TFS 2010, VS2010


  • Ancillary Objects: Separate Debug ELF Files For Solaris

    - by Ali Bahrami
    We introduced a new ELF object type in Solaris 11 Update 1 called the Ancillary Object. This posting describes ancillary objects, using material originally written during their development, for the PSARC case, and for the Solaris Linker and Libraries Manual.

    ELF objects contain allocable sections, which are mapped into memory at runtime, and non-allocable sections, which are present in the file for use by debuggers and observability tools, but which are not mapped or used at runtime. Typically, all of these sections exist within a single object file. Ancillary objects allow the non-allocable sections to instead go into a separate file.

    There are different reasons given for wanting such a feature. One can debate whether the added complexity is worth the benefit, and in most cases it is not. However, one important case stands out: customers with very large 32-bit objects who are not ready or able to make the transition to 64 bits. We have customers who build extremely large 32-bit objects. Historically, the debug sections in these objects have used the stabs format, which is limited but relatively compact. In recent years, the industry has transitioned to the powerful but verbose DWARF standard. In some cases, the size of these debug sections is large enough to push the total object file size past the fundamental 4GB limit for 32-bit ELF object files.

    The best, and ultimately only, solution to overly large objects is to transition to 64 bits. However, consider environments where:

    - Hundreds of users may be executing the code on large shared systems (32-bit code uses less memory and bus bandwidth, and on SPARC runs just as fast as 64-bit code otherwise).
    - The code is complex and finely tuned, and the original authors may no longer be available.
    - The code is critical production code that was expensive to qualify and bring online, and is otherwise serving its intended purpose without issue.

    Users in these risk-averse and/or high-scale categories have good reasons to push 32-bit objects to the limit before moving on. Ancillary objects offer these users a longer runway.

    Design

    The design of ancillary objects is intended to be simple, both to aid human understanding when examining elfdump output, and to lower the bar for debuggers such as dbx to support them.

    - The primary and ancillary objects have the same set of section headers, with the same names, in the same order (i.e. each section has the same index in both files).
    - A single added section of type SHT_SUNW_ANCILLARY is added to both objects, containing information that allows a debugger to identify and validate both files relative to each other. Given one of these files, the ancillary section allows you to identify the other.
    - Allocable sections go in the primary object, and non-allocable ones go into the ancillary object.
    - A small set of non-allocable sections, notably the symbol table, are copied into both objects.

    As noted above, most sections are only written to one of the two objects, but both objects have the same section header array. The section header in the file that does not contain the section data is tagged with the SHF_SUNW_ABSENT section header flag to indicate its placeholder status. Compiler writers and others who produce objects can set the SHF_SUNW_PRIMARY section header flag to mark non-allocable sections that should go to the primary object rather than the ancillary.

    If you don't request an ancillary object, the Solaris ELF format is unchanged. Users who don't use ancillary objects do not pay for the feature.
    This is important, because ancillary objects exist to serve a small subset of our users and must not complicate the common case. If you do request an ancillary object, the runtime behavior of the primary object will be the same as that of a normal object. There is no added runtime cost.

    The primary and ancillary object together represent a single logical object. This is facilitated by the use of a single set of section headers. One can easily imagine a tool that can merge a primary and ancillary object into a single file, or the reverse. (Note that although this is an interesting intellectual exercise, we don't actually supply such a tool because there's little practical benefit above and beyond using ld to create the files.) Among the benefits of this approach are:

    - There is no need for per-file symbol tables to reflect the contents of each file. The same symbol table that would be produced for a standard object can be used.
    - The section contents are identical in either case; there is no need to alter data to accommodate multiple files.
    - It is very easy for a debugger to adapt to these new files, and the processing involved can be encapsulated in input/output routines. Most of the existing debugger implementation applies without modification.
    - The limit of a 4GB 32-bit output object is now raised to 4GB of code plus 4GB of debug data. There is also the future possibility (not currently supported) of multiple ancillary objects, each of which could contain up to 4GB of additional debug data. It must be noted, however, that the 32-bit DWARF debug format is itself inherently 32-bit limited, as it uses 32-bit offsets between debug sections, so the ability to employ multiple ancillary object files may not turn out to be useful.

    Using Ancillary Objects (From the Solaris Linker and Libraries Guide)

    By default, objects contain both allocable and non-allocable sections. Allocable sections are the sections that contain executable code and the data needed by that code at runtime. Non-allocable sections contain supplemental information that is not required to execute an object at runtime. These sections support the operation of debuggers and other observability tools. The non-allocable sections in an object are not loaded into memory at runtime by the operating system, and so they have no impact on memory use or other aspects of runtime performance, no matter their size.

    For convenience, both allocable and non-allocable sections are normally maintained in the same file. However, there are situations in which it can be useful to separate these sections:

    - To reduce the size of objects in order to improve the speed at which they can be copied across wide area networks.
    - To support fine-grained debugging of highly optimized code, which requires considerable debug data. In modern systems, the debugging data can easily be larger than the code it describes. The size of a 32-bit object is limited to 4 Gbytes; in very large 32-bit objects, the debug data can cause this limit to be exceeded and prevent the creation of the object.
    - To limit the exposure of internal implementation details.

    Traditionally, objects have been stripped of non-allocable sections in order to address these issues. Stripping is effective, but destroys data that might be needed later. The Solaris link-editor can instead write non-allocable sections to an ancillary object. This feature is enabled with the -z ancillary command line option:

    $ ld ... -z ancillary[=outfile] ...
    By default, the ancillary file is given the same name as the primary output object, with a .anc file extension. However, a different name can be specified by supplying an outfile value to the -z ancillary option. When -z ancillary is specified, the link-editor performs the following actions:

    - All allocable sections are written to the primary object. In addition, all non-allocable sections containing one or more input sections that have the SHF_SUNW_PRIMARY section header flag set are written to the primary object.
    - All remaining non-allocable sections are written to the ancillary object.
    - The following non-allocable sections are written to both the primary object and ancillary object:

    .shstrtab – The section name string table.
    .symtab – The full non-dynamic symbol table.
    .symtab_shndx – The symbol table extended index section associated with .symtab.
    .strtab – The non-dynamic string table associated with .symtab.
    .SUNW_ancillary – Contains the information required to identify the primary and ancillary objects, and to identify the object being examined.

    The primary object and all ancillary objects contain the same array of section headers. Each section has the same section index in every file. Although the primary and ancillary objects all define the same section headers, the data for most sections will be written to a single file as described above. If the data for a section is not present in a given file, the SHF_SUNW_ABSENT section header flag is set, and the sh_size field is 0. This organization makes it possible to acquire a full list of section headers, a complete symbol table, and a complete list of the primary and ancillary objects from either the primary or an ancillary object.

    The following example illustrates the underlying implementation of ancillary objects. An ancillary object is created by adding the -z ancillary command line option to an otherwise normal compilation. The file utility shows that the result is an executable named a.out, and an associated ancillary object named a.out.anc.

    $ cat hello.c
    #include <stdio.h>

    int
    main(int argc, char **argv)
    {
            (void) printf("hello, world\n");
            return (0);
    }
    $ cc -g -zancillary hello.c
    $ file a.out a.out.anc
    a.out:     ELF 32-bit LSB executable 80386 Version 1 [FPU], dynamically linked, not stripped, ancillary object a.out.anc
    a.out.anc: ELF 32-bit LSB ancillary 80386 Version 1, primary object a.out
    $ ./a.out
    hello, world

    The resulting primary object is an ordinary executable that can be executed in the usual manner. It is no different at runtime than an executable built without the use of ancillary objects and then stripped of non-allocable content using the strip or mcs commands.

    As previously described, the primary object and ancillary objects contain the same section headers. To see how this works, it is helpful to use the elfdump utility to display these section headers and compare them. The following table shows the section header information for a selection of headers from the previous link-edit example.
    Index  Section Name     Type            Primary Flags     Ancillary Flags               Primary Size  Ancillary Size
    13     .text            PROGBITS        ALLOC EXECINSTR   ALLOC EXECINSTR SUNW_ABSENT   0x131         0
    20     .data            PROGBITS        WRITE ALLOC       WRITE ALLOC SUNW_ABSENT       0x4c          0
    21     .symtab          SYMTAB          0                 0                             0x450         0x450
    22     .strtab          STRTAB          STRINGS           STRINGS                       0x1ad         0x1ad
    24     .debug_info      PROGBITS        SUNW_ABSENT       0                             0             0x1a7
    28     .shstrtab        STRTAB          STRINGS           STRINGS                       0x118         0x118
    29     .SUNW_ancillary  SUNW_ancillary  0                 0                             0x30          0x30

    The data for most sections is only present in one of the two files, and absent from the other file. The SHF_SUNW_ABSENT section header flag is set when the data is absent. The data for allocable sections needed at runtime is found in the primary object. The data for non-allocable sections used for debugging but not needed at runtime is placed in the ancillary file. A small set of non-allocable sections are fully present in both files. These are the .SUNW_ancillary section used to relate the primary and ancillary objects together, the section name string table .shstrtab, as well as the symbol table .symtab and its associated string table .strtab.

    It is possible to strip the symbol table from the primary object. A debugger that encounters an object without a symbol table can use the .SUNW_ancillary section to locate the ancillary object, and access the symbols contained within. The primary object, and all associated ancillary objects, contain a .SUNW_ancillary section that allows all the objects to be identified and related together.

    $ elfdump -T SUNW_ancillary a.out a.out.anc

    a.out:
    Ancillary Section:  .SUNW_ancillary
         index  tag                  value
           [0]  ANC_SUNW_CHECKSUM   0x8724
           [1]  ANC_SUNW_MEMBER     0x1      a.out
           [2]  ANC_SUNW_CHECKSUM   0x8724
           [3]  ANC_SUNW_MEMBER     0x1a3    a.out.anc
           [4]  ANC_SUNW_CHECKSUM   0xfbe2
           [5]  ANC_SUNW_NULL       0

    a.out.anc:
    Ancillary Section:  .SUNW_ancillary
         index  tag                  value
           [0]  ANC_SUNW_CHECKSUM   0xfbe2
           [1]  ANC_SUNW_MEMBER     0x1      a.out
           [2]  ANC_SUNW_CHECKSUM   0x8724
           [3]  ANC_SUNW_MEMBER     0x1a3    a.out.anc
           [4]  ANC_SUNW_CHECKSUM   0xfbe2
           [5]  ANC_SUNW_NULL       0

    The ancillary sections for both objects contain the same number of elements, and are identical except for the first element. Each object, starting with the primary object, is introduced with a MEMBER element that gives the file name, followed by a CHECKSUM that identifies the object. In this example, the primary object is a.out with a checksum of 0x8724, and the ancillary object is a.out.anc with a checksum of 0xfbe2. The first element in a .SUNW_ancillary section, preceding the MEMBER element for the primary object, is always a CHECKSUM element, containing the checksum for the file being examined.

    The presence of a .SUNW_ancillary section in an object indicates that the object has associated ancillary objects. The names of the primary and all associated ancillary objects can be obtained from the ancillary section of any one of the files. It is possible to determine which file is being examined, out of the larger set of files, by comparing the first checksum value to the checksum of each member that follows.

    Debugger Access and Use of Ancillary Objects

    Debuggers and other observability tools must merge the information found in the primary and ancillary object files in order to build a complete view of the object. This is equivalent to processing the information from a single file. This merging is simplified by the primary object and ancillary objects containing the same section headers, and by the single symbol table. The following steps can be used by a debugger to assemble the information contained in these files:

    1. Starting with the primary object, or any of the ancillary objects, locate the .SUNW_ancillary section. The presence of this section identifies the object as part of an ancillary group, and its contents can be used to obtain a complete list of the files and to determine which of those files is the one currently being examined.
    2. Create a section header array in memory, using the section header array from the object being examined as an initial template.
    3. Open and read each file identified by the .SUNW_ancillary section in turn. For each file, fill in the in-memory section header array with the information for each section that does not have the SHF_SUNW_ABSENT flag set.

    The result will be a complete in-memory copy of the section headers, with pointers to the data for all sections. Once this information has been acquired, the debugger can proceed as it would in the single-file case to access and control the running program. A sketch of step 3 follows.
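    As an illustration of step 3, here is a minimal C sketch (my own, not from the manual) that walks one object with libelf and adopts the headers whose data is present. It assumes elf_version(EV_CURRENT) has been called at startup and that SHF_SUNW_ABSENT is available from <sys/elf.h>; all error handling is omitted:

    #include <sys/elf.h>
    #include <libelf.h>
    #include <gelf.h>
    #include <fcntl.h>
    #include <unistd.h>

    /*
     * Fill in the merged section header array with the headers whose
     * data is actually present in the file at 'path'.
     */
    static void
    fill_present_sections(const char *path, GElf_Shdr *merged, size_t nhdrs)
    {
            int fd = open(path, O_RDONLY);
            Elf *elf = elf_begin(fd, ELF_C_READ, NULL);
            Elf_Scn *scn = NULL;
            GElf_Shdr shdr;

            while ((scn = elf_nextscn(elf, scn)) != NULL) {
                    size_t ndx = elf_ndxscn(scn);

                    (void) gelf_getshdr(scn, &shdr);

                    /* Only adopt headers whose data lives in this file */
                    if (ndx < nhdrs && !(shdr.sh_flags & SHF_SUNW_ABSENT))
                            merged[ndx] = shdr;
            }

            (void) elf_end(elf);
            (void) close(fd);
    }

    Calling this for the primary object and for each ancillary object listed in .SUNW_ancillary leaves the merged array holding a complete view of the section headers.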
    Note - The ELF definition of ancillary objects provides for a single primary object, and an arbitrary number of ancillary objects. At this time, the Oracle Solaris link-editor only produces a single ancillary object containing all non-allocable sections. This may change in the future. Debuggers and other observability tools should be written to handle the general case of multiple ancillary objects.

    ELF Implementation Details (From the Solaris Linker and Libraries Guide)

    To implement ancillary objects, it was necessary to extend the ELF format to add a new object type (ET_SUNW_ANCILLARY), a new section type (SHT_SUNW_ANCILLARY), and 2 new section header flags (SHF_SUNW_ABSENT, SHF_SUNW_PRIMARY). In this section, I will detail these changes, in the form of diffs to the Solaris Linker and Libraries manual.

    Part IV ELF Application Binary Interface

    Chapter 13: Object File Format

    Object File Format

    Edit Note: This existing section at the beginning of the chapter describes the ELF header. There's a table of object file types, which now includes the new ET_SUNW_ANCILLARY type.

    e_type - Identifies the object file type, as listed in the following table.

    Name                 Value    Meaning
    ET_NONE              0        No file type
    ET_REL               1        Relocatable file
    ET_EXEC              2        Executable file
    ET_DYN               3        Shared object file
    ET_CORE              4        Core file
    ET_LOSUNW            0xfefe   Start operating system specific range
    ET_SUNW_ANCILLARY    0xfefe   Ancillary object file
    ET_HISUNW            0xfefd   End operating system specific range
    ET_LOPROC            0xff00   Start processor-specific range
    ET_HIPROC            0xffff   End processor-specific range

    Sections

    Edit Note: This overview section defines the section header structure, and provides a high level description of known sections. It was updated to define the new SHF_SUNW_ABSENT and SHF_SUNW_PRIMARY flags and the new SHT_SUNW_ANCILLARY section.

    ...

    sh_type - Categorizes the section's contents and semantics. Section types and their descriptions are listed in Table 13-5.

    sh_flags - Sections support 1-bit flags that describe miscellaneous attributes. Flag definitions are listed in Table 13-8.

    ...

    Table 13-5 ELF Section Types, sh_type

    Name                  Value
    . . .
    SHT_LOSUNW            0x6fffffee
    SHT_SUNW_ancillary    0x6fffffee
    . . .

    ...

    SHT_LOSUNW - SHT_HISUNW: Values in this inclusive range are reserved for Oracle Solaris OS semantics.

    SHT_SUNW_ANCILLARY: Present when a given object is part of a group of ancillary objects. Contains information required to identify all the files that make up the group. See Ancillary Section.

    ...

    Table 13-8 ELF Section Attribute Flags

    Name                  Value
    . . .
    SHF_MASKOS            0x0ff00000
    SHF_SUNW_NODISCARD    0x00100000
    SHF_SUNW_ABSENT       0x00200000
    SHF_SUNW_PRIMARY      0x00400000
    SHF_MASKPROC          0xf0000000
    . . .

    ...
    SHF_SUNW_ABSENT: Indicates that the data for this section is not present in this file. When ancillary objects are created, the primary object and any ancillary objects will all have the same section header array, to facilitate merging them to form a complete view of the object, and to allow them to use the same symbol tables. Each file contains a subset of the section data. The data for allocable sections is written to the primary object, while the data for non-allocable sections is written to an ancillary file. The SHF_SUNW_ABSENT flag is used to indicate that the data for the section is not present in the object being examined. When the SHF_SUNW_ABSENT flag is set, the sh_size field of the section header must be 0. An application encountering an SHF_SUNW_ABSENT section can choose to ignore the section, or to search for the section data within one of the related ancillary files.

    SHF_SUNW_PRIMARY: The default behavior when ancillary objects are created is to write all allocable sections to the primary object and all non-allocable sections to the ancillary objects. The SHF_SUNW_PRIMARY flag overrides this behavior. Any output section containing one or more input sections with the SHF_SUNW_PRIMARY flag set is written to the primary object without regard for its allocable status.

    ...

    Two members in the section header, sh_link and sh_info, hold special information, depending on section type.

    Table 13-9 ELF sh_link and sh_info Interpretation

    sh_type               sh_link                                                sh_info
    . . .
    SHT_SUNW_ANCILLARY    The section header index of the associated string      0
                          table.
    . . .

    Special Sections

    Edit Note: This section describes the sections used in Solaris ELF objects, using the types defined in the previous description of section types. It was updated to define the new .SUNW_ancillary (SHT_SUNW_ANCILLARY) section.

    Various sections hold program and control information. Sections in the following table are used by the system and have the indicated types and attributes.

    Table 13-10 ELF Special Sections

    Name               Type                  Attribute
    . . .
    .SUNW_ancillary    SHT_SUNW_ancillary    None
    . . .

    ...

    .SUNW_ancillary: Present when a given object is part of a group of ancillary objects. Contains information required to identify all the files that make up the group. See Ancillary Section for details.

    ...

    Ancillary Section

    Edit Note: This new section provides the format reference describing the layout of a .SUNW_ancillary section and the meaning of the various tags. Note that these sections use the same tag/value concept used for dynamic and capabilities sections, and will be familiar to anyone used to working with ELF.

    In addition to the primary output object, the Solaris link-editor can produce one or more ancillary objects. Ancillary objects contain non-allocable sections that would normally be written to the primary object. When ancillary objects are produced, the primary object and all of the associated ancillary objects contain a SHT_SUNW_ancillary section, containing information that identifies these related objects. Given any one object from such a group, the ancillary section provides the information needed to identify and interpret the others. This section contains an array of the following structures. See sys/elf.h.
    typedef struct {
            Elf32_Word      a_tag;
            union {
                    Elf32_Word      a_val;
                    Elf32_Addr      a_ptr;
            } a_un;
    } Elf32_Ancillary;

    typedef struct {
            Elf64_Xword     a_tag;
            union {
                    Elf64_Xword     a_val;
                    Elf64_Addr      a_ptr;
            } a_un;
    } Elf64_Ancillary;

    For each object with this type, a_tag controls the interpretation of a_un.

    a_val - These objects represent integer values with various interpretations.
    a_ptr - These objects represent file offsets or addresses.

    The following ancillary tags exist.

    Table 13-NEW1 ELF Ancillary Array Tags

    Name                 Value    a_un
    ANC_SUNW_NULL        0        Ignored
    ANC_SUNW_CHECKSUM    1        a_val
    ANC_SUNW_MEMBER      2        a_ptr

    ANC_SUNW_NULL: Marks the end of the ancillary section.

    ANC_SUNW_CHECKSUM: Provides the checksum for a file in the a_val element. When ANC_SUNW_CHECKSUM precedes the first instance of ANC_SUNW_MEMBER, it provides the checksum for the object from which the ancillary section is being read. When it follows an ANC_SUNW_MEMBER tag, it provides the checksum for that member.

    ANC_SUNW_MEMBER: Specifies an object name. The a_ptr element contains the string table offset of a null-terminated string that provides the file name.

    An ancillary section must always contain an ANC_SUNW_CHECKSUM before the first instance of ANC_SUNW_MEMBER, identifying the current object. Following that, there should be an ANC_SUNW_MEMBER for each object that makes up the complete set of objects. Each ANC_SUNW_MEMBER should be followed by an ANC_SUNW_CHECKSUM for that object. A typical ancillary section will therefore be structured as:

    Tag                  Meaning
    ANC_SUNW_CHECKSUM    Checksum of this object
    ANC_SUNW_MEMBER      Name of object #1
    ANC_SUNW_CHECKSUM    Checksum for object #1
    . . .
    ANC_SUNW_MEMBER      Name of object N
    ANC_SUNW_CHECKSUM    Checksum for object N
    ANC_SUNW_NULL

    An object can therefore identify itself by comparing the initial ANC_SUNW_CHECKSUM to each of the ones that follow, until it finds a match.

    Related Other Work

    The GNU developers have also encountered the need/desire to support separate debug information files, and use the solution detailed at http://sourceware.org/gdb/onlinedocs/gdb/Separate-Debug-Files.html. At the current time, the separate debug file is constructed by building the standard object first, and then copying the debug data out of it in a separate post-processing step. Hence, it is limited to a total of 4GB of code and debug data, just as a single object file would be. They are aware of this, and I have seen online comments indicating that they may add direct support for generating these separate files to their link-editor.

    It is worth noting that the GNU objcopy utility is available on Solaris, and that the Studio dbx debugger is able to use these GNU-style separate debug files even on Solaris. Although this is interesting in terms of giving Linux users a familiar environment on Solaris, the 4GB limit means it is not an answer to the problem of very large 32-bit objects. We have also encountered issues with objcopy not understanding Solaris-specific ELF sections when using this approach.

    The GNU community also has a current effort to adapt their DWARF debug sections in order to move them to separate files before passing the relocatable objects to the linker. The details of Project Fission can be found at http://gcc.gnu.org/wiki/DebugFission. The goal of this project appears to be to reduce the amount of data seen by the link-editor. The primary effort revolves around moving DWARF data to separate .dwo files so that the link-editor never encounters them.
The details of modifying the DWARF data to be usable in this form are involved — please see the above URL for details.

