Search Results

Search found 766 results on 31 pages for 'simplicity'.


  • Help with SVN+SSH permissions with CentOS/WHM setup

    - by Furiam
    Hi Folks, I'll try my best to explain how I'm trying to set up this system. Imagine a production server running WHM with various sites. We'll call these sites site1, site2, site3. With the WHM setup, each site has a user/group defined for it; we'll keep these users/groups called site1, site2 for simplicity. Updating these sites is accomplished using SVN, through a post-commit script that auto-updates them (with .svn blocked through the Apache configuration). There are two regular maintainers of these sites; we'll call them Joe and Bob. Joe and Bob both have command-line access to the server through their respective limited accounts.

    So I've done the easy bit: I managed to get SVN working with these maintainers so that when an SVN commit occurs, the changes are checked out and go live perfectly. Here's the caveat, and ultimately my problem: user permissions. Through my testing of this setup, I've only managed to get it working by giving whatever is being updated permissions of 777, so that Joe and Bob both have read and write access to the webfront directories for each of the sites.

    So, an example of how it's set up now: Joe and Bob both belong to a group called "dev". I have the master /svn folders set up with read and write access for this group, and it works great. The post-commit hook triggers, updates the site, and then sets 777 on each file within the webfront. I then changed this to try and factor in group permission updates instead of straight 777: each file in /home/site1/public_html initially gets a chmod of 664, and each folder 775, which looks a little something like this:

        drwxrwxr-x  .
        drwxrwxr-x  ..
        drwxrwxr-x  site1 site1  my_test_folder
        -rw-rw-r--  site1 site1  my_test_file

    So site1 is the owner and group owner of those files and folders. I then added site1 to Joe's and Bob's secondary groups so that the SVN update will correctly allow access to these files. Herein lies the problem. When Bob adds a file or folder to /home/site1, say bobs_file, it then looks like this:

        drwxrwxr-x  .
        drwxrwxr-x  ..
        drwxr-xr-x  Bob   dev    bobs_folder
        drwxrwxr-x  site1 site1  my_test_folder
        -rw-rw-r--  Bob   dev    bobs_file
        -rw-rw-r--  site1 site1  my_test_file

    How can I get it so that, with the set of user permissions Bob has available, the owner and group owner of that file are changed to reflect site1/site1? As Bob belongs to dev, I can set the permissions correctly with chmod, but chgrp is throwing back operation-not-permitted errors. Now, this was long-winded enough to give an overview of exactly what I'm trying to accomplish, just in case I'm going about this arse-over-tit and there's a far easier solution. Here are my goals:

    - 2 people able to update multiple site accounts, given the structure of WHM
    - Maintain the master user/group permissions of files and folders as the original site account, not the account of whoever did the update
    - I like the security of SVN+SSH over just SVN
    - Don't want to run all this as root

    I hope this made sense, and thanks in advance :)
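
    A pattern worth considering before fighting chgrp (a sketch, assuming the users and groups above): chgrp itself is a dead end here, because Unix only lets a file's owner change its group, and only to a group the owner belongs to, which is exactly why Bob's chgrp fails on files he doesn't own. Setting the setgid bit on the site directories makes new files inherit the directory's group instead of the creator's primary group, so no post-hoc chown/chgrp is needed at all:

        # One-time setup per site, run as root:
        chgrp -R site1 /home/site1/public_html
        chmod -R g+rwX /home/site1/public_html
        find /home/site1/public_html -type d -exec chmod g+s {} \;

    With Joe and Bob in the site1 group and a umask of 002, anything they (or the post-commit checkout) create under public_html comes out group-owned by site1 with group write. The owner will still be whoever wrote the file, but with the group handling shared access, that usually stops mattering, and nothing has to run as root.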

    Read the article

  • AppDomain dependencies across directories

    - by strager
    I am creating a plugin system and I am creating one AppDomain per plugin. Each plugin has its own directory with its main assemblies and references. The main assemblies will be loaded by my plugin loader, in addition to my interface assemblies (so the plugin can interact with the application).

    Creating the AppDomain:

        this.appDomain = AppDomain.CreateDomain("AppDomain", null, new AppDomainSetup {
            ApplicationBase = pluginPath,
            PrivateBinPath = pluginPath,
        });

    Loading the assemblies:

        this.appDomain.Load(myInterfaceAssembly.GetName(true));

        var assemblies = new List<Assembly>();
        foreach (var assemblyName in this.assemblyNames) {
            assemblies.Add(this.appDomain.Load(assemblyName));
        }

    The format of assemblyName is the assembly's filename without ".dll". The problem is that AppDomain.Load(assemblyName) throws an exception:

        Could not load file or assembly '[[assemblyName]], Version=1.0.0.0, Culture=neutral,
        PublicKeyToken=null' or one of its dependencies. The system cannot find the file specified.

    All of the dependencies of [[assemblyName]] are:

    1. inside the directory pluginPath,
    2. the myInterfaceAssembly which is already loaded, or
    3. in the GAC (e.g. mscorlib).

    Clearly I'm not doing something right. I have tried:

    1. Creating an object using this.appDomain.CreateInstanceAndUnwrap, inheriting from MarshalByRefObject, with a LoadAssembly method to load the assembly. I get an exception saying that the current assembly (containing the proxy class) could not be loaded (file not found, as above), even if I manually call this.appDomain.Load(Assembly.GetExecutingAssembly().GetName(true)).
    2. Attaching an AssemblyResolve handler to this.appDomain. I'm met with the same exception as in (1), and manually loading doesn't help.
    3. Recursively loading assemblies by loading their dependencies into this.appDomain first. This doesn't work, but I doubt my code is correct:

        private static void LoadAssemblyInto(AssemblyName assemblyName, AppDomain appDomain) {
            var assembly = Assembly.Load(assemblyName);
            foreach (var referenceName in assembly.GetReferencedAssemblies()) {
                if (!referenceName.FullName.StartsWith("MyProject")) {
                    continue;
                }
                var loadedAssemblies = appDomain.GetAssemblies();
                if (loadedAssemblies.Any((asm) => asm.FullName == referenceName.FullName)) {
                    continue;
                }
                LoadAssemblyInto(referenceName, appDomain);
            }
            appDomain.Load(assembly.GetName(true));
        }

    How can I load my plugin assembly, with its dependencies in that plugin's directory, while also loading some assemblies from the current directory? Note: the assemblies a plugin may (and probably will) reference are already loaded in the current domain. These can be shared across domains (performance benefit? simplicity?) if required.

    Fusion log:

        *** Assembly Binder Log Entry  (12/24/2010 @ 10:46:40 AM) ***
        The operation failed.
        Bind result: hr = 0x80070002. The system cannot find the file specified.
        Assembly manager loaded from: C:\Windows\Microsoft.NET\Framework\v4.0.30319\clr.dll
        Running under executable C:\MyProject\bin\Debug\MyProject.vshost.exe
        --- A detailed error log follows.
        LOG: Start binding of native image vshost32, Version=10.0.0.0, Culture=neutral, PublicKeyToken=b03f5f7f11d50a3a, processorArchitecture=x86.
        WRN: No matching native image found.
        LOG: Bind to native image assembly did not succeed. Use IL image.
        LOG: IL assembly loaded from C:\MyProject\bin\Debug\MyProject.vshost.exe
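
    A pattern that often sidesteps this (a sketch, not a drop-in fix; PluginLoader and its member names are mine): the probing failures in attempts (1) and (2) usually come from ApplicationBase pointing at pluginPath, so the child domain can't find the host/interface assemblies by probing. CreateInstanceFromAndUnwrap resolves the host assembly by explicit path instead of probing, and a resolver registered inside the child domain can then pick up plugin references from pluginPath:

        using System;
        using System.IO;
        using System.Reflection;

        // Lives in the host assembly; instantiated inside the plugin domain,
        // so the AssemblyResolve hook below runs over there.
        public class PluginLoader : MarshalByRefObject
        {
            private string pluginPath;

            public void Init(string pluginPath)
            {
                this.pluginPath = pluginPath;
                AppDomain.CurrentDomain.AssemblyResolve += (sender, args) =>
                {
                    string candidate = Path.Combine(this.pluginPath,
                        new AssemblyName(args.Name).Name + ".dll");
                    return File.Exists(candidate) ? Assembly.LoadFrom(candidate) : null;
                };
            }

            public void LoadPlugin(string assemblyName)
            {
                AppDomain.CurrentDomain.Load(assemblyName);
            }
        }

        // Host side: resolve the host assembly by explicit path rather than probing.
        var loader = (PluginLoader)this.appDomain.CreateInstanceFromAndUnwrap(
            typeof(PluginLoader).Assembly.Location, typeof(PluginLoader).FullName);
        loader.Init(pluginPath);
        loader.LoadPlugin("SomePluginAssembly");

    With this shape, ApplicationBase can stay at the host's base directory, which also avoids PrivateBinPath's restriction that it only probes subdirectories of ApplicationBase.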

    Read the article

  • Freezes (not crashes) with GCD, blocks and Core Data

    - by Lukasz
    I have recently rewritten my Core Data driven database controller to use Grand Central Dispatch to manage fetching and importing in the background. The controller can operate on two NSManagedObjectContexts:

    - NSManagedObjectContext *mainMoc, an instance variable for the main thread. This context is used only for quick access for the UI, by the main thread or by the dispatch_get_main_queue() global queue.
    - NSManagedObjectContext *bgMoc, for background tasks (importing, and fetching data for the NSFetchedResultsController backing tables). These background tasks are fired ONLY on a user-defined queue: dispatch_queue_t bgQueue (an instance variable in the database controller object).

    Fetching data for tables is done in the background so as not to block the UI when bigger or more complicated predicates are performed. Example fetching code for an NSFetchedResultsController in my table view controllers:

        - (void)fetchData {
            dispatch_async([CDdb db].bgQueue, ^{
                NSError *error = nil;
                [[self.fetchedResultsController fetchRequest] setPredicate:self.predicate];
                if (self.fetchedResultsController && ![self.fetchedResultsController performFetch:&error]) {
                    NSSLog(@"Unresolved error in fetchData %@", error);
                }
                if (!initial_fetch_attampted) initial_fetch_attampted = YES;
                fetching = NO;
                dispatch_async(dispatch_get_main_queue(), ^{
                    [self.table reloadData];
                    [self.table scrollRectToVisible:CGRectMake(0, 0, 100, 20) animated:YES];
                });
            });
        } // end of fetchData function

    bgMoc merges with mainMoc on save, using NSManagedObjectContextDidSaveNotification:

        - (void)bgMocDidSave:(NSNotification *)saveNotification {
            // CDdb - bgMoc did save - merging changes with mainMoc
            dispatch_async(dispatch_get_main_queue(), ^{
                [self.mainMoc mergeChangesFromContextDidSaveNotification:saveNotification];
                // Extra notification for some other, potentially interested clients
                [[NSNotificationCenter defaultCenter] postNotificationName:DATABASE_SAVED_WITH_CHANGES object:saveNotification];
            });
        }

        - (void)mainMocDidSave:(NSNotification *)saveNotification {
            // CDdb - mainMoc did save - merging changes with bgMoc
            dispatch_async(self.bgQueue, ^{
                [self.bgMoc mergeChangesFromContextDidSaveNotification:saveNotification];
            });
        }

    The NSFetchedResultsController delegate has only one method implemented (for simplicity):

        - (void)controllerDidChangeContent:(NSFetchedResultsController *)controller {
            dispatch_async(dispatch_get_main_queue(), ^{
                [self fetchData];
            });
        }

    This way I am trying to follow the Apple recommendation for Core Data: one NSManagedObjectContext per thread. I know this pattern is not completely clean, for at least two reasons:

    1. bgQueue does not necessarily fire the same thread after suspension, but since it is serial, that should not matter much (there are never two threads trying to access the bgMoc NSManagedObjectContext dedicated to it).
    2. Sometimes table view data source methods will ask the NSFetchedResultsController for info from bgMoc (since the fetch is done on bgQueue), like section count, fetched-objects-in-section count, etc.

    Even with these flaws, this approach works pretty well for 95% of the application's running time, until... AND HERE GOES MY QUESTION: sometimes, very randomly, the application freezes, but does not crash. It does not respond to any touch, and the only way to bring it back to life is to restart it completely (switching to the background and back does not help). No exception is thrown and nothing is printed to the console (I have breakpoints set for all exceptions in Xcode).

    I have tried to debug it using Instruments (Time Profiler especially) to see if there is something heavy going on on the main thread, but nothing shows up. I am aware that GCD and Core Data are the main suspects here, but I have no idea how to track / debug this. Let me point out that this also happens when I dispatch all the tasks to the queues asynchronously only (using dispatch_async everywhere). This makes me think it is not just a standard deadlock. Is there any possibility of, or any hints for, getting more info on what is going on? Some extra debug flags, Instruments magic tricks, build settings, etc.? Any suggestions on what could be the cause are very much appreciated, as well as (or) pointers on how to implement background fetching for NSFetchedResultsController and background importing in a better way.
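
    Two debugging moves that tend to pay off with exactly this symptom (suggestions only, since the trigger isn't visible in the posted code): first, when the app freezes, hit pause in Xcode's debugger and read the backtrace of every thread; a deadlock shows up as two queues each parked in a wait, and even a non-deadlock stall shows where the main thread is stuck. Second, tag bgQueue and assert queue identity in every accessor that touches bgMoc, which catches the table view data source calls from point (2) sneaking onto it from the main thread, the most likely culprit here. A sketch (the key name is arbitrary; dispatch_queue_set_specific needs iOS 5 / OS X 10.7 or later):

        static void *const kBgQueueKey = (void *)&kBgQueueKey;

        // When creating the queue:
        dispatch_queue_set_specific(self.bgQueue, kBgQueueKey, (void *)1, NULL);

        // Before every use of self.bgMoc (data source methods included):
        NSAssert(dispatch_get_specific(kBgQueueKey) != NULL,
                 @"bgMoc touched off bgQueue, a potential freeze source");

    Cross-queue access that only occasionally overlaps with a fetch would explain both the randomness and the absence of console output.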

    Read the article

  • WPF: How do you properly Bind a DependencyProperty to the GUI

    - by Robert Ross
    I have the following class (abbreviated for simplicity). The app is multi-threaded, so the setter and getter are a bit more complicated, but they should be OK.

        namespace News.RSS {
            public class FeedEngine : DependencyObject {
                public static readonly DependencyProperty _processing =
                    DependencyProperty.Register("Processing", typeof(bool), typeof(FeedEngine),
                        new FrameworkPropertyMetadata(true, FrameworkPropertyMetadataOptions.AffectsRender));

                public bool Processing {
                    get {
                        return (bool)this.Dispatcher.Invoke(DispatcherPriority.Normal,
                            (DispatcherOperationCallback)delegate { return GetValue(_processing); },
                            Processing);
                    }
                    set {
                        this.Dispatcher.BeginInvoke(DispatcherPriority.Normal,
                            (SendOrPostCallback)delegate { SetValue(_processing, value); },
                            value);
                    }
                }

                public void Poll() {
                    while (Running) {
                        Processing = true;
                        // Do my work to read the data feed from the remote source
                        Processing = false;
                        Thread.Sleep(PollRate);
                    }
                }
            }
        }

    Next I have my main form as the following:

        <Window x:Class="News.Main"
            xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
            xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
            xmlns:converter="clr-namespace:News.Converters"
            xmlns:local="clr-namespace:News.Lookup"
            xmlns:rss="clr-namespace:News.RSS"
            Title="News" Height="521" Width="927"
            Initialized="Window_Initialized" Closing="Window_Closing" >
          <Window.Resources>
            <ResourceDictionary>
              <converter:BooleanConverter x:Key="boolConverter" />
              <converter:ArithmeticConverter x:Key="arithConverter" />
              ...
            </ResourceDictionary>
          </Window.Resources>
          <DockPanel Name="dockPanel1" SnapsToDevicePixels="False" >
            <ToolBarPanel Height="37" Name="toolBarPanel" Orientation="Horizontal" DockPanel.Dock="Top" >
              <ToolBarPanel.Children>
                <Button DataContext="{DynamicResource FeedEngine}" HorizontalAlignment="Right"
                        Name="btnSearch" ToolTip="Search" Click="btnSearch_Click"
                        IsEnabled="{Binding Path=Processing, Converter={StaticResource boolConverter}}">
                  <Image Width="32" Height="32" Name="imgSearch"
                         Source="{Resx ResxName=News.Properties.Resources, Key=Search}" />
                </Button>
                ...
          </DockPanel>
        </Window>

    As you can see, I set the DataContext to FeedEngine and bind IsEnabled to Processing. I have also tested the boolConverter separately and it works (it just applies ! (not) to a bool). Here is my main window code-behind in case it helps to debug:

        namespace News {
            /// <summary>
            /// Interaction logic for Main.xaml
            /// </summary>
            public partial class Main : Window {
                public FeedEngine _engine;
                List<NewsItemControl> _newsItems = new List<NewsItemControl>();
                Thread _pollingThread;

                public Main() {
                    InitializeComponent();
                    this.Show();
                }

                private void Window_Initialized(object sender, EventArgs e) {
                    // Load current feed data.
                    _engine = new FeedEngine();
                    ThreadStart start = new ThreadStart(_engine.Poll);
                    _pollingThread = new Thread(start);
                    _pollingThread.Start();
                }
            }
        }

    Hope someone can see where I missed a step. Thanks.
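
    One thing stands out (hedged, since only part of the app is shown): {DynamicResource FeedEngine} looks up a resource with the key FeedEngine, but nothing shown ever puts the FeedEngine instance into a resource dictionary, so the button's DataContext resolves to null and the IsEnabled binding silently never fires. A minimal fix in the code-behind, reusing the names above:

        private void Window_Initialized(object sender, EventArgs e)
        {
            _engine = new FeedEngine();

            // Make the instance findable by {DynamicResource FeedEngine} ...
            this.Resources["FeedEngine"] = _engine;
            // ... or skip the resource lookup entirely:
            // btnSearch.DataContext = _engine;

            _pollingThread = new Thread(new ThreadStart(_engine.Poll));
            _pollingThread.Start();
        }

    Separately, a plain INotifyPropertyChanged view model is usually less painful here than DependencyObject: WPF marshals PropertyChanged notifications for scalar properties to the UI thread on its own, so Poll() could set Processing without any Dispatcher gymnastics.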

    Read the article

  • Databinding to ObservableCollection in a different UserControl - how to preserve current selections?

    - by Dave
    Scope of question expanded on 2010-03-25: I ended up figuring out my problem, but here's a new problem that came up as a result of solving the original question, because I want to be able to award the bounty to someone! Once I figured out my problem, I soon found out that when the ObservableCollection updates, the databound ComboBox has its contents repopulated, but most of the selections have been blanked out. I assume that in this case, MVVM is going to make it difficult for me to remember the last selected item. I have an idea, but it seems a little nasty. I'll award the bounty to whoever comes up with a nice solution for this!

    Question re-written on 2010-03-24: I have two UserControls, where one is a dialog that has a TabControl, and the other is one that appears within said TabControl. I'll just call them CandyDialog and CandyNameViewer for simplicity's sake. There's also a data management class called Tracker that manages information storage, which for all intents and purposes just exposes a public property that is an ObservableCollection. I display the CandyNameViewer in CandyDialog via code-behind, like this:

        private void CandyDialog_Loaded(object sender, RoutedEventArgs e) {
            _candyviewer = new CandyViewer();
            _candyviewer.DataContext = _tracker;
            candy_tab.Content = _candyviewer;
        }

    The CandyViewer's XAML looks like this (edited for Kaxaml):

        <Page xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
              xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml">
          <Page.Resources>
            <DataTemplate x:Key="CandyItemTemplate">
              <Grid>
                <Grid.ColumnDefinitions>
                  <ColumnDefinition Width="120"></ColumnDefinition>
                  <ColumnDefinition Width="150"></ColumnDefinition>
                </Grid.ColumnDefinitions>
                <TextBox Grid.Column="0" Text="{Binding CandyName}" Margin="3"></TextBox>
                <!-- just binding to DataContext ends up using InventoryItem as parent,
                     so we need to get to the UserControl -->
                <ComboBox Grid.Column="1" SelectedItem="{Binding SelectedCandy, Mode=TwoWay}"
                          ItemsSource="{Binding RelativeSource={RelativeSource FindAncestor, AncestorType={x:Type UserControl}}, Path=DataContext.CandyNames}"
                          Margin="3"></ComboBox>
              </Grid>
            </DataTemplate>
          </Page.Resources>
          <Grid>
            <ListBox DockPanel.Dock="Top" ItemsSource="{Binding CandyBoxContents, Mode=TwoWay}"
                     ItemTemplate="{StaticResource CandyItemTemplate}" />
          </Grid>
        </Page>

    Now everything works fine when the controls are loaded. As long as CandyNames is populated first, and then the consumer UserControl is displayed, all of the names are there. I obviously don't get any errors in the Output window or anything like that. The issue I have is that when the ObservableCollection is modified from the model, those changes are not reflected in the consumer UserControl! I've never had this problem before; all of my previous uses of ObservableCollection updated fine, although in those cases I wasn't databinding across assemblies. Although I am currently only adding and removing candy names to/from the ObservableCollection, at a later date I will likely also allow renaming from the model side.

    Is there something I did wrong? Is there a good way to actually debug this? Reed Copsey indicates here that inter-UserControl databinding is possible. Unfortunately, my favorite Bea Stollnitz article on WPF databinding debugging doesn't suggest anything that I could use for this particular problem.
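
    On the bounty question, a sketch (assuming CandyNames is an ObservableCollection<string> owned by Tracker): the selections get blanked because the items the ComboBoxes are showing get thrown away and regenerated. If the collection is updated in place, never replaced and never Clear()-ed and refilled, items that survive the update keep their identity, and WPF leaves the corresponding SelectedItem values alone:

        public void UpdateCandyNames(IEnumerable<string> latest)
        {
            var incoming = new HashSet<string>(latest);

            // Remove vanished names (iterate backwards so indices stay valid)...
            for (int i = CandyNames.Count - 1; i >= 0; i--)
                if (!incoming.Contains(CandyNames[i]))
                    CandyNames.RemoveAt(i);

            // ...then add genuinely new ones.
            foreach (var name in incoming)
                if (!CandyNames.Contains(name))
                    CandyNames.Add(name);
        }

    Renames can be modeled as remove-plus-add; the one nasty bit that remains is that a renamed selection has to be re-mapped to the new name in the view model, since WPF has no way to infer that mapping.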

    Read the article

  • css & horizontal scrolling

    - by zen
    One of my most favorite websites is that of the Oxford Hotel in Romania. I like the simplicity of the site and how it flows. I am trying to create a similar scrolling effect using jQuery, and I've been somewhat successful up to a point. My trouble is with CSS... I am not a wizard in that department. Anyway, my questions:

    1. How can I make sure that the ".box" class will be in the center of the page when the corresponding link is clicked? Right now it positions itself on the left.
    2. How can I tweak this code so that the user only sees the width of the screen, and not the browser's scrollbar or the rest of my ".box" divs? Refer to the Oxford link if you need to see an example of what I'd like to achieve.

    This is a portion of my current CSS:

        body {
            background: #f2f2f2;
            text-align: left;
            color: #666;
            font-size: 14px;
            font-family: georgia, 'times new roman', serif;
            margin: 0 auto;
            padding: 0;
        }
        #menu {
            background: #333333;
            position: fixed;
            top: 0px;
            left: 0;
            border: 1px solid #000;
            clear: both;
            float: left;
            font-family: helvetica, sans-serif;
            font-size: 18px;
            text-transform: uppercase;
            margin: 0;
            padding: 18px;
            z-index: 500;
            filter: alpha(opacity=75);
            opacity: .75;
        }
        #menu ul { list-style-type: none; margin: 0; padding: 0; }
        #menu ul li { list-style-type: none; color: #777; display: inline; margin: 0; padding: 0 10px 0 10px; }
        #menu ul li a { text-decoration: none; list-style-type: none; color: #777; display: inline; margin: 0; padding: 0; }
        #menu ul li a:hover { text-decoration: none; list-style-type: none; color: #fff; display: inline; margin: 0; padding: 0; }
        #container {
            position: absolute;
            top: 120px;
            width: 70000px;
            margin: 0;
            padding: 0;
        }
        .box {
            background: white;
            border: 3px dashed #f2f2f2;
            width: 600px;
            float: left;
            font-size: 10px;
            line-height: 15px;
            margin: 0;
            padding: 5px 30px 30px 30px;
        }
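
    A sketch of both pieces (assuming a wrapper div is added around #container; the #viewport id and the menu-link markup below aren't in the posted CSS): give the page a fixed "viewport" strip whose overflow is hidden. That answers question 2 by hiding the native scrollbar and everything outside the screen width; centering (question 1) is then arithmetic, scrolling the wrapper so the box's left edge sits at half the leftover width.

        /* CSS: the visible strip; hides the native scrollbar */
        #viewport { width: 100%; overflow: hidden; }

        // jQuery: center the clicked .box (e.g. menu links like href="#room1")
        $('#menu a').click(function (e) {
            e.preventDefault();
            var $box = $($(this).attr('href'));
            var left = $box.position().left
                     - ($('#viewport').width() - $box.outerWidth()) / 2;
            $('#viewport').animate({ scrollLeft: left }, 600);
        });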

    Read the article

  • image processing algorithm in MATLAB

    - by user261002
    I am trying to reconstruct an algorithm belonging to this paper: "Decomposition of biospeckle images in temporary spectral bands". Here is an explanation of the algorithm:

    We recorded a sequence of N successive speckle images with a sampling frequency fs. In this way it was possible to observe how a pixel evolves through the N images. That evolution can be treated as a time series and can be processed in the following way: each signal corresponding to the evolution of every pixel was used as input to a bank of filters. The intensity values were previously divided by their temporal mean value to minimize local differences in reflectivity or illumination of the object. The maximum frequency that can be adequately analyzed is determined by the sampling theorem and is half of the sampling frequency fs. The latter is set by the CCD camera, the size of the image, and the frame grabber. The bank of filters is outlined in Fig. 1. In our case, ten 5th-order Butterworth filters were used, but this number can be varied according to the required discrimination. The bank was implemented in a computer using MATLAB software. We chose the Butterworth filter because, in addition to its simplicity, it is maximally flat. Other filters, an infinite impulse response, or a finite impulse response could be used. By means of this bank of filters, ten corresponding signals of each filter of each temporal pixel evolution were obtained as output. Average energy Eb in each signal was then calculated as

        Eb = (1/N) * sum over n of pb(n)^2

    where pb(n) is the intensity of the filtered pixel in the nth image for filter b, divided by its mean value, and N is the total number of images. In this way, ten values of energy for each pixel were obtained, each of them belonging to one of the frequency bands in Fig. 1. With these values it is possible to build ten images of the active object, each one of which shows how much energy of time-varying speckle there is in a certain frequency band. False color assignment to the gray levels in the results would help in discrimination.

    And here is my MATLAB code based on that:

        clear all
        for i=0:39
            str = num2str(i);
            str1 = strcat(str,'.mat');
            load(str1);
            D{i+1}=A;
        end
        new_max = max(max(A));
        new_min = min(min(A));
        for i=20:180
            for j=20:140
                ts = [];
                for k=1:40
                    ts = [ts D{k}(i,j)];   %%% kth image, pixel i,j --- ts is the time series
                end
                ts = double(ts);
                temp = mean(ts);
                ts = ts-temp;
                ts = ts/temp;
                N = 5;   % filter order
                W = [0.00001 0.05;0.05 0.1;0.1 0.15;0.15 0.20;0.20 0.25;
                     0.25 0.30;0.30 0.35;0.35 0.40;0.40 0.45;0.45 0.50];
                N1 = 5;
                for ind = 1:10
                    Wn = W(ind,:);
                    [B,A] = butter(N1,Wn);
                    ts_f(ind,:) = filter(B,A,ts);
                end
                for ind=1:10
                    imag_test1{ind}(i,j) = sum((ts_f(ind,:)./mean(ts_f(ind,:))).^2);
                end
            end
        end
        for i=1:10
            temp_imag = imag_test1{i}(:,:);
            x=isnan(temp_imag);
            temp_imag(x)=0;
            temp_imag=medfilt2(temp_imag);
            t_max = max(max(temp_imag));
            t_min = min(min(temp_imag));
            temp_imag = (temp_imag-t_min).*(double(new_max-new_min)/double(t_max-t_min))+double(new_min);
            imag_test2{i}(:,:) = temp_imag;
        end
        for i=1:10
            A=imag_test2{i}(:,:);
            B=A/max(max(A));
            B=histeq(B);
            figure,imshow(B)
            colorbar
        end

    But I am not getting the same result as the paper. Does anybody have any idea why, or where I have gone wrong?

    Reference: link to the paper
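
    One concrete discrepancy to check (hedged, since only the excerpt quoted above is available, not the full paper): the normalization is applied twice. The time series is already divided by its temporal mean before filtering, so the filter outputs are the pb(n) of the paper; dividing again by mean(ts_f(ind,:)) inside the energy sum is not in the formula, and since a band-pass output has a near-zero mean, that division can inflate the values wildly (and produce the NaNs the code later patches over). A closer transcription of Eb:

        N_images = 40;                      % total number of frames, the paper's N
        for ind = 1:10
            % ts_f(ind,:) is already the normalized, band-pass-filtered p_b(n)
            imag_test1{ind}(i,j) = sum(ts_f(ind,:).^2) / N_images;
        end

    The pre-normalization ts = (ts - mean)/mean also differs slightly from the paper's plain division by the mean, but that only removes the DC term, which the band-pass filters discard anyway; the double division above is the suspicious part.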

    Read the article

  • OpenGL-ES Texture Mapping. Texture is reversed?

    - by Feet
    I am trying to get my head around texture mapping; I thought I had it the other day after asking this. However, I am having some trouble with my texture coordinates being flipped from what I am expecting.

    I am loading my texture like so:

        int[] textures = new int[1];
        gl.glGenTextures(1, textures, 0);
        _textureID = textures[0];
        gl.glBindTexture(GL10.GL_TEXTURE_2D, _textureID);

        Bitmap bmp = BitmapFactory.decodeResource(_context.getResources(), R.drawable.die_1);
        GLUtils.texImage2D(GL10.GL_TEXTURE_2D, 0, bmp, 0);
        gl.glTexParameterx(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MIN_FILTER, GL10.GL_LINEAR);
        gl.glTexParameterx(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MAG_FILTER, GL10.GL_LINEAR);
        bmp.recycle();

    My cube is this:

        float vertices[] = {
            // Front face
            -width, -height,  depth,  // 0
             width, -height,  depth,  // 1
             width,  height,  depth,  // 2
            -width,  height,  depth,  // 3
            // Back face
             width, -height, -depth,  // 4
            -width, -height, -depth,  // 5
            -width,  height, -depth,  // 6
             width,  height, -depth,  // 7
            // Left face
            -width, -height, -depth,  // 8
            -width, -height,  depth,  // 9
            -width,  height,  depth,  // 10
            -width,  height, -depth,  // 11
            // Right face
             width, -height,  depth,  // 12
             width, -height, -depth,  // 13
             width,  height, -depth,  // 14
             width,  height,  depth,  // 15
            // Top face
            -width,  height,  depth,  // 16
             width,  height,  depth,  // 17
             width,  height, -depth,  // 18
            -width,  height, -depth,  // 19
            // Bottom face
            -width, -height, -depth,  // 20
             width, -height, -depth,  // 21
             width, -height,  depth,  // 22
            -width, -height,  depth,  // 23
        };

        short indices[] = {
            // Front
            0,1,2, 0,2,3,
            // Back
            4,5,6, 4,6,7,
            // Left
            8,9,10, 8,10,11,
            // Right
            12,13,14, 12,14,15,
            // Top
            16,17,18, 16,18,19,
            // Bottom
            20,21,22, 20,22,23,
        };

        float texCoords[] = {
            // Front face textured only, for simplicity
            0.0f, 1.0f,
            1.0f, 1.0f,
            0.0f, 0.0f,
            1.0f, 0.0f };

    And it is drawn like so:

        // Counter-clockwise winding.
        gl.glFrontFace(GL10.GL_CCW);
        // Enable face culling.
        gl.glEnable(GL10.GL_CULL_FACE);
        // What faces to remove with the face culling.
        gl.glCullFace(GL10.GL_BACK);
        // Enable the vertices buffer for writing and to be used during rendering.
        gl.glEnableClientState(GL10.GL_VERTEX_ARRAY);
        // Specifies the location and data format of an array of vertex
        // coordinates to use when rendering.
        gl.glVertexPointer(3, GL10.GL_FLOAT, 0, verticesBuffer);
        if (normalsBuffer != null) {
            // Enable the normal buffer for writing and to be used during rendering.
            gl.glEnableClientState(GL10.GL_NORMAL_ARRAY);
            // Specifies the location and data format of an array of normals to use when rendering.
            gl.glNormalPointer(GL10.GL_FLOAT, 0, normalsBuffer);
        }
        // Set flat color
        gl.glColor4f(rgba[0], rgba[1], rgba[2], rgba[3]);
        // Smooth color
        if (colorBuffer != null) {
            // Enable the color array buffer to be used during rendering.
            gl.glEnableClientState(GL10.GL_COLOR_ARRAY);
            // Point out where the color buffer is.
            gl.glColorPointer(4, GL10.GL_FLOAT, 0, colorBuffer);
        }
        // Use textures?
        if (textureBuffer != null) {
            gl.glEnableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
            gl.glEnable(GL10.GL_TEXTURE_2D);
            gl.glTexCoordPointer(2, GL10.GL_FLOAT, 0, textureBuffer);
        }
        // Translation and rotation before drawing
        gl.glTranslatef(x, y, z);
        gl.glRotatef(rx, 1, 0, 0);
        gl.glRotatef(ry, 0, 1, 0);
        gl.glRotatef(rz, 0, 0, 1);

        gl.glDrawElements(GL10.GL_TRIANGLES, numOfIndices, GL10.GL_UNSIGNED_SHORT, indicesBuffer);

        // Disable the vertices buffer.
        gl.glDisableClientState(GL10.GL_VERTEX_ARRAY);
        gl.glDisableClientState(GL10.GL_COLOR_ARRAY);
        gl.glDisableClientState(GL10.GL_NORMAL_ARRAY);
        gl.glDisableClientState(GL10.GL_TEXTURE_COORD_ARRAY);
        // Disable face culling.
        gl.glDisable(GL10.GL_CULL_FACE);

    However, my front face looks like this (image in original post). I should also add that I haven't got any normals set; are textures affected by normals? If I instead use

        float texCoords[] = {
            // Front
            1.0f, 1.0f,
            0.0f, 1.0f,
            0.0f, 0.0f,
            1.0f, 0.0f };

    it seems as if the texture is being flipped, so the coordinates don't match up properly?
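
    This flip is expected behavior (only the code placement below is my assumption): Android's Bitmap stores row 0 at the top of the image, while OpenGL texture coordinates put (0,0) at the bottom-left, so anything uploaded via GLUtils.texImage2D appears vertically mirrored unless the bitmap or the coordinates compensate. Normals play no role in this; they only affect lighting. One option is to mirror the bitmap once at load time:

        // Flip vertically before upload so texture-space (0,0) matches
        // the bottom-left of the source image.
        android.graphics.Matrix flip = new android.graphics.Matrix();
        flip.postScale(1f, -1f);
        Bitmap source = BitmapFactory.decodeResource(_context.getResources(), R.drawable.die_1);
        Bitmap bmp = Bitmap.createBitmap(source, 0, 0,
                source.getWidth(), source.getHeight(), flip, true);
        source.recycle();
        GLUtils.texImage2D(GL10.GL_TEXTURE_2D, 0, bmp, 0);
        bmp.recycle();

    The equally valid alternative is to keep the bitmap as-is and swap the t coordinates, which is what the second texCoords block above does; either fix works, as long as it is applied consistently to every face.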

    Read the article

  • Accessing Oracle 6i and 9i/10g Databases using C#

    - by Mike M
    Hi all, I am making two build files using NAnt. The first aims to automatically compile Oracle 6i forms and reports, and the second aims to compile Oracle 9i/10g forms and reports. Within the NAnt task is a C# script which prompts the developer for database credentials (username, password, database) in order to compile the forms and reports. I want to then run these credentials against the relevant database to ensure the credentials entered are correct and, if they are not, prompt the user to re-enter them. My script currently looks as follows:

        class GetInput {
            public static void ScriptMain(Project project) {
                Console.Clear();
                Console.WriteLine("===================================================================");
                Console.WriteLine("Welcome to the Compile and Deploy Oracle Forms and Reports Facility");
                Console.WriteLine("===================================================================");
                Console.WriteLine();
                Console.WriteLine("Please enter the acronym of the project to work on from the following list:");
                Console.WriteLine();
                Console.WriteLine("--------");
                Console.WriteLine("- BCS");
                Console.WriteLine("- COPEN");
                Console.WriteLine("- FCDD");
                Console.WriteLine("--------");
                Console.WriteLine();
                Console.Write("Selection: ");
                project.Properties["project.type"] = Console.ReadLine();
                Console.WriteLine();
                Console.Write("Please enter username: ");
                string username = Console.ReadLine();
                project.Properties["username"] = username;
                string password = ReturnPassword();
                project.Properties["password"] = password;
                Console.WriteLine();
                Console.Write("Please enter database: ");
                string database = Console.ReadLine();
                project.Properties["database"] = database;
                Console.WriteLine();
                // Call method to verify user credentials
                Console.WriteLine();
                Console.WriteLine("Compiling files...");
            }

            public static string ReturnPassword() {
                Console.Write("Please enter password: ");
                string password = "";
                ConsoleKeyInfo nextKey = Console.ReadKey(true);
                while (nextKey.Key != ConsoleKey.Enter) {
                    if (nextKey.Key == ConsoleKey.Backspace) {
                        if (password.Length > 0) {
                            password = password.Substring(0, password.Length - 1);
                            Console.Write(nextKey.KeyChar);
                            Console.Write(" ");
                            Console.Write(nextKey.KeyChar);
                        }
                    } else {
                        password += nextKey.KeyChar;
                        Console.Write("*");
                    }
                    nextKey = Console.ReadKey(true);
                }
                return password;
            }
        }

    Having done a bit of research, I found that you can connect to Oracle databases using the System.Data.OracleClient namespace (link). However, as mentioned in the link, Microsoft is discontinuing support for this, so it is not a desirable solution. I have also found that Oracle provides its own classes for connecting to Oracle databases (link). However, this only seems to support connecting to Oracle 9 or newer databases (link), so it is not a feasible solution, as I also need to connect to Oracle 6i databases. I could achieve this by calling a .bat script from within the C# script, but I would much prefer to have a single build file for simplicity. Ideally, I would like to run a series of commands such as are contained in the following .bat script:

        rem -- Set Database SID --
        set ORACLE_SID=%DBSID%

        sqlplus -s %nameofuser%/%password%@%dbsid%

        set cmdsep on
        set cmdsep '"'; --"
        set term on
        set echo off
        set heading off
        select '========================================' || CHR(10) ||
               'Have checked and found both Password and ' || chr(10) ||
               'Database Identifier are valid, continuing ...' || CHR(10) ||
               '========================================' from dual;
        exit;

    This requires me to set the ORACLE_SID environment variable and then run sqlplus in silent mode (-s), followed by a series of SQL*Plus "set" commands, the actual select statement, and an exit command. Can I achieve this within a C# script without calling a .bat script, or am I forced to call one? Thanks in advance!
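
    Running SQL*Plus straight from the C# script, with no .bat, is feasible with System.Diagnostics.Process and a redirected stdin (a sketch; the -L flag stops SQL*Plus from re-prompting on a bad password so the process can't hang, though it's worth confirming the 6i-era client accepts it):

        using System.Diagnostics;

        ProcessStartInfo psi = new ProcessStartInfo("sqlplus",
            "-s -L " + username + "/" + password + "@" + database);
        psi.UseShellExecute = false;
        psi.RedirectStandardInput = true;
        psi.RedirectStandardOutput = true;
        psi.EnvironmentVariables["ORACLE_SID"] = database;

        using (Process p = Process.Start(psi))
        {
            p.StandardInput.WriteLine("WHENEVER SQLERROR EXIT FAILURE;");
            p.StandardInput.WriteLine("SELECT 1 FROM dual;");
            p.StandardInput.WriteLine("EXIT;");
            p.StandardInput.Close();
            string output = p.StandardOutput.ReadToEnd();
            p.WaitForExit();
            bool credentialsOk = (p.ExitCode == 0);
        }

    A failed logon exits non-zero before the SELECT ever runs, so ExitCode doubles as the credential check for both the 6i and 9i/10g environments, which sidesteps the version-specific driver problem entirely.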

    Read the article

  • Architecture for Qt SIGNAL with subclass-specific, templated argument type

    - by Barry Wark
    I am developing a scientific data acquisition application using Qt. Since I'm not a deep expert in Qt, I'd like some architecture advice from the community on the following problem.

    The application supports several hardware acquisition interfaces, but I would like to provide a common API on top of those interfaces. Each interface has a sample data type and units for its data, so I'm representing a vector of samples from each device as a std::vector of Boost.Units quantities (i.e. std::vector<boost::units::quantity<unit,sample_type> >). I'd like to use a multicast-style architecture, where each data source broadcasts newly received data to one or more interested parties. Qt's signal/slot mechanism is an obvious fit for this style. So, I'd like each data source to emit a signal like

        typedef std::vector<boost::units::quantity<unit,sample_type> > SampleVector;

        signals:
            void samplesAcquired(SampleVector sampleVector);

    for the unit and sample_type appropriate for that device. Since templated QObject subclasses aren't supported by the meta-object compiler, there doesn't seem to be a way to have a (templated) base class for all data sources which defines the samplesAcquired signal. In other words, the following won't work:

        template<T,U> // sample type and units
        class DataSource : public QObject {
            Q_OBJECT
            ...
        public:
            typedef std::vector<boost::units::quantity<U,T> > SampleVector;
        signals:
            void samplesAcquired(SampleVector sampleVector);
        };

    The best option I've been able to come up with is a two-layered approach:

        template<T,U> // sample type and units
        class IAcquiredSamples {
        public:
            typedef std::vector<boost::units::quantity<U,T> > SampleVector;
            virtual shared_ptr<SampleVector> acquiredData(TimeStamp ts, unsigned long nsamples);
        };

        class DataSource : public QObject {
            ...
        signals:
            void samplesAcquired(TimeStamp ts, unsigned long nsamples);
        };

    The samplesAcquired signal now gives a timestamp and number of samples for the acquisition, and clients must use the IAcquiredSamples API to retrieve those samples. Obviously data sources must subclass both DataSource and IAcquiredSamples. The disadvantage of this approach appears to be a loss of simplicity in the API... it would be much nicer if clients could get the acquired samples in the connected slot. Being able to use Qt's queued connections would also make threading issues easier, instead of having to manage them in the acquiredData method within each subclass.

    One other possibility is to use a QVariant argument. This necessarily puts the onus on each subclass to register its particular sample vector type with Q_REGISTER_METATYPE/qRegisterMetaType. Not really a big deal. Clients of the base class, however, will have no way of knowing what the QVariant's value type is unless a tag struct is also passed with the signal. I consider this solution at least as convoluted as the one above, as it forces clients of the abstract base class API to deal with some of the gnarlier aspects of the type system.

    So, is there a way to achieve the templated signal parameter? Is there a better architecture than the one I've proposed?
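
    A middle ground between the two ideas above (a sketch of the usual moc workaround; the class and method names are mine): keep the signal in a non-template QObject base so moc is happy, and put the template layer on top with no Q_OBJECT of its own. Carrying the payload as QVariant keeps queued connections working, at the cost of the metatype bookkeeping already mentioned:

        // moc sees only this class.
        class DataSource : public QObject {
            Q_OBJECT
        signals:
            void samplesAcquired(QVariant samples);
        };

        // No Q_OBJECT here, so a template is allowed; the signal is inherited.
        template <typename U, typename T>   // units, sample type
        class TypedDataSource : public DataSource {
        public:
            typedef std::vector<boost::units::quantity<U, T> > SampleVector;

            void publishSamples(const SampleVector &v) {
                // Needs Q_DECLARE_METATYPE on each concrete SampleVector and a
                // qRegisterMetaType call for queued (cross-thread) delivery.
                emit samplesAcquired(QVariant::fromValue(v));
            }
        };

        // A consumer that knows the concrete type unpacks it:
        //     SampleVector v = variant.value<SampleVector>();

    Typed consumers generally do know which device they subscribed to, so the tag-struct problem only bites fully generic observers; those can still log the timestamp/count variant of the signal without unpacking the payload.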

    Read the article

  • Using label tags in a validation summary error list?

    - by patridge
    I was thinking about making use of <label> tags in my validation error summary on a failed form submit, and I can't figure out if it is going to get me in trouble down the line. Can anyone think of a good reason to avoid this approach? Usability, functionality, design, or other issues are all helpful.

    I really like the idea of clicking a line item in the error list and being jumped to the offending input element, especially in a mobile HTML scenario where vertical orientation is more common and scrolling is a pain. So far the only problem I can find is that labels don't navigate for radio buttons or checkboxes without individual IDs (clicking a label for a single ID-tagged radio/checkbox element alters its selection). It doesn't make it any worse than having no label, though.

    Here is a stripped-down HTML test sample of this idea (CSS omitted for simplicity):

        <div class="validation-errors">
            <p>There was a problem saving your form.</p>
            <ul>
                <li><label for="select1">Select 1 is invalid.</label></li>
                <li><label for="text1">Text 1 is invalid.</label></li>
                <li><label for="textarea1">TextArea 1 is invalid.</label></li>
                <li><label for="radio1">Radio 1 is invalid.</label></li>
                <li><label for="checkbox1">Checkbox 1 is invalid.</label></li>
            </ul>
        </div>
        <form action="/somewhere">
            <fieldset><legend>Some Form</legend>
                <ol>
                    <li><label for="select1">select1</label>
                        <select id="select1" name="select1">
                            <option value="value1">Value 1</option>
                            <option value="value2">Value 2</option>
                            <option selected="selected" value="value3">Value 3</option>
                        </select></li>
                    <li><label for="text1">text1</label>
                        <input id="text1" name="text1" type="text" value="sometext" /></li>
                    <li><label for="textarea1">textarea1</label>
                        <textarea id="textarea1" name="textarea1" rows="5" cols="25">sometext</textarea></li>
                    <li><ul>
                        <li><label><input type="radio" name="radio1" value="value1" />Value 1</label></li>
                        <li><label><input type="radio" name="radio1" value="value2" checked="checked" />Value 2</label></li>
                        <li><label><input type="radio" name="radio1" value="value3" />Value 3</label></li>
                    </ul></li>
                    <li><ul>
                        <li><label><input type="checkbox" name="checkbox1" value="value1" checked="checked" />Value 1</label></li>
                        <li><label><input type="checkbox" name="checkbox1" value="value2" />Value 2</label></li>
                        <li><label><input type="checkbox" name="checkbox1" value="value3" checked="checked" />Value 3</label></li>
                    </ul></li>
                    <li><input type="submit" value="Save &amp; Continue" /></li>
                </ol>
            </fieldset>
        </form>

    The only thing I have added to make the click-capable behavior more obvious is a CSS rule for the labels:

        .validation-errors label {
            text-decoration: underline;
            cursor: pointer;
        }
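
    If the jump-to-field behavior matters for the radio/checkbox groups too, giving the first control in each group an id is enough to restore it (a sketch; the id values are arbitrary):

        <li><label for="radio1">
            <input type="radio" id="radio1" name="radio1" value="value1" />Value 1
        </label></li>
        <!-- the summary's <label for="radio1"> now targets this input -->

    The caveat noted above still applies, though: activating a label for a radio or checkbox doesn't just move focus, it changes the selection, so for those two input types a plain anchor (<a href="#radio1">) in the summary may actually be the safer element.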

    Read the article

  • Java multiple class compositing and boiler plate reduction

    - by h2g2java
    We all know why Java does/should not have multiple inheritance, so this is not questioning what has already been debated till the cows come home. This discusses what we would do when we wish to create a class that has the characteristics of two or more other classes. Probably, most of us would do something like this to "inherit" from three classes (for simplicity, I left out the constructors):

        class Car extends Vehicle {
            final public Transport transport;
            final public Machine machine;
        }

    Car directly inherits the methods and objects of the Vehicle class, but has to refer to transport and machine explicitly to reach the objects instantiated in Transport and Machine:

        Car car = new Car();
        car.drive();                   // from Vehicle
        car.transport.isAmphibious();  // from Transport
        car.machine.getCO2Footprint(); // from Machine

    I thought this was a good idea until I encountered frameworks that require setter and getter methods. For example, the XML

        <Car amphibious='false' footPrint='1000' model='Fordstatic999'/>

    would look for the methods setAmphibious(..), setFootPrint(..) and setModel(..). Therefore, I have to project the methods from the Transport and Machine classes:

        class Car extends Vehicle {
            final public Transport transport;
            final public Machine machine;
            public void setAmphibious(boolean b) { this.transport.setAmphibious(b); }
            public void setFootPrint(String fp) { this.machine.setFootPrint(fp); }
        }

    This is OK if there are just a few characteristics. Right now, I am trying to adapt all of SmartGWT into GWT UiBinder, especially those classes that are not GWT widgets, and there are lots of characteristics to project. Wouldn't it be nice if there existed some form of annotation framework like this:

        class Car extends Vehicle
            @projects {Transport @projects{Machine @projects Guzzler}} {
            /* No need to explicitly instantiate Transport, Machine or Guzzler */
            ....
        }

    where, in case common characteristic names exist, the characteristics of Machine would take precedence over Guzzler's, Transport's would take precedence over Machine's, and Vehicle's would take precedence over Transport's. The annotation framework would then instantiate Transport, Machine and Guzzler as hidden members of Car and break out the protected/public characteristics, in the precedence dictated by the @projects annotation sequence, into actual source code or byte-code. Preferably into byte-code. So if the setFootPrint method is found in both Machine and Guzzler, only Machine's would be projected.

    Questions:

    1. Don't you think this would be a good framework to have?
    2. Does such a framework already exist? Tell me where/what.
    3. Is there an Eclipse plugin that does it?
    4. Is there a proposal or plan anywhere that you know about for such an annotation framework?

    It would be wonderful, too, if the annotation/plugin framework let me specify that a boolean, an int, or whatever else needs to be converted from String, and did the conversion/parsing for me. Please advise, somebody. I hope the wording of my question was clear enough. Thanks.

    Edited: to avoid OO enthusiasts jumping to conclusions, I have renamed the title of this question.
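
    On question 2: something very close to the imagined @projects annotation already exists. Project Lombok's @Delegate generates the forwarding methods at compile time (a sketch; check the current Lombok docs for the exact package and conflict-resolution options):

        import lombok.experimental.Delegate;

        class Car extends Vehicle {
            // Lombok emits a Car method for every public method of Transport
            // and Machine, each forwarding to the field; overlapping method
            // names must be excluded by hand rather than by precedence.
            @Delegate private final Transport transport = new Transport();
            @Delegate private final Machine machine = new Machine();
        }

    It doesn't implement the precedence chain described above (duplicate signatures are a compile error until explicitly excluded), and String conversion remains the XML framework's job, but it removes exactly the setAmphibious/setFootPrint style boilerplate.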

    Read the article

  • Editing a Gridview row with drop-down lists gets too wide - how can I use popup panels instead?

    - by David
    I have a series of GridViews in a Tab Panel, databound to a generic List of business objects. The columns in the GridView are all similar to the following:

        <asp:TemplateField HeaderText="Company" SortExpression="Company.ShortName">
            <ItemTemplate>
                <asp:Label ID="lblCompany" runat="server" Text='<%# Bind("Company.ShortName") %>'></asp:Label>
            </ItemTemplate>
            <EditItemTemplate>
                <asp:DropDownList ID="ddlCompany" runat="server"></asp:DropDownList>
            </EditItemTemplate>
        </asp:TemplateField>

    The GridView generates the "Edit" link at the beginning of the row, and all the events fire OK. The problem is that the data is getting long. In display mode it's fine, because the GridView control is smart enough to break some text into multiple lines (in particular, Project, Title and Worker names can get pretty long). The problem comes in editing mode: drop-down lists DON'T break entries into multiple lines (for obvious reasons). Going into edit mode on a row in the GridView can make the GridView expand horizontally to twice the screen size (blowing through the width limits in the Master page and CSS, but that's only a related problem).

    What I need is something like the ModalPopup, but trying to tie it to an ID in an EditItemTemplate gives me errors when the page renders (because the 'ddlXXXX' doesn't exist at the time). In addition, I don't know how to dynamically populate the panel so that I can get a response from it (like the ID of the company they selected). I'm also trying to avoid JavaScript and would like this to be a 'pure' aspx/code-behind solution (for simplicity's sake, among others). All the examples I find are of modal popups with the panels pre-defined. Even if the popup panel were something like a list of checkboxes, it could be databound to the SortedList I have ready to go, with an OK/Cancel button combination to accept or ignore things. I'm just not sure what goes where. I'm open to suggestions. Thanks in advance.

    EDIT: The final solution looks as follows:

        <asp:TemplateField HeaderText="Company" SortExpression="Company.ShortName">
            <ItemTemplate>
                <asp:Label ID="lblCompany" runat="server" Text='<%# Bind("Company.ShortName") %>'></asp:Label>
            </ItemTemplate>
            <EditItemTemplate>
                <asp:LinkButton ID="lnkCompany" runat="server" Text='<%# Bind("Company.ShortName") %>'></asp:LinkButton>
                <asp:Panel ID="pnlCompany" runat="server" style="display:none">
                    <div>
                        <asp:DropDownList ID="ddlCompany" runat="server"></asp:DropDownList>
                        <br/>
                        <asp:ImageButton ID="btnOKCo" runat="server" ImageUrl="~/Images/greencheck.gif"
                            OnCommand="PopupButton_Command" CommandName="SelectCO" />
                        <asp:ImageButton ID="btnCxlCo" runat="server" ImageUrl="~/Images/RedX.gif" />
                    </div>
                </asp:Panel>
                <cc1:ModalPopupExtender ID="mpeCompany" runat="server" TargetControlID="lnkCompany"
                    PopupControlID="pnlCompany" BackgroundCssClass="modalBackground"
                    CancelControlID="btnCxlCo" DropShadow="true"
                    PopupDragHandleControlID="pnlCompany" />
            </EditItemTemplate>
        </asp:TemplateField>

    And in the code-behind, lstIDLabor is the generic List of data lines (of which Company is one of the properties, itself also a business object) that is bound to the GridView:

        Sub PopupButton_Command(ByVal sender As Object, ByVal e As CommandEventArgs)
            Dim intRow As Integer
            Dim intVal As Integer
            RestoreFromSessionVariables()
            Select Case e.CommandName
                Case "SelectCO"
                    intRow = grdIDCostLabor.EditIndex
                    Dim ddlCo As DropDownList = CType(grdIDCost.Rows(intRow).FindControl("ddlCompany"), DropDownList)
                    intVal = ddlCo.SelectedValue
                    lstIDLabor(intRow).CompanyID = intVal
                    lstIDLabor(intRow).Company = Company.Read(intVal)
                Case Else
                    '
            End Select
            MakeSessionVariables()
            BindGrids()
        End Sub

    Read the article

  • WCF Endpoints & Binding Configuration Issues

    - by CodeAbundance
    I am running into a very strange issue here, folks. For simplicity, I created a project for the sole purpose of testing the issue outside the framework of a larger application, and still encountered what is either a bug in WCF within Visual Studio 2010 or something related to my WCF newbie skill set :)

    Here is the issue: I have a WCF endpoint I created, running inside of an MVC3 project, with a method called "SimpleMethod". The method runs inside of a .svc file on the root of the application and returns a bool. Using the WCF Service Configuration Editor, I have added the endpoint to my Web.config along with a binding called "LargeImageBinding". Here is the service:

        [OperationContract]
        public bool SimpleMethod()
        {
            return true;
        }

    And the Web.config generated by the config tool:

        <system.serviceModel>
          <bindings>
            <wsHttpBinding>
              <binding name="LargeImageBinding" closeTimeout="00:10:00" />
            </wsHttpBinding>
          </bindings>
          <services>
            <service name="WCFEndpoints.ServiceTestOne">
              <endpoint address="/ServiceTestOne.svc" binding="wsHttpBinding"
                        bindingConfiguration="LargeImageBinding"
                        contract="WCFEndpoints.IServiceTestOne" />
            </service>
          </services>
          <behaviors>
            <serviceBehaviors>
              <behavior name="">
                <serviceMetadata httpGetEnabled="true" />
                <serviceDebug includeExceptionDetailInFaults="false" />
              </behavior>
            </serviceBehaviors>
          </behaviors>
          <serviceHostingEnvironment multipleSiteBindingsEnabled="true" />
        </system.serviceModel>

    The service renders fine, and you can see the endpoint when you navigate to http://localhost:57364/ServiceTestOne.svc. Now, the issue occurs when I create a separate project to consume the service. I add a service reference to a running instance of the above project, pointing it at http://localhost:57364/ServiceTestOne.svc.

    Here is the weird part. The service reference generates just fine, but in the Web.config the endpoint that is generated looks like this:

        <client>
          <endpoint address="http://localhost:57364/ServiceTestOne.svc/ServiceTestOne.svc"
                    binding="wsHttpBinding" bindingConfiguration="WSHttpBinding_IServiceTestOne"
                    contract="ServiceTestOne.IServiceTestOne" name="WSHttpBinding_IServiceTestOne">

    As you can see, it lists the "ServiceTestOne.svc" portion of the address twice! When I make a call to the service, I get the following error:

        The remote server returned an error: (404) Not Found.

    I tried removing the extra "/ServiceTestOne.svc" at the end of the endpoint address in the above config, and I get the same exact error. Now, what DOES work is if I go back to the WCF application and remove the custom endpoint and binding references in the Web.config (everything in the "services" and "bindings" tags), then go back to the consumer application, update the reference to the service, and make the call to SimpleMethod()... BOOM, works like a charm, and I get back a bool set to true.

    The thing is, I need to make custom binding configurations in order to allow access to the service beyond the defaults, and from what I can tell, any attempt to create custom bindings makes the endpoints seem to run fine but fail when an actual method call is made. Can anyone see any flaw in how I am putting this together? Thank you for your time - I have been running in circles with this for about a week!
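
    The doubled /ServiceTestOne.svc is the classic symptom of giving an IIS-hosted endpoint a non-empty address (hedged only because the rest of the solution isn't visible here): under IIS, the .svc file itself is the base address, and whatever is in the address attribute gets appended to it. Leaving the address empty keeps the custom binding and produces a sane client config:

        <services>
          <service name="WCFEndpoints.ServiceTestOne">
            <!-- address="" means: at the .svc base address itself -->
            <endpoint address="" binding="wsHttpBinding"
                      bindingConfiguration="LargeImageBinding"
                      contract="WCFEndpoints.IServiceTestOne" />
          </service>
        </services>

    After changing it, update the service reference in the consumer so the generated <client> endpoint points at .../ServiceTestOne.svc exactly once; why trimming the client address by hand still returned 404 is hard to say from here, but regenerating the reference avoids acting on stale metadata.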

    Read the article

  • "Warning reaching end of non-void fuction" with Multiple Sections that pull in multiple CustomCells

    - by Newbyman
    I'm getting "Reaching end of non-void function" warning, but don't have anything else to return for the compiler. How do I get around the warning?? I'm using customCells to display a table with 3 Sections. Each CustomCell is different, linked with another viewcontroller's tableview within the App, and is getting its data from its individual model. Everything works great in the Simulator and Devices, but I would like to get rid of the warning that I have. It is the only one I have, and it is pending me from uploading to App Store!! Within the - (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath {, I have used 3 separate If() statements-(i.e.==0,==1,==2) to control which customCells are displayed within each section throughout the tableview's cells. Each of the customCells were created in IB, pull there data from different models, and are used with other ViewController tableViews. At the end of the function, I don't have a "cell" or anything else to return, because I already specified which CustomCell to return within each of the If() statements. Because each of the CustomCells are referenced through the AppDelegate, I can not set up an empty cell at the start of the function and just set the empty cell equal to the desired CustomCell within each of the If() statements, as you can for text, labels, etc... My question is not a matter of fixing code within the If() statements, unless it is required. My Questions is in "How to remove the warning for reaching end of non-void function-(cellForRowAtIndexPath:) when I have already returned a value for every possible case: if(section == 0); if(section == 1); and if(section == 2). *Code-Reference: The actual file names were knocked down for simplicity, (section 0 refers to M's, section 1 refers to D's, and section 2 refers to B's). Here is a sample Layout of the code: //CELL FOR ROW AT INDEX PATH: -(UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath { //Reference to the AppDelegate: MyAppDelegate *appDelegate = (MyAppDelegate *)[[UIApplication sharedApplication] delegate]; //Section 0: if(indexPath.section == 0) { static NSString *CustomMCellIdentifier = @"CustomMCellIdentifier"; MCustomCell *mCell = (MCustomCell *)[tableView dequeueReusableCellWithIdentifier:CustomMCellIdentifier]; if (mCell == nil) { NSArray *nib = [[NSBundle mainBundle] loadNibNamed:@"MCustomCell" owner:tableView options:nil]; for (id oneObject in nib) if ([oneObject isKindOfClass:[MCustomCell class]]) mCell = (MCustomCell *)oneObject; } //Grab the Data for this item: M *mM = [appDelegate.mms objectAtIndex:indexPath.row]; //Set the Cell [mCell setM:mM]; mCell.selectionStyle =UITableViewCellSelectionStyleNone; mCell.root = tableView; return mCell; } //Section 1: if(indexPath.section == 1) { static NSString *CustomDCellIdentifier = @"CustomDCellIdentifier"; DCustomCell *dCell = (DCustomCell *)[tableView dequeueReusableCellWithIdentifier:CustomDaddyCellIdentifier]; if (dCell == nil) { NSArray *nib = [[NSBundle mainBundle] loadNibNamed:@"DCustomCell" owner:tableView options:nil]; for (id oneObject in nib) if ([oneObject isKindOfClass:[DCustomCell class]]) dCell = (DCustomCell *)oneObject; } //Grab the Data for this item: D *dD = [appDelegate.dds objectAtIndex:indexPath.row]; //Set the Cell [dCell setD:dD]; //Turns the Cell's SelectionStyle Blue Highlighting off, but still permits the code to run! 
dCell.selectionStyle =UITableViewCellSelectionStyleNone; dCell.root = tableView; return dCell; } //Section 2: if(indexPath.section == 2) { static NSString *CustomBCellIdentifier = @"CustomBCellIdentifier"; BCustomCell *bCell = (BCustomCell *)[tableView dequeueReusableCellWithIdentifier:CustomBCellIdentifier]; if (bCell == nil) { NSArray *nib = [[NSBundle mainBundle] loadNibNamed:@"BCustomCell" owner:tableView options:nil]; for (id oneObject in nib) if ([oneObject isKindOfClass:[BCustomCell class]]) bCell = (BCustomCell *)oneObject; } //Grab the Data for this item: B *bB = [appDelegate.bbs objectAtIndex:indexPath.row]; //Set the Cell [bCell setB:bB]; bCell.selectionStyle =UITableViewCellSelectionStyleNone; bCell.root = tableView; return bCell; } //** Getting Warning "Control reaches end of non-void function" //Not sure what else to "return ???" all CustomCells were specified within the If() statements above for their corresponding IndexPath.Sections. } Any Suggestions ??
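
    The compiler doesn't do range analysis across the three ifs; as far as it can tell, all three could be skipped and execution would fall off the end. The standard cure is an unconditional return (or trap) after the last if, for example:

        // After the three if (indexPath.section == n) { ... return ...; } blocks:
        NSAssert(NO, @"Unexpected section %d", (int)indexPath.section);
        return nil;   // never reached for sections 0-2; silences the warning

    Returning nil from cellForRowAtIndexPath: would crash UIKit if it ever executed, which is exactly why the NSAssert sits in front of it: any future fourth section fails loudly in debug builds instead of mysteriously.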

    Read the article

  • How do I detect a file write error in C?

    - by rich
    I have an embedded environment where a user might insert or remove a USB flash drive. I would like to know if the drive has been removed, or if there is some other problem, when I try to write to the drive. However, Linux just saves the information in its buffers and returns with no indicated error. The computer I'm using comes with a 2.4.26 kernel and libc 2.3.2. I'm mounting the drive this way:

        i = mount(MEMORY_DEV_PATH, MEMORY_MNT_PATH, "vfat", MS_SYNCHRONOUS, NULL);

    That works:

        50:/root # mount
        /dev/scsi/host0/bus0/target0/lun0/part1 on /mem type vfat (rw,sync)
        50:/root #

    Later, I try to copy a file to it:

        int ifile, ofile;
        ifile = open("/tmp/tmpmidi.mid", O_RDONLY);
        if (ifile < 0) {
            perror("open in");
            break;
        }
        ofile = open(current_file_name.c_str(), O_WRONLY | O_SYNC);
        if (ofile < 0) {
            perror("open out");
            break;
        }
        #define BUFSZ 256
        char buffer[BUFSZ];
        while (1) {
            i = read(ifile, buffer, BUFSZ);
            if (i < 0) {
                perror("read");
                break;
            }
            j = write(ofile, buffer, i);
            if (j < 0) {
                perror("write");
                break;
            }
            if (i != j) {
                perror("Sizes wrong");
                break;
            }
            if (i < BUFSZ) {
                printf("Copy is finished, I hope\n");
                close(ifile);
                close(ofile);
                break;
            }
        }

    If this snippet of code is executed with a write-protected USB memory, the result is "Copy is finished, I hope" amid a flurry of error messages from the kernel on the console. I believe the same thing would happen if I simply removed the USB drive (without unmounting it). I have also fiddled with devfs. I figured out how to get it to automatically mount the drive (with the REGISTER event), but it never seems to trigger the UNREGISTER when I pull out the memory. How can I determine in my program whether I have successfully created a file?

    Update 4 July: It was a silly oversight of me not to check the result from close(). Unfortunately, the file can be closed without error, so that didn't help. What about fsync()? That sounds like a good idea, but that didn't catch the error either. There might be some interesting information in /sys if I had such a thing; I believe that didn't get added until 2.6.x. The comment(s) about the quality of my flash drive are probably justified; it's one of the earlier ones. In fact, write-protect switches seem to be extremely rare these days. I think I have to use the overkill option: create a file, unmount & remount the drive, and check to see if the file is there. If that doesn't solve my problem, then something is really messed up! Note to myself: make sure the file you try to create isn't already there! By the way, this does happen to be a C++ program; you can tell by the .c_str(), which I had intended to edit out for simplicity.
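
    For reference, the textbook sequence is below (a sketch; whether a 2.4.26 VFAT stack actually propagates a device-level failure up through these calls is exactly what the update found wanting, so this may only narrow the window rather than close it):

        #include <stdio.h>
        #include <unistd.h>

        /* Returns 0 on success, -1 if any deferred write error surfaced. */
        static int flush_and_close(int fd)
        {
            int rc = 0;
            if (fsync(fd) < 0) {   /* force dirty pages out to the device */
                perror("fsync");
                rc = -1;
            }
            if (close(fd) < 0) {   /* last chance for a deferred error */
                perror("close");
                rc = -1;
            }
            return rc;
        }

    Given that both calls reportedly succeed on this kernel even with a write-protected stick, the unmount-remount-and-stat plan is a defensible fallback: umount() forces the FAT metadata out to the device, and if the file isn't there after remounting, the write failed.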

    Read the article

  • How to generalize a method call in Java (to avoid code duplication)

    - by dln385
    I have a process that needs to call a method and return its value. However, there are several different methods that this process may need to call, depending on the situation. If I could pass the method and its arguments to the process (like in Python), then this would be no problem. However, I don't know of any way to do this in Java. Here's a concrete example. (This example uses Apache ZooKeeper, but you don't need to know anything about ZooKeeper to understand the example.) The ZooKeeper object has several methods that will fail if the network goes down. In this case, I always want to retry the method. To make this easy, I made a "BetterZooKeeper" class that inherits the ZooKeeper class, and all of its methods automatically retry on failure. This is what the code looked like: public class BetterZooKeeper extends ZooKeeper { private void waitForReconnect() { // logic } @Override public Stat exists(String path, Watcher watcher) { while (true) { try { return super.exists(path, watcher); } catch (KeeperException e) { // We will retry. } waitForReconnect(); } } @Override public byte[] getData(String path, boolean watch, Stat stat) { while (true) { try { return super.getData(path, watch, stat); } catch (KeeperException e) { // We will retry. } waitForReconnect(); } } @Override public void delete(String path, int version) { while (true) { try { super.delete(path, version); return; } catch (KeeperException e) { // We will retry. } waitForReconnect(); } } } (In the actual program there is much more logic and many more methods that I took out of the example for simplicity.) We can see that I'm using the same retry logic, but the arguments, method call, and return type are all different for each of the methods. Here's what I did to eliminate the duplication of code: public class BetterZooKeeper extends ZooKeeper { private void waitForReconnect() { // logic } @Override public Stat exists(final String path, final Watcher watcher) { return new RetryableZooKeeperAction<Stat>() { @Override public Stat action() { return BetterZooKeeper.super.exists(path, watcher); } }.run(); } @Override public byte[] getData(final String path, final boolean watch, final Stat stat) { return new RetryableZooKeeperAction<byte[]>() { @Override public byte[] action() { return BetterZooKeeper.super.getData(path, watch, stat); } }.run(); } @Override public void delete(final String path, final int version) { new RetryableZooKeeperAction<Object>() { @Override public Object action() { BetterZooKeeper.super.delete(path, version); return null; } }.run(); return; } private abstract class RetryableZooKeeperAction<T> { public abstract T action(); public final T run() { while (true) { try { return action(); } catch (KeeperException e) { // We will retry. } waitForReconnect(); } } } } The RetryableZooKeeperAction is parameterized with the return type of the function. The run() method holds the retry logic, and the action() method is a placeholder for whichever ZooKeeper method needs to be run. Each of the public methods of BetterZooKeeper instantiates an anonymous inner class that is a subclass of the RetryableZooKeeperAction inner class, and it overrides the action() method. The local variables are (strangely enough) implicitly passed to the action() method, which is possible because they are final. In the end, this approach does work and it does eliminate the duplication of the retry logic. However, it has two major drawbacks: (1) it creates a new object every time a method is called, and (2) it's ugly and hardly readable. 
Also, I had to work around the delete method, which has a void return value. So, here is my question: is there a better way to do this in Java? This can't be a totally uncommon task, and other languages (like Python) make it easier by allowing methods to be passed around. I suspect there might be a way to do this through reflection, but I haven't been able to wrap my head around it.

    Read the article

  • How to select table column names in a view and pass to controller in rails?

    - by zachd1_618
So I am new to Rails, and OO programming in general. I have some grasp of the MVC architecture. My goal is to make a (nearly) completely dynamic plug-and-play plotting web server. I am fairly confused with params, forms, and select helpers. What I want to do is use Rails drop downs to basically pass parameters as strings to my controller, which will use the params to select certain column data from my database and plot it dynamically. I have the latter part of the task working, but I can't seem to pass values from my view to controller. For simplicity's sake, say my database schema looks like this:
--------------Plot---------------
|____x____|____y1____|____y2____|
|    1    |    1     |    1     |
|    2    |    2     |    4     |
|    3    |    3     |    9     |
|    4    |    4     |    16    |
|    5    |    5     |    25    |
... and in my Model, I have dynamic selector scopes that will let me select just certain columns of data: in Plot.rb class Plot < ActiveRecord::Base scope :select_var, lambda {|varname| select(varname)} scope :between_x, lambda {|x1,x2| where("x BETWEEN ? and ?","#{x1}","#{x2}")} So this way, I can call: irb>>@p1 = Plot.select_var(['x','y1']).between_x(1,3) and get in return a class where @p1.x and @p1.y1 are my only attributes, only for values from x=1 to x=3, which I dynamically plot. I want to start off in a view (plot/index), where I can dynamically select which variable names (table column names) and which rows from the database to fetch and plot. The problem is, most select helpers don't seem to work with columns in the database, only rows. So to select columns, I first get an array of column names that exist in my database with a function I wrote. Plots Controller def index d=Plot.first @tags = d.list_vars end So @tags = ['x','y1','y2'] Then in my plot/index.html.erb I try to use a drop down to select which variables I send back to the controller. index.html.erb <%= select_tag( :variable, options_for_select(@plots.first.list_vars,:name,:multiple=>:true) )%> <%= button_to 'Plot now!', :controller =>"plots/plot_vars", :variable => params[:variable]%> Finally, in the controller again Plots controller ... def plot_vars @plot_data=Plot.select_vars([params[:variable]]) end The problem is every time I try this (or one of a hundred variations thereof), the params[:variable] is nil. How can I use a drop down to pass a parameter with string variable names to the controller? Sorry it's so long; I have been struggling with this for about a month now. :-( I think my biggest problem is that this setup doesn't really match the Rails architecture. I don't have "users" and "articles" as individual entities. I really have a data structure, not a data object. Trying to work with the structure in terms of data-object speak is not necessarily the easiest thing to do, I think. For background: My actual database has about 250 columns and a couple million rows, and they get changed and modified from time to time. I know I can make the database smarter, but it's not worth it on my end. I work at a scientific institute where there are a ton of projects with databases just like this. Each one has a web developer that spends months setting up a web interface and their own janky plotting setups. I want to make this completely dynamic, as a plug-and-play solution, so all you have to do is specify your database connection, and this Rails setup will automatically show and plot whichever data you want. I am more of a sequential programmer and number cruncher, as are many people here. 
I think this project could be very helpful in the end, but it's difficult for me to figure out right now.

    Read the article

  • Can I execute "variable statements" within a function, without defines?

    - by René Nyffenegger
I am facing a problem that I cannot see how to solve without #defines or incurring a performance impact, although I am sure that someone can point me to a solution. I have an algorithm that sort of produces a (large) series of values. For simplicity's sake, in the following I pretend it's a for loop in a for loop, although in my code it's more complex than that. In the core of the loop I need to do calculations with the values being produced. Although the algorithm for the values stays the same, the calculations vary. So basically, what I have is: void normal() { // "Algorithm" producing numbers (x and y): for (int x=0 ; x<1000 ; x++) { for (int y=0 ; y<1000 ; y++) { // Calculation with numbers being produced: if ( x+y == 800 && y > 790) { std::cout << x << ", " << y << std::endl; } // end of calculation }} } So, the only part I need to change is if ( x+y == 800 && y > 790) { std::cout << x << ", " << y << std::endl; } So, in order to solve that, I could construct an abstract base class: class inner_0 { public: virtual void call(int x, int y) = 0; }; and derive a "callable" class from it: class inner : public inner_0 { public: virtual void call(int x, int y) { if ( x+y == 800 && y > 790) { std::cout << x << ", " << y << std::endl; } } }; I can then pass an instance of the class to the "algorithm" (by reference to the base class, so the virtual call dispatches correctly) like so: void O(inner_0& i) { for (int x=0 ; x<1000 ; x++) { for (int y=0 ; y<1000 ; y++) { i.call(x,y); }} } // somewhere else.... inner I; O(I); In my case, I incur a performance hit because there is an indirect call via the virtual function table. So I was thinking about a way around it. It's possible with two #defines: #define OUTER \ for (int x=0 ; x<1000 ; x++) { \ for (int y=0 ; y<1000 ; y++) { \ INNER \ }} // later... #define INNER \ if (x + y == 800 && y > 790) \ std::cout << x << ", " << y << std::endl; OUTER While this certainly works, I am not 100% happy with it because I don't necessarily like #defines. So, my question: is there a better way to achieve what I want?

    Read the article

  • Inheritance Mapping Strategies with Entity Framework Code First CTP5 Part 1: Table per Hierarchy (TPH)

    - by mortezam
A simple strategy for mapping classes to database tables might be “one table for every entity persistent class.” This approach sounds simple enough and, indeed, works well until we encounter inheritance. Inheritance is such a visible structural mismatch between the object-oriented and relational worlds because object-oriented systems model both “is a” and “has a” relationships. SQL-based models provide only "has a" relationships between entities; SQL database management systems don’t support type inheritance—and even when it’s available, it’s usually proprietary or incomplete. There are three different approaches to representing an inheritance hierarchy: Table per Hierarchy (TPH): Enable polymorphism by denormalizing the SQL schema, and utilize a type discriminator column that holds type information. Table per Type (TPT): Represent "is a" (inheritance) relationships as "has a" (foreign key) relationships. Table per Concrete class (TPC): Discard polymorphism and inheritance relationships completely from the SQL schema. I will explain each of these strategies in a series of posts, and this one is dedicated to TPH. In this series we'll dig deeply into each of these strategies and will learn about "why" to choose them as well as "how" to implement them. Hopefully it will give you a better idea about which strategy to choose in a particular scenario. Inheritance Mapping with Entity Framework Code First All of the inheritance mapping strategies that we discuss in this series will be implemented with EF Code First CTP5. The CTP5 build of the new EF Code First library was released by the ADO.NET team earlier this month. EF Code First enables a pretty powerful code-centric development workflow for working with data. I’m a big fan of the EF Code First approach, and I’m pretty excited about the productivity and power that it brings. When it comes to inheritance mapping, not only does Code First fully support all the strategies, it also gives you ultimate flexibility to work with domain models that involve inheritance. The fluent API for inheritance mapping in CTP5 has been improved a lot and is now more intuitive and concise compared to CTP4. A Note For Those Who Follow Other Entity Framework Approaches If you are following EF's "Database First" or "Model First" approaches, I still recommend reading this series: although the implementation is Code First specific, the explanations around each of the strategies apply equally to all approaches, be it Code First or others. A Note For Those Who are New to Entity Framework and Code First If you choose to learn EF you've chosen well. If you choose to learn EF with Code First you've done even better. To get started, you can find a great walkthrough by Scott Guthrie here and another one by the ADO.NET team here. In this post, I assume you have already set up your machine to do Code First development and also that you are familiar with Code First fundamentals and basic concepts. You might also want to check out my other posts on EF Code First like Complex Types and Shared Primary Key Associations. A Top Down Development Scenario These posts take a top-down approach; they assume that you’re starting with a domain model and trying to derive a new SQL schema. Therefore, we start with an existing domain model, implement it in C# and then let Code First create the database schema for us. However, the mapping strategies described are just as relevant if you’re working bottom up, starting with existing database tables. 
I’ll show some tricks along the way that help you deal with imperfect table layouts. Let’s start with the mapping of entity inheritance. The Domain Model In our domain model, we have a BillingDetail base class which is abstract (note the italic font on the UML class diagram below). We allow various billing types and represent them as subclasses of the BillingDetail class. For now, we support CreditCard and BankAccount: Implement the Object Model with Code First As always, we start with the POCO classes. Note that in our DbContext, I only define one DbSet, for the base class BillingDetail. Code First will find the other classes in the hierarchy based on the Reachability Convention. public abstract class BillingDetail  {     public int BillingDetailId { get; set; }     public string Owner { get; set; }             public string Number { get; set; } } public class BankAccount : BillingDetail {     public string BankName { get; set; }     public string Swift { get; set; } } public class CreditCard : BillingDetail {     public int CardType { get; set; }                     public string ExpiryMonth { get; set; }     public string ExpiryYear { get; set; } } public class InheritanceMappingContext : DbContext {     public DbSet<BillingDetail> BillingDetails { get; set; } } This object model is all that is needed to enable inheritance with Code First. If you put this in your application you would be able to immediately start working with the database and do CRUD operations. Before going into details about how EF Code First maps this object model to the database, we need to learn about one of the core concepts of inheritance mapping: polymorphic and non-polymorphic queries. Polymorphic Queries LINQ to Entities and EntitySQL, as object-oriented query languages, both support polymorphic queries—that is, queries for instances of a class and all instances of its subclasses. For example, consider the following query: IQueryable<BillingDetail> linqQuery = from b in context.BillingDetails select b; List<BillingDetail> billingDetails = linqQuery.ToList(); Or the same query in EntitySQL: string eSqlQuery = @"SELECT VALUE b FROM BillingDetails AS b"; ObjectQuery<BillingDetail> objectQuery = ((IObjectContextAdapter)context).ObjectContext                                                                          .CreateQuery<BillingDetail>(eSqlQuery); List<BillingDetail> billingDetails = objectQuery.ToList(); linqQuery and eSqlQuery are both polymorphic and return a list of objects of the type BillingDetail, which is an abstract class, but the actual concrete objects in the list are of the subtypes of BillingDetail: CreditCard and BankAccount. Non-polymorphic Queries All LINQ to Entities and EntitySQL queries are polymorphic: they return not only instances of the specific entity class to which they refer, but all subclasses of that class as well. Non-polymorphic queries, on the other hand, are queries whose polymorphism is restricted and which only return instances of a particular subclass. In LINQ to Entities, this can be specified by using the OfType<T>() method. 
For example, the following query returns only instances of BankAccount: IQueryable<BankAccount> query = from b in context.BillingDetails.OfType<BankAccount>() select b; EntitySQL has an OFTYPE operator that does the same thing: string eSqlQuery = @"SELECT VALUE b FROM OFTYPE(BillingDetails, Model.BankAccount) AS b"; In fact, the above query with the OFTYPE operator is a short form of the following query expression that uses the TREAT and IS OF operators: string eSqlQuery = @"SELECT VALUE TREAT(b as Model.BankAccount)                       FROM BillingDetails AS b                       WHERE b IS OF(Model.BankAccount)"; (Note that in the above query, Model.BankAccount is the fully qualified name for the BankAccount class. You need to replace "Model" with your own namespace name.) Table per Class Hierarchy (TPH) An entire class hierarchy can be mapped to a single table. This table includes columns for all properties of all classes in the hierarchy. The concrete subclass represented by a particular row is identified by the value of a type discriminator column. You don’t have to do anything special in Code First to enable TPH; it's the default inheritance mapping strategy. This mapping strategy is a winner in terms of both performance and simplicity. It’s the best-performing way to represent polymorphism—both polymorphic and nonpolymorphic queries perform well—and it’s even easy to implement by hand. Ad-hoc reporting is possible without complex joins or unions. Schema evolution is straightforward. Discriminator Column As you can see in the DB schema above, Code First has to add a special column to distinguish between persistent classes: the discriminator. This isn’t a property of the persistent class in our object model; it’s used internally by EF Code First. By default, the column name is "Discriminator", and its type is string. The values default to the persistent class names—in this case, “BankAccount” or “CreditCard”. EF Code First automatically sets and retrieves the discriminator values. TPH Requires Properties in Subclasses to be Nullable in the Database TPH has one major problem: columns for properties declared by subclasses will be nullable in the database. For example, Code First created an (INT, NULL) column to map the CardType property in the CreditCard class. However, in a typical mapping scenario, Code First always creates an (INT, NOT NULL) column in the database for an int property on a persistent class. But in this case, since a BankAccount instance won’t have a CardType property, the CardType field must be NULL for that row, so Code First creates an (INT, NULL) column instead. If your subclasses each define several non-nullable properties, the loss of NOT NULL constraints may be a serious problem from the point of view of data integrity. TPH Violates the Third Normal Form Another important issue is normalization. We’ve created functional dependencies between nonkey columns, violating the third normal form. Basically, the value of the Discriminator column determines the corresponding values of the columns that belong to the subclasses (e.g. BankName), but Discriminator is not part of the primary key for the table. As always, denormalization for performance can be misleading, because it sacrifices long-term stability, maintainability, and the integrity of data for immediate gains that may also be achieved by proper optimization of the SQL execution plans (in other words, ask your DBA). 
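Before looking at the generated SQL, here is a minimal round-trip sketch showing TPH in action with the classes defined above. It is only a sketch: the sample values are invented, the usual using System; and using System.Linq; directives are assumed, and the database is whatever Code First's conventions create for InheritanceMappingContext.
using (var context = new InheritanceMappingContext())
{
    // Both rows land in the single BillingDetails table; Code First fills
    // the Discriminator column ("BankAccount" / "CreditCard") automatically.
    context.BillingDetails.Add(new BankAccount { Owner = "Jane Doe", Number = "1234", BankName = "Contoso Bank", Swift = "CTSOUS33" });
    context.BillingDetails.Add(new CreditCard { Owner = "Jane Doe", Number = "5678", CardType = 1, ExpiryMonth = "12", ExpiryYear = "2015" });
    context.SaveChanges();

    // Polymorphic query: materializes BankAccount and CreditCard instances.
    foreach (var detail in context.BillingDetails)
        Console.WriteLine("{0} owned by {1}", detail.GetType().Name, detail.Owner);

    // Non-polymorphic query: restricted to a single subclass.
    var accounts = context.BillingDetails.OfType<BankAccount>().ToList();
}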
Generated SQL Query Let's take a look at the SQL statements that EF Code First sends to the database when we write queries in LINQ to Entities or EntitySQL. For example, the polymorphic query for BillingDetails that you saw generates the following SQL statement: SELECT  [Extent1].[Discriminator] AS [Discriminator],  [Extent1].[BillingDetailId] AS [BillingDetailId],  [Extent1].[Owner] AS [Owner],  [Extent1].[Number] AS [Number],  [Extent1].[BankName] AS [BankName],  [Extent1].[Swift] AS [Swift],  [Extent1].[CardType] AS [CardType],  [Extent1].[ExpiryMonth] AS [ExpiryMonth],  [Extent1].[ExpiryYear] AS [ExpiryYear] FROM [dbo].[BillingDetails] AS [Extent1] WHERE [Extent1].[Discriminator] IN ('BankAccount','CreditCard') Or the non-polymorphic query for the BankAccount subclass generates this SQL statement: SELECT  [Extent1].[BillingDetailId] AS [BillingDetailId],  [Extent1].[Owner] AS [Owner],  [Extent1].[Number] AS [Number],  [Extent1].[BankName] AS [BankName],  [Extent1].[Swift] AS [Swift] FROM [dbo].[BillingDetails] AS [Extent1] WHERE [Extent1].[Discriminator] = 'BankAccount' Note how Code First adds a restriction on the discriminator column and also how it only selects those columns that belong to the BankAccount entity. Change Discriminator Column Data Type and Values With Fluent API Sometimes, especially in legacy schemas, you need to override the conventions for the discriminator column so that Code First can work with the schema. The following fluent API code will change the discriminator column name to "BillingDetailType" and the values to "BA" and "CC" for BankAccount and CreditCard respectively: protected override void OnModelCreating(System.Data.Entity.ModelConfiguration.ModelBuilder modelBuilder) {     modelBuilder.Entity<BillingDetail>()                 .Map<BankAccount>(m => m.Requires("BillingDetailType").HasValue("BA"))                 .Map<CreditCard>(m => m.Requires("BillingDetailType").HasValue("CC")); } Changing the data type of the discriminator column is also interesting. In the above code we passed strings to the HasValue method, but this method is defined to accept any object: public void HasValue(object value); Therefore, if, for example, we pass a value of type int to it, then Code First will not only use our desired values (i.e. 1 and 2) in the discriminator column but will also change the column type to (INT, NOT NULL): modelBuilder.Entity<BillingDetail>()             .Map<BankAccount>(m => m.Requires("BillingDetailType").HasValue(1))             .Map<CreditCard>(m => m.Requires("BillingDetailType").HasValue(2)); Summary In this post we learned about Table per Hierarchy as the default mapping strategy in Code First. The disadvantages of the TPH strategy may be too serious for your design—after all, denormalized schemas can become a major burden in the long run. Your DBA may not like it at all. In the next post, we will learn about the Table per Type (TPT) strategy, which doesn’t expose you to this problem. References: ADO.NET team blog; Java Persistence with Hibernate book.

    Read the article

  • Adding Client Validation To DataAnnotations DataType Attribute

    - by srkirkland
The System.ComponentModel.DataAnnotations namespace contains a validation attribute called DataTypeAttribute, which takes an enum specifying what data type the given property conforms to.  Here are a few quick examples: public class DataTypeEntity { [DataType(DataType.Date)] public DateTime DateTime { get; set; }   [DataType(DataType.EmailAddress)] public string EmailAddress { get; set; } } This attribute comes in handy when using ASP.NET MVC, because the type you specify will determine what “template” MVC uses.  Thus, for the DateTime property, if you create a partial in Views/[loc]/EditorTemplates/Date.ascx (or cshtml for razor), that view will be used to render the property when using any of the Html.EditorFor() methods. One thing that the DataType() validation attribute does not do is any actual validation.  To see this, let’s take a look at the EmailAddress property above.  It turns out that regardless of the value you provide, the entity will be considered valid: //valid new DataTypeEntity {EmailAddress = "Foo"}; Hmmm.  Since DataType() doesn’t validate, that leaves us with two options: (1) create our own attributes for each datatype to validate, like [Date], or (2) add validation into the DataType attribute directly.  In this post, I will show you how to hook up client-side validation to the existing DataType() attribute for a desired type.  From there, adding server-side validation would be a breeze and even writing a custom validation attribute would be simple (more on that in future posts). Validation All The Way Down Our goal will be to leave our DataTypeEntity class (from above) untouched, requiring no reference to System.Web.Mvc.  Then we will make an ASP.NET MVC project that allows us to create a new DataTypeEntity and hook up automatic client-side date validation using the suggested “out-of-the-box” jquery.validate bits that are included with ASP.NET MVC 3.  For simplicity I’m going to focus only on the DateTime field, but the concept is generally the same for any other DataType. Building a DataTypeAttribute Adapter To start we will need to build a new validation adapter that we can register using ASP.NET MVC’s DataAnnotationsModelValidatorProvider.RegisterAdapter() method.  
This method takes two Type parameters: the first is the attribute we are looking to validate with, and the second is an adapter that should subclass System.Web.Mvc.ModelValidator. Since we are extending DataAnnotations we can use the subclass of ModelValidator called DataAnnotationsModelValidator<>.  This takes a generic argument of type DataAnnotations.ValidationAttribute, which, luckily for us, means the DataTypeAttribute will fit in nicely. So starting from there and implementing the required constructor, we get: public class DataTypeAttributeAdapter : DataAnnotationsModelValidator<DataTypeAttribute> { public DataTypeAttributeAdapter(ModelMetadata metadata, ControllerContext context, DataTypeAttribute attribute) : base(metadata, context, attribute) { } } Now you have a full-fledged validation adapter, although it doesn’t do anything yet.  There are two methods you can override to add functionality: IEnumerable<ModelValidationResult> Validate(object container) and IEnumerable<ModelClientValidationRule> GetClientValidationRules().  Adding logic to the server-side Validate() method is pretty straightforward, and for this post I’m going to focus on GetClientValidationRules(). Adding a Client Validation Rule Adding client validation is now incredibly easy because jquery.validate is very powerful and already comes with a ton of validators (including date and regular expressions for our email example).  Teamed with the new unobtrusive validation javascript support, we can make short work of our ModelClientValidationDateRule: public class ModelClientValidationDateRule : ModelClientValidationRule { public ModelClientValidationDateRule(string errorMessage) { ErrorMessage = errorMessage; ValidationType = "date"; } } If your validation has additional parameters you can use the ValidationParameters IDictionary<string,object> to include them.  There is a little bit of conventions magic going on here, but the distilled version is that we are defining a “date” validation type, which will be included as html5 data-* attributes (specifically data-val-date).  Then jquery.validate.unobtrusive takes this attribute and basically passes it along to jquery.validate, which knows how to handle date validation. 
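As an aside, the same pattern should cover the EmailAddress case from the opening example: jquery.validate ships with a built-in "email" method, and the stock jquery.validate.unobtrusive script included with MVC 3 registers a matching boolean adapter, so a rule like the following hypothetical sketch (not part of the original adapter) would need no extra JavaScript:
// Hypothetical companion rule: "email" maps to jquery.validate's built-in
// email validator via the emitted data-val-email attribute.
public class ModelClientValidationEmailRule : ModelClientValidationRule
{
    public ModelClientValidationEmailRule(string errorMessage)
    {
        ErrorMessage = errorMessage;
        ValidationType = "email";
    }
}
The finished adapter below could then return this rule when Attribute.DataType == DataType.EmailAddress, alongside the date case.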
Finishing our DataTypeAttribute Adapter Now that we have a model client validation rule, we can return it in the GetClientValidationRules() method of our DataTypeAttributeAdapter created above.  Basically I want to say: if DataType.Date was provided, then return the date rule with a given error message (using ValidationAttribute.FormatErrorMessage()).  The entire adapter is below: public class DataTypeAttributeAdapter : DataAnnotationsModelValidator<DataTypeAttribute> { public DataTypeAttributeAdapter(ModelMetadata metadata, ControllerContext context, DataTypeAttribute attribute) : base(metadata, context, attribute) { }   public override System.Collections.Generic.IEnumerable<ModelClientValidationRule> GetClientValidationRules() { if (Attribute.DataType == DataType.Date) { return new[] { new ModelClientValidationDateRule(Attribute.FormatErrorMessage(Metadata.GetDisplayName())) }; }   return base.GetClientValidationRules(); } } Putting it all together Now that we have an adapter for the DataTypeAttribute, we just need to tell ASP.NET MVC to use it.  The easiest way to do this is to use the built-in DataAnnotationsModelValidatorProvider by calling RegisterAdapter() in your global.asax startup method: DataAnnotationsModelValidatorProvider.RegisterAdapter(typeof(DataTypeAttribute), typeof(DataTypeAttributeAdapter)); Show and Tell Let’s see this in action using a clean ASP.NET MVC 3 project.  First make sure to reference the jquery, jquery.validate and jquery.validate.unobtrusive scripts that you will need for client validation. Next, let’s make a model class (note we are using the same built-in DataType() attribute that comes with System.ComponentModel.DataAnnotations). 
public class DataTypeEntity { [DataType(DataType.Date, ErrorMessage = "Please enter a valid date (ex: 2/14/2011)")] public DateTime DateTime { get; set; } } Then we make a create page with a strongly-typed DataTypeEntity model; the form section is shown below (notice we are just using EditorForModel): @using (Html.BeginForm()) { @Html.ValidationSummary(true) <fieldset> <legend>Fields</legend>   @Html.EditorForModel()   <p> <input type="submit" value="Create" /> </p> </fieldset> } The final step is to register the adapter in our global.asax file: DataAnnotationsModelValidatorProvider.RegisterAdapter(typeof(DataTypeAttribute), typeof(DataTypeAttributeAdapter)); Now we are ready to run the page. Looking at the datetime field’s html, we see that our adapter added some data-* validation attributes: <input type="text" value="1/1/0001" name="DateTime" id="DateTime" data-val-required="The DateTime field is required." data-val-date="Please enter a valid date (ex: 2/14/2011)" data-val="true" class="text-box single-line valid"> Here data-val-required was added automatically because DateTime is non-nullable, and data-val-date was added by our validation adapter.  Now if we try to add an invalid date, our custom error message is displayed via client-side validation as soon as we tab out of the box.  If we hadn’t included a custom validation message, the default DataTypeAttribute message “The field {0} is invalid” would have been shown (of course we can change the default as well).  Note we did not specify server-side validation, but in this case we don’t have to, because an invalid date will cause a server-side error during model binding. 
Conclusion I really like how easy it is to register new data annotations model validators, whether they are your own or, as in this post, supplements to existing validation attributes. I’m still debating whether adding the validation directly in the DataType attribute is the correct place to put it, versus creating a dedicated “Date” validation attribute, but it’s nice to know either option is available and, as we’ve seen, simple to implement. I’m also working through the nascent stages of an open source project that will create validation attribute extensions to the existing data annotations providers using techniques similar to those seen above (examples: Email, Url, EqualTo, Min, Max, CreditCard, etc). Keep an eye on this blog and subscribe to my twitter feed (@srkirkland) if you are interested in announcements.

    Read the article

  • Using an alternate search platform in Commerce Server 2009

    - by Lewis Benge
Although Microsoft Commerce Server 2009's architecture is built upon Microsoft SQL Server, and has the full power of the SQL Full Text Indexing search platform, there are times when you may require a richer or alternate search platform. One of these scenarios is when you want to implement a faceted (refinement) search in your site, which provides dynamic refinements based on the search results dataset. Faceted search is becoming popular in most online retail environments as a way of providing an enhanced user experience when browsing a larger catalogue. This is powerful for two reasons. Firstly, with a traditional search it is down to the user to think of a search term suitable for the product they are trying to find; this typically will not return similar products or help in any way to refine a larger dataset. Faceted searches, on the other hand, provide a comprehensive list of product properties, grouped together by similarity, to help the user narrow down the results returned; as the user progressively restricts the search criteria by selecting additional criteria to search again, these facets need to refresh continually. The whole experience allows users to explore alternate brands and price ranges, or find products they hadn't initially thought of or were looking for, in a bid to enhance cross-sell in the retail environment. The second advantage of this type of search, from a business perspective, is the ability to harvest the search results to start to profile your user. Even though anonymous users may routinely visit your site, and will not necessarily register or complete a transaction to build up marketing profiling data, you can still achieve the same result by recording the search facets used within the search sequence. Below is a faceted search scenario generated from eBay using the search term "server". By creating a search profile of clicking through Computer & Networking -> Servers -> Dell -> New and recording this information against my user profile, you can start to predict with a lot more certainty what types of products I am interested in. This will allow you to apply shopping-cart analysis against your search data and provide great cross-sell or advertising opportunities, or personalise the user experience based on your prediction of what the user may be interested in. This type of search is extremely beneficial in e-Commerce environments, but achieving it out of the box with Commerce Server and SQL Full Text indexing can be challenging. In many deployments it is often easier to use an alternate search platform such as Microsoft's FAST, Apache Solr, or Endeca; however, you still want these products to integrate natively into Commerce Server to ensure that up-to-date inventory information is presented, profile information is generated, and you provide a consistent API. To do so we make the most of the Commerce Server extensibility points called operation sequence components. In this example I will be talking about Apache Solr hosted on Apache Tomcat; specifically, I have used the SolrNet C# library to interface to the Java platform. I am not going to talk about Solr indexing configuration here, but in a production environment this would typically happen by using PowerShell to call the Commerce Server management web service to export your catalog as XML, apply an XSLT transform to the file to make it conform to Solr's schema, and use a simple HTTP POST to send it to the search engine for indexing. 
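For illustration, a minimal sketch of that last indexing step might look like the following. It assumes a default local Solr instance at http://localhost:8983/solr and a catalog export already transformed to Solr's <add><doc>...</doc></add> format; the URL, file name and explicit commit are assumptions of the example, not part of Commerce Server itself.
using System;
using System.IO;
using System.Net;
using System.Text;

class SolrCatalogIndexer
{
    // POSTs an XML payload to the Solr update handler.
    static void PostXml(string url, string xml)
    {
        byte[] data = Encoding.UTF8.GetBytes(xml);
        var request = (HttpWebRequest)WebRequest.Create(url);
        request.Method = "POST";
        request.ContentType = "text/xml; charset=utf-8";
        request.ContentLength = data.Length;
        using (Stream body = request.GetRequestStream())
            body.Write(data, 0, data.Length);
        using (var response = (HttpWebResponse)request.GetResponse())
            Console.WriteLine("Solr responded: {0}", response.StatusCode);
    }

    static void Main()
    {
        // transformed-catalog.xml is the XSLT output of the catalog export.
        PostXml("http://localhost:8983/solr/update", File.ReadAllText("transformed-catalog.xml"));
        PostXml("http://localhost:8983/solr/update", "<commit/>"); // make the new documents searchable
    }
}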
Essentially a sequence component is a step in a serial workflow used to call a data repository (which in most cases is the Commerce Server pipelines or databases) and map to and from a Commerce Entity object whilst enforcing any business rules. So the first step in the process is to add a new class library to your existing Commerce Server site. You will need to use a new library, as sequence components need to be strongly named to be deployed. Once you are inside your new project, add a new class file and add a reference to the Microsoft.Commerce.Providers, Microsoft.Commerce.Contracts and Microsoft.Commerce.Broker assemblies. Now make your new class derive from the base class Microsoft.Commerce.Providers.Components.OperationSequenceComponent and override the ExecuteQuery method. Your screen will then look something similar to this: As all we are doing in this component is conducting a search, we are only interested in the ExecuteQuery method. This method accepts three arguments: queryOperation, operationCache, and response. The queryOperation is the object in which we receive our search parameters, the cache allows access to the Commerce Server cache so we can store regularly accessed information, and the response object is the object on which we will return the result of our search. Inside this method is simply where we are going to inject the logic for our third party search platform. As I am not going to explain the inner workings of actually making a Solr call, I'll simply provide the sample code here. I would highly recommend, however, looking at the SolrNet wiki, as they have some great explanations of how the API works. What you will find, though, is that there are some further extensions required when attempting to integrate a custom search provider. Firstly, out of the box, the CommerceQueryOperation you receive into the method when conducting a search against a catalog is specifically geared towards a SQL Full Text search, with properties such as a Where clause. To make the operation you receive more relevant, you will need to create another class, this time derived from Microsoft.Commerce.Contracts.Messages.CommerceSearchCriteria, and within this you need to detail the properties you require to submit as parameters to the Solr search API. My example looks like this: [DataContract(Namespace = "http://schemas.microsoft.com/microsoft-multi-channel-commerce-foundation/types/2008/03")] public class CommerceCatalogSolrSearch : CommerceSearchCriteria { private Dictionary<string, string> _facetQueries;   public CommerceCatalogSolrSearch() { _facetQueries = new Dictionary<String, String>();   }     public Dictionary<String, String> FacetQueries { get { return _facetQueries; } set { _facetQueries = value; } }   public String SearchPhrase { get; set; } public int PageIndex { get; set; } public int PageSize { get; set; } public IEnumerable<String> Facets { get; set; }   public string Sort { get; set; }   public new int FirstItemIndex { get { return (PageIndex-1)*PageSize; } }   public int LastItemIndex { get { return FirstItemIndex + PageSize; } } }  To allow you to construct a CommerceQueryOperation call within the API you will also need to construct another class, this time derived from Microsoft.Commerce.Common.MessageBuilders.CommerceSearchCriteriaBuilder, which is simply used to construct an instance of the search criteria class you have just created and expose the properties you want set. 
My message builder looks like this: public class CommerceCatalogSolrSearchBuilder : CommerceSearchCriteriaBuilder { private CommerceCatalogSolrSearch _solrSearch;   public CommerceCatalogSolrSearchBuilder() { _solrSearch = new CommerceCatalogSolrSearch(); }   public String SearchPhrase { get { return _solrSearch.SearchPhrase; } set { _solrSearch.SearchPhrase = value; } }   public int PageIndex { get { return _solrSearch.PageIndex; } set { _solrSearch.PageIndex = value; } }   public int PageSize { get { return _solrSearch.PageSize; } set { _solrSearch.PageSize = value; } }   public Dictionary<String,String> FacetQueries { get { return _solrSearch.FacetQueries; } set { _solrSearch.FacetQueries = value; } }   public String[] Facets { get { return _solrSearch.Facets.ToArray(); } set { _solrSearch.Facets = value; } } public override CommerceSearchCriteria ToSearchCriteria() { return _solrSearch; } }  Once you have these two classes in place you can safely cast the CommerceOperation you receive as an argument of the overridden ExecuteQuery method in the sequence component to the CommerceCatalogSolrSearch criteria you have just created, e.g. public CommerceCatalogSolrSearch TryGetSearchCriteria(CommerceOperation operation) { var searchCriteria = operation as CommerceQueryOperation; if (searchCriteria == null) throw new Exception("No search criteria present");   var local = (CommerceCatalogSolrSearch) searchCriteria.SearchCriteria; if (local == null) throw new Exception("Unexpected Search Criteria in Operation");   return local; }  Now that you have all of your search parameters present, you can go off and call the external search platform API. You will of course get proprietary objects returned, so the next step in the process is to convert the results being returned back into CommerceEntities. You do this via another extensibility point within the Commerce Server API called translators. A translator is another separate class, this time implementing the interface Microsoft.Commerce.Providers.Translators.IToCommerceEntityTranslator. As you can imagine, this interface is specific to the conversion of an object TO a CommerceEntity; you will need to implement a separate interface if you also need to go in the opposite direction. If you implement the required method of the interface you will get a single Translate method which takes a source object, a destination CommerceEntity, and a collection of properties as arguments. For simplicity's sake in this example I have hard-coded the mappings; however, best practice would dictate you map the objects using your MetaDataDefinitions.xml file. 
Once complete, your translator would look something like the following: public class SolrEntityTranslator : IToCommerceEntityTranslator { #region IToCommerceEntityTranslator Members   public void Translate(object source, CommerceEntity destinationCommerceEntity, CommercePropertyCollection propertiesToReturn) { if (source.GetType().Equals(typeof (SearchProduct))) { var searchResult = (SearchProduct) source;   destinationCommerceEntity.Id = searchResult.ProductId; destinationCommerceEntity.SetPropertyValue("DisplayName", searchResult.Title); destinationCommerceEntity.ModelName = "Product";   } }   #endregion }  Once you have a translator in place you can then safely map the results of your search platform into Commerce Entities and attach them onto the CommerceResponse object in a fashion similar to this: foreach (SearchProduct result in matchingProducts) { var destinationEntity = new CommerceEntity(_returnModelName);   Translator.ToCommerceEntity(result, destinationEntity, _queryOperation.Model.Properties); response.CommerceEntities.Add(destinationEntity); }  In Solr I actually have two objects being returned – a product, and a collection of facets – so I have an additional translator for facets (which maps to a custom facet CommerceEntity), and my facet response from Solr is passed into the translator helper class separately. When all of this is pieced together you have successfully completed the extensibility point coding. You will have created a new OperationSequenceComponent, a custom SearchCriteria object and message builder class, and translators to convert the objects into Commerce Entities. Now you simply need to configure them, and you can start calling them in your code. Make sure you sign your assembly, compile it and identify its signature. Next you need to put a reference to your new assembly into the Channel.Config configuration file, replacing that of the existing SQL Full Text component: You will also need to add your translators to the Translators node of your Channel.Config too: Lastly, add any custom CommerceEntities you have developed to your MetaDataDefinitions.xml file. Your configuration is now complete, and you should now be able to happily make a call to the Commerce Foundation API, which will act as a proxy to your third party search platform and return CommerceEntities of your search results. If you require data to be enriched, or logged, or any other logic applied, then simply add further sequence components into the OperationSequence node of your Channel.Config file (obviously keeping the search component first). Now to call your code you simply request it as per any other CommerceQuery operation, but taking into account that you may receive multiple types of CommerceEntity in return: public KeyValuePair<FacetCollection ,List<Product>> DoFacetedProductQuerySearch(string searchPhrase, string orderKey, string sortOrder, int recordIndex, int recordsPerPage, Dictionary<string, string> facetQueries, out int totalItemCount) { var products = new List<Product>(); var query = new CommerceQuery<CatalogEntity, CommerceCatalogSolrSearchBuilder>();   query.SearchCriteria.PageIndex = recordIndex; query.SearchCriteria.PageSize = recordsPerPage; query.SearchCriteria.SearchPhrase = searchPhrase; query.SearchCriteria.FacetQueries = facetQueries;     totalItemCount = 0; CommerceResponse response = SiteContext.ProcessRequest(query.ToRequest()); var queryResponse = response.OperationResponses[0] as CommerceQueryOperationResponse;   // No results: return the empty list. 
if (queryResponse != null && queryResponse.CommerceEntities.Count == 0) return new KeyValuePair<FacetCollection, List<Product>>();   totalItemCount = (int)queryResponse.TotalItemCount;   // Prepare a multi-operation to retrieve the product variants var multiOperation = new CommerceMultiOperation();     //Add products to results foreach (Product product in queryResponse.CommerceEntities.Where(x => x.ModelName == "Product")) { var productQuery = new CommerceQuery<Product>(Product.ModelNameDefinition); productQuery.SearchCriteria.Model.Id = product.Id; productQuery.SearchCriteria.Model.CatalogId = product.CatalogId;   var variantQuery = new CommerceQueryRelatedItem<Variant>(Product.RelationshipName.Variants);   productQuery.RelatedOperations.Add(variantQuery);   multiOperation.Add(productQuery); }   CommerceResponse variantsResponse = SiteContext.ProcessRequest(multiOperation.ToRequest()); foreach (CommerceQueryOperationResponse queryOpResponse in variantsResponse.OperationResponses) { if (queryOpResponse.CommerceEntities.Count() > 0) products.Add(queryOpResponse.CommerceEntities[0]); }   //Get facet collection FacetCollection facetCollection = queryResponse.CommerceEntities.Where(x => x.ModelName == "FacetCollection").FirstOrDefault();     return new KeyValuePair<FacetCollection, List<Product>>(facetCollection, products); }    And that is it – simply a few classes and some configuration will allow you to extend the Commerce Server query operations to call a third party search platform, whilst still maintaining a unified API in the remainder of your code. This logic stands for any extensibility within Commerce Server which requires execution in a serial fashion, such as calls to LOB systems or web services to validate or enrich data. Feel free to use this example in other applications, and if you have any questions please feel free to e-mail and I'll help out where I can!

    Read the article

  • CodePlex Daily Summary for Monday, December 06, 2010

CodePlex Daily Summary for Monday, December 06, 2010
Popular Releases
Aura: Aura Preview 1: Rewritten from scratch. This release supports getting the color only from the icon of the foreground window.
myCollections: Version 1.2: New in version 1.2: Big performance improvement. New design (added Outlook-style view, new detail view, new Group By...). Added sort by media. Added Manage Movie Studio. Zoom preference is now saved. Media names are now editable. Added Portuguese version. You can now hide the details panel. Added support for FLAC tags. You can now import books from BibTex XML files. Bug fixing.
mytrip.mvc (CMS & e-Commerce): mytrip.mvc 1.0.49.0 beta: mytrip.mvc 1.0.49.0 beta web for install hosting. System requirements: .NET 4.0, MSSQL 2008 or MySql (automatic creation of tables in the database); if .\SQLEXPRESS, automatic creation of the database (App_Data folder). mytrip.mvc 1.0.49.0 beta src system requirements: Visual Studio 2010 or Web Developer 2010, MSSQL 2008 or MySql (automatic creation of tables in the database); if .\SQLEXPRESS, automatic creation of the database (App_Data folder). Connector/Net 6.3.4, MVC3 RC. WARNING: to run and debug mytrip.mvc 1.0.49.0 beta src download and ...
Menu and Context Menu for Silverlight 4.0: Silverlight Menu and Context Menu v2.3 Beta: - Added keyboard navigation support with access keys - Shortcuts like Ctrl-Alt-A are now supported (where the browser permits it) - The PopupMenuSeparator is now completely based on the PopupMenuItem class - Moved item manipulation code to a partial class in PopupMenuItemsControl.cs - Moved menu management and keyboard navigation code to the new PopupMenuManager class - Simplified the layout by removing the RootGrid element (all content is now placed in OverlayCanvas and is accessed by the new ...
SubtitleTools: SubtitleTools 1.0: First public release
MiniTwitter: 1.62: MiniTwitter 1.62
Phalanger - The PHP Language Compiler for the .NET Framework: 2.0 (December 2010): The release is targeted for stable daily use. With improved performance and enhanced compatibility with several of the latest PHP open source applications, it makes this release a perfect replacement for your old PHP runtime. Changes made within this release include the following and much more: Performance improvements based on real-world application experience. We determined the biggest bottlenecks and found and removed overheads causing performance problems in many PHP applications. Reimplemented nat...
Chronos WPF: Chronos v2.0 Beta 3: Release notes: Updated introduction document. Updated Visual Studio 2010 Extension (vsix) package. Added horizontal scrolling to the main window TaskBar. Added new styles for ListView, ListViewItem, GridViewColumnHeader, ... Added a new WindowViewModel class (allowing to fetch data). Added a new Navigate method (with several overloads) to the NavigationViewModel class (protected). Reimplemented Task usage for the WorkspaceViewModel.OnDelete method. Removed the reflection effect...
MDownloader: MDownloader-0.15.26.7024: Fixed updater; fixed Megaupload.
DJ - jQuery WebControls for ASP.NET: DJ 1.2: What is new? 
Update to support jQuery 1.4.2. Update to support jQuery UI 1.8.6. Update to Visual Studio 2010. New WebControls with samples added: Autocomplete WebControl, Button WebControl, ToggleButton WebControl. The example web site is included in the source code project.
LateBindingApi.Excel: LateBindingApi.Excel Release 0.7g: Differences from the previous version: - Additional Interior properties - Group/Ungroup methods for Range - Bugfix: COM reference handling for the Application object in some classes. Release+Samples V0.7g: - Contains the runtime DLL and sample projects. Sample projects: COMAddinExample - demonstrates a version-independent COM add-in; Example01 - background colors and borders for cells; Example02 - font attributes and alignment for cells; Example03 - number formats; Example04 - Shapes, WordArts, P...
ESRI ArcGIS Silverlight Toolkit: November 2010 - v2.1: ESRI ArcGIS Silverlight Toolkit v2.1. Added Windows Phone 7 build. New controls added: InfoWindow, ChildPage (Windows Phone 7 only). See full details of what's new in 2.1 here: http://help.arcgis.com/en/webapi/silverlight/help/#/What_s_new_in_2_1/016600000025000000/ Note: Requires Visual Studio 2010, .NET 4.0 and Silverlight 4.0.
ASP .NET MVC CMS (Content Management System): Atomic CMS 2.1.1: Atomic CMS 2.1.1 release notes; Atomic CMS installation guide.
Free Silverlight & WPF Chart Control - Visifire: Visifire SL and WPF Charts v3.6.5 beta Released: Hi, today we are releasing Visifire 3.6.5 beta with the following new feature: a new property, AutoFitToPlotArea, has been introduced in DataSeries. AutoFitToPlotArea will bring bubbles inside the PlotArea in order to avoid clipping of bubbles in bubble charts. This release also includes a few bug fixes: AxisXLabel labels were getting clipped if an angle was set for AxisLabels and ScrollingEnabled was not set in the Chart; if the LabelStyle property was set to 'Inside', the size of the pie was not proper. Yo...
EnhSim: EnhSim 2.1.1: 2.1.1. This release adds in the changes for 4.03a. To use this release, you must have the Microsoft Visual C++ 2010 Redistributable Package installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=A7B7A05E-6DE6-4D3A-A423-37BF0912DB84 To use the GUI you must have the .NET 4.0 Framework installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=9cfb2d51-5ff4-4491-b0e5-b386f32c0992 - Switched Searing Flames bac...
AI: Initial 0.0.1: It's simply just one code file; it simulates an AI and a machine in a simulated world. The AI has a little understanding of its body machine and parts, and is able to use its feet to do actions, just starting and stopping walking. The world is all white, with nothing but the machine on a white planet. Colors, odors and position information make no sense. 
I'm a previous C# programmer and I'm learning F# during this project; although I'm still not a good F# programmer, in this project I'm learning to prog...
I’m previous C# programmer and I’m learning F# during this project, although I’m still not a good F# programmer, in this project I learning to prog...NKinect: NKinect Preview: Build features: Accelerometer reading Motor serial number property Realtime image update Realtime depth calculation Export to PLY (On demand) Control motor LED Control Kinect tiltMicrosoft - Domain Oriented N-Layered .NET 4.0 App Sample (Microsoft Spain): V1.0 - N-Layer DDD Sample App .NET 4.0: Required Software (Microsoft Base Software needed for Development environment) Visual Studio 2010 RTM & .NET 4.0 RTM (Final Versions) Expression Blend 4 SQL Server 2008 R2 Express/Standard/Enterprise Unity Application Block 2.0 - Published May 5th 2010 http://www.microsoft.com/downloads/en/details.aspx?FamilyID=2D24F179-E0A6-49D7-89C4-5B67D939F91B&displaylang=en http://unity.codeplex.com/releases/view/31277 PEX & MOLES 0.94.51023.0, 29/Oct/2010 - Visual Studio 2010 Power Tools http://re...Sense/Net Enterprise Portal & ECMS: SenseNet 6.0.1 Community Edition: Sense/Net 6.0.1 Community Edition This half year we have been working quite fiercely to bring you the long-awaited release of Sense/Net 6.0. Download this Community Edition to see what we have been up to. These months we have worked on getting the WebCMS capabilities of Sense/Net 6.0 up to par. New features include: New, powerful page and portlet editing experience. HTML and CSS cleanup, new, powerful site skinning system. Upgraded, lightning-fast indexing and query via Lucene. Limita...Minecraft GPS: Minecraft GPS 1.1.1: New Features Compass! New style. Set opacity on main window to allow overlay of Minecraft. Open World in any folder. Fixes Fixed style so listbox won't grow the window size. Fixed open file dialog issue on non-vista kernel machines.New ProjectsAboutTime: The AboutTime WPF controls project is aimed at developing custom controls that relate to time.aReader: aReader is a free software, it's used as an XPS document reader. It's developed in C# Language and use Windows Presentation Foundation technology with .NET Framework 3.5. Mixed with Ribbon Controls Library for GUI (Graphic User Interface) make this application user friendly.Battle Net Info: Battle Net Info provides information of the StarCraft2 player from his profile pageBencoder: Library for encode/decode bencode file or string. It's developing on C#.BiBongNet: BiBongNet Project.Binhnt: BinhntC++ Bloom Filter Library: C++ Bloom Filter LibraryChild Sponsorship Manager: Sponsorship Manager is developed for a NPO that provides child sponsorship in developing countries. It is possible to track sponsor child relations, gifts and payments. It is developed in visual basic . netDocBlogger: This is a tool for automatically converting existing XML comments from your project into MSDN style HTML for posting to the codeplex site. This will use the MetaBlog API to post code, but can be used in a copy paste fashion right away.Dynamic Rdlc WebControl: "Dynamic Rdlc WebControl" is an ASP .NET WebControl to generate dynamic reports in RDLC format without generate physical files. Suports groups and totalizers. 
It is developed with Microsoft Visual Studio 2010, ASP .NET and C# 4.Fake Call for Windows Phone 7: Coding4Fun Windows Phone 7 fake call applicationFlow launcher: Flow is the worlds fastest application launcher, using an onscreen keyboard and mnemonics to achieve lightning fast shortcut launching.GaDotNet: GaDotNet is an open source library designed to make it easy to log page views, events and transactions, through c# code, without using JavaScript or even needing to have a browser.HackerNews for WP7: HackerNews is a WP7 client for the HackerNews website.How much is this meeting costing us?: Coding4Fun Windows Phone 7 "How much is this meeting costing us?" applicationKLAB: KLABMap Navigator: Map Navigator - it's a silverlight application intended to work with maps.MNRT: MNRT implements (demonstrates) several techniques to realize fast global illumination for dynamic scenes on Graphics Processing Units (GPUs) using CUDA. A GPU-based kd-tree was implemented to accelerate both ray tracing and photon mapping.MVC Helpers: MVC Helper makes developing views easier. It contains extended helpers classes to render view content. It is developed in C#.Net Extended Helpers for Grid has been created so far. MVCPets: This is a projected dedicated to providing a free platform to be used by animal rescue organizations. The hope is that this project can fill the void for those rescue groups that can't afford to pay a professional web designer/developer.MyGraphicProgram: ???????????????NAI: This project is a step by step illustration of some Numerical Analysis methods.Nemono: Nemono is an application that runs in the background, and is activated by pressing a key combination like ALT+W. When activated, Nemono uses context awareness to present relevant shortcuts to the user, and mnemonics to execute shortcuts.opojo: opojoOxyPlot: OxyPlot is a .NET library for making XY line plots. The focus is on simplicity and performance. The library contains custom controls for WPF and Windows Forms. The plots can also be exported to SVG, PDF and PNG.PowerChumby: PowerChumby is a Perl CGI script and a PowerShell module that gives you a PowerShell way of controlling your Chumby.RHoK Berlin Visio Projekt: Random Hacks of Kindness - Berlin Projekt für die Senatsverwaltung für Gesundheit, Umwelt und Verbraucherschutz Query, integrate and display external data in Microsoft Visio. It's developed in C#.sc2md: starcraft.md news portalSlide Show: Coding4Fun Windows Phone 7 Slide Show applicationsmartcon: smart control centerTFS Fav source: Favourites for source location in VSTwitter Followers Monitor: Free and Open Source tool that will let you monitor any Twitter account for its new & lost followers even if it's not yours and you don't have its credentials. It allows you to add several Twitter accounts and be updated right from your desktop.

    Read the article

  • Ball to Ball Collision - Detection and Handling

    - by Simucal
With the help of the Stack Overflow community I've written a pretty basic - but fun - physics simulator. You click and drag the mouse to launch a ball. It will bounce around and eventually stop on the "floor". The next big feature I want to add is ball-to-ball collision. The ball's movement is broken up into x and y speed vectors. I have gravity (a small reduction of the y vector each step) and friction (a small reduction of both vectors on each collision with a wall). The balls honestly move around in a surprisingly realistic way. I guess my question has two parts:

1. What is the best method to detect ball-to-ball collision? Do I just have an O(n^2) loop that iterates over each ball and checks every other ball to see whether their radii overlap?
2. What equations do I use to handle the ball-to-ball collisions (Physics 101)? How does a collision affect the two balls' x/y speed vectors? What is the resulting direction the two balls head off in, and how do I apply this to each ball?

Handling the collision detection of the "walls" and the resulting vector changes was easy, but I see more complications with ball-ball collisions. With walls I simply had to take the negative of the appropriate x or y vector and off it would go in the correct direction. With balls I don't think it works that way.

Some quick clarifications: for simplicity I'm OK with a perfectly elastic collision for now; also, all my balls have the same mass right now, but I might change that in the future. In case anyone is interested in playing with the simulator I have made so far, I've uploaded the source here (EDIT: check the updated source below).

Edit: resources I have found useful:
- 2D ball physics with vectors: 2-Dimensional Collisions Without Trigonometry.pdf
- 2D ball collision detection example: Adding Collision Detection

Success! I have the ball collision detection and response working great! Relevant code - collision detection:

for (int i = 0; i < ballCount; i++) {
    for (int j = i + 1; j < ballCount; j++) {
        if (balls[i].colliding(balls[j])) {
            balls[i].resolveCollision(balls[j]);
        }
    }
}

This will check for collisions between every ball but skip redundant checks (if you have to check whether ball 1 collides with ball 2, then you don't need to also check whether ball 2 collides with ball 1; it also skips checking a ball for collision with itself).
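A quick back-of-the-envelope note on the cost of this naive pairwise approach (simple arithmetic, not part of the original post): starting the inner loop at i + 1 means each unordered pair is tested exactly once, so one simulation step performs

\[ \binom{n}{2} = \frac{n(n-1)}{2} \]

pair tests - for example, 4,950 tests for 100 balls - which is perfectly affordable at this scale; only for much larger ball counts would a broad-phase structure such as a uniform grid be worth the extra complexity.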
Then, in my ball class, I have my colliding() and resolveCollision() methods:

public boolean colliding(Ball ball) {
    float xd = position.getX() - ball.position.getX();
    float yd = position.getY() - ball.position.getY();

    float sumRadius = getRadius() + ball.getRadius();
    float sqrRadius = sumRadius * sumRadius;

    // compare squared distances to avoid taking a square root
    float distSqr = (xd * xd) + (yd * yd);

    return distSqr <= sqrRadius;
}

public void resolveCollision(Ball ball) {
    // get the minimum translation distance (mtd)
    Vector2d delta = position.subtract(ball.position);
    float d = delta.getLength();

    // minimum translation distance to push balls apart after intersecting
    Vector2d mtd = delta.multiply(((getRadius() + ball.getRadius()) - d) / d);

    // resolve intersection --
    // inverse mass quantities
    float im1 = 1 / getMass();
    float im2 = 1 / ball.getMass();

    // push-pull them apart based off their mass
    position = position.add(mtd.multiply(im1 / (im1 + im2)));
    ball.position = ball.position.subtract(mtd.multiply(im2 / (im1 + im2)));

    // impact speed along the collision normal
    Vector2d v = this.velocity.subtract(ball.velocity);
    float vn = v.dot(mtd.normalize());

    // spheres intersecting but already moving away from each other
    if (vn > 0.0f) return;

    // collision impulse, applied along the unit contact normal
    float i = (-(1.0f + Constants.restitution) * vn) / (im1 + im2);
    Vector2d impulse = mtd.normalize().multiply(i);

    // change in momentum
    this.velocity = this.velocity.add(impulse.multiply(im1));
    ball.velocity = ball.velocity.subtract(impulse.multiply(im2));
}

Source code: complete source for the ball-to-ball collider. Binary: compiled binary, in case you just want to try bouncing some balls around. If anyone has suggestions for how to improve this basic physics simulator, let me know! One thing I have yet to add is angular momentum, so the balls will roll more realistically. Any other suggestions? Leave a comment!
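For reference, the impulse computed in resolveCollision above is the standard one for a (partially) elastic collision, applied along the unit contact normal obtained from mtd.normalize(); e below stands for Constants.restitution. A sketch of the math, matching the variable roles in the code:

\[ j = \frac{-(1+e)\,(\vec{v}_1-\vec{v}_2)\cdot\hat{n}}{\tfrac{1}{m_1}+\tfrac{1}{m_2}}, \qquad \vec{v}_1{}' = \vec{v}_1 + \frac{j}{m_1}\hat{n}, \qquad \vec{v}_2{}' = \vec{v}_2 - \frac{j}{m_2}\hat{n} \]

With e = 1 the collision is perfectly elastic and kinetic energy is conserved; with e = 0 the velocity components along the normal cancel and the balls stay in contact. With equal masses the inverse-mass weights are identical, which is why the current equal-mass simulator behaves symmetrically.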

    Read the article

  • Yet Another ASP.NET MVC CRUD Tutorial

    - by Ricardo Peres
I know that I have not posted much on MVC, mostly because I don’t use it in my daily life, but since I find it so interesting, and since it is gaining such popularity, I will be talking about it much more. This time, it’s about the most basic of scenarios: CRUD. Although there are several ASP.NET MVC tutorials out there that cover ordinary CRUD operations, I couldn’t find any that would explain how we can also have AJAX, optimistic concurrency control and validation using Entity Framework Code First, so I set out to write one! I won’t go into explaining what MVC, Code First, optimistic concurrency control or AJAX are; I assume you are all familiar with these concepts by now. Let’s consider a hypothetical use case: products. For simplicity, we only want to be able to either view a single product or edit it. First, we need our model:

public class Product
{
    public Product()
    {
        this.Details = new HashSet<OrderDetail>();
    }

    [Required]
    [StringLength(50)]
    public String Name { get; set; }

    [Key]
    [ScaffoldColumn(false)]
    [DatabaseGenerated(DatabaseGeneratedOption.Identity)]
    public Int32 ProductId { get; set; }

    [Required]
    [Range(1, 100)]
    public Decimal Price { get; set; }

    public virtual ISet<OrderDetail> Details { get; protected set; }

    [Timestamp]
    [ScaffoldColumn(false)]
    public Byte[] RowVersion { get; set; }
}

Keep in mind that this is a simple scenario. Let’s see what we have:

- A class Product that maps to a product record in the database;
- A product has a required (RequiredAttribute) Name property which can contain up to 50 characters (StringLengthAttribute);
- The product’s Price must be a decimal value between 1 and 100 (RangeAttribute);
- It contains a set of order details, one for each time that it has been ordered, which we will not talk about (Details);
- The record’s primary key (mapped to property ProductId) comes from a SQL Server IDENTITY column generated by the database (KeyAttribute, DatabaseGeneratedAttribute);
- The table uses a SQL Server ROWVERSION (previously known as TIMESTAMP) column for optimistic concurrency control, mapped to property RowVersion (TimestampAttribute).

Then we will need a controller for viewing product details, which will be located in folder ~/Controllers under the name ProductController:

public class ProductController : Controller
{
    [HttpGet]
    public ViewResult Get(Int32 id = 0)
    {
        if (id != 0)
        {
            using (ProductContext ctx = new ProductContext())
            {
                return (this.View("Single", ctx.Products.Find(id) ?? new Product()));
            }
        }
        else
        {
            return (this.View("Single", new Product()));
        }
    }
}

If the requested product does not exist, or one was not requested at all, one with default values will be returned. I am using a view named Single to display the product’s details; more on that later. As you can see, it delegates the loading of products to an Entity Framework context, which is defined as:

public class ProductContext : DbContext
{
    public DbSet<Product> Products { get; set; }
}

Like I said before, I’ll keep it simple for now; only the aggregate root Product is available.
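One thing the post leaves implicit is where the database itself comes from. As a minimal sketch - assuming the EF 4.1 DbContext API and the stock MVC 3 Global.asax, neither of which the original shows - the context can be wired up with a database initializer:

// Global.asax.cs - hypothetical wiring, not part of the original post.
protected void Application_Start()
{
    // Recreate the database whenever the Code First model changes (EF 4.1);
    // by convention the connection string is resolved from the context's
    // full type name.
    System.Data.Entity.Database.SetInitializer(
        new System.Data.Entity.DropCreateDatabaseIfModelChanges<ProductContext>());

    AreaRegistration.RegisterAllAreas();
    RegisterGlobalFilters(GlobalFilters.Filters);
    RegisterRoutes(RouteTable.Routes);
}

DropCreateDatabaseIfModelChanges suits a tutorial scenario; a production application would use a more conservative initializer, or none at all.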
The controller will use the standard routes defined by the Visual Studio ASP.NET MVC 3 template:

routes.MapRoute(
    "Default", // Route name
    "{controller}/{action}/{id}", // URL with parameters
    new { controller = "Home", action = "Index", id = UrlParameter.Optional } // Parameter defaults
);

Next, we need a view for displaying the product details; let’s call it Single and have it located under ~/Views/Product:

<%@ Page Language="C#" Inherits="System.Web.Mvc.ViewPage<Product>" %>
<!DOCTYPE html>

<html>
<head runat="server">
    <title>Product</title>
    <script src="/Scripts/jquery-1.7.2.js" type="text/javascript"></script>
    <script src="/Scripts/jquery-ui-1.8.19.js" type="text/javascript"></script>
    <script src="/Scripts/jquery.unobtrusive-ajax.js" type="text/javascript"></script>
    <script src="/Scripts/jquery.validate.js" type="text/javascript"></script>
    <script src="/Scripts/jquery.validate.unobtrusive.js" type="text/javascript"></script>
    <script type="text/javascript">
        function onFailure(error)
        {
        }

        function onSuccess(ctx)
        {
        }
    </script>
</head>
<body>
    <div>
        <%: this.Html.ValidationSummary(false) %>
        <% using (this.Ajax.BeginForm("Edit", "Product", new AjaxOptions { HttpMethod = FormMethod.Post.ToString(), OnSuccess = "onSuccess", OnFailure = "onFailure" })) { %>
            <%: this.Html.EditorForModel() %>
            <input type="submit" name="submit" value="Submit" />
        <% } %>
    </div>
</body>
</html>

Yes… I am using ASPX syntax… sorry about that! The onSuccess and onFailure functions are empty for now; they match the names referenced in the AjaxOptions, and we will fill them in shortly. I implemented an editor template for the Product class, which must be located in the ~/Views/Shared/EditorTemplates folder as file Product.ascx:

<%@ Control Language="C#" Inherits="System.Web.Mvc.ViewUserControl<Product>" %>
<div>
    <%: this.Html.HiddenFor(model => model.ProductId) %>
    <%: this.Html.HiddenFor(model => model.RowVersion) %>
    <fieldset>
        <legend>Product</legend>
        <div class="editor-label">
            <%: this.Html.LabelFor(model => model.Name) %>
        </div>
        <div class="editor-field">
            <%: this.Html.TextBoxFor(model => model.Name) %>
            <%: this.Html.ValidationMessageFor(model => model.Name) %>
        </div>
        <div class="editor-label">
            <%= this.Html.LabelFor(model => model.Price) %>
        </div>
        <div class="editor-field">
            <%= this.Html.TextBoxFor(model => model.Price) %>
            <%: this.Html.ValidationMessageFor(model => model.Price) %>
        </div>
    </fieldset>
</div>

One thing you’ll notice is that I am including both the ProductId and the RowVersion properties as hidden fields; they will come in handy later on, so that we know which product and version we are editing. The other thing is the included JavaScript files: jQuery, jQuery UI and the unobtrusive AJAX and validation scripts. Also, I am not using the Content extension method for translating relative URLs, because that way I would lose JavaScript IntelliSense for jQuery functions.

OK, so at this moment I want to add support for AJAX and optimistic concurrency control, so I write a controller method like this:

[HttpPost]
[AjaxOnly]
[Authorize]
public JsonResult Edit(Product product)
{
    if (this.TryValidateModel(product) == true)
    {
        using (ProductContext ctx = new ProductContext())
        {
            Boolean success = false;

            ctx.Entry(product).State = (product.ProductId == 0) ? EntityState.Added : EntityState.Modified;

            try
            {
                success = (ctx.SaveChanges() == 1);
            }
            catch (DbUpdateConcurrencyException)
            {
                ctx.Entry(product).Reload();
            }

            return (this.Json(new { Success = success, ProductId = product.ProductId, RowVersion = Convert.ToBase64String(product.RowVersion) }));
        }
    }
    else
    {
        return (this.Json(new { Success = false, ProductId = 0, RowVersion = String.Empty }));
    }
}

So, this method is only valid for HTTP POST requests (HttpPost), coming from AJAX (AjaxOnly, from MVC Futures), and from authenticated users (Authorize). It returns a JSON object, which is what you would normally use for AJAX requests, containing three properties:

- Success: a boolean flag;
- RowVersion: the current value of the ROWVERSION column as a Base-64 string;
- ProductId: the inserted product id, as coming from the database.

If the product is new, it will be inserted into the database, and its primary key will be returned in the ProductId property; Success will be set to true. If a DbUpdateConcurrencyException occurs, it means that the value in the RowVersion property does not match the current ROWVERSION column value in the database, so the record must have been modified between the time the page was loaded and the time we attempted to save the product. In this case, the controller just gets the new value from the database and returns it in the JSON object; Success will be false. Otherwise, the product will be updated, and Success, ProductId and RowVersion will all have their values set accordingly.

So let’s see how we can react to these situations on the client side. Specifically, we want to deal with these cases:

- The user is not logged in when the update/create request is made - perhaps the cookie expired;
- The optimistic concurrency check failed;
- All went well.

So, let’s change our view:
<%@ Page Language="C#" Inherits="System.Web.Mvc.ViewPage<Product>" %>
<%@ Import Namespace="System.Web.Security" %>

<!DOCTYPE html>

<html>
<head runat="server">
    <title>Product</title>
    <script src="/Scripts/jquery-1.7.2.js" type="text/javascript"></script>
    <script src="/Scripts/jquery-ui-1.8.19.js" type="text/javascript"></script>
    <script src="/Scripts/jquery.unobtrusive-ajax.js" type="text/javascript"></script>
    <script src="/Scripts/jquery.validate.js" type="text/javascript"></script>
    <script src="/Scripts/jquery.validate.unobtrusive.js" type="text/javascript"></script>
    <script type="text/javascript">
        function onFailure(error)
        {
            window.alert('An error occurred: ' + error);
        }

        function onSuccess(ctx)
        {
            if (typeof (ctx.Success) != 'undefined')
            {
                $('input#ProductId').val(ctx.ProductId);
                $('input#RowVersion').val(ctx.RowVersion);

                if (ctx.Success == false)
                {
                    window.alert('An error occurred while updating the entity: it may have been modified by third parties. Please try again.');
                }
                else
                {
                    window.alert('Saved successfully');
                }
            }
            else
            {
                if (window.confirm('Not logged in. Login now?') == true)
                {
                    document.location.href = '<%: FormsAuthentication.LoginUrl %>?ReturnURL=' + document.location.pathname;
                }
            }
        }
    </script>
</head>
<body>
    <div>
        <%: this.Html.ValidationSummary(false) %>
        <% using (this.Ajax.BeginForm("Edit", "Product", new AjaxOptions { HttpMethod = FormMethod.Post.ToString(), OnSuccess = "onSuccess", OnFailure = "onFailure" })) { %>
            <%: this.Html.EditorForModel() %>
            <input type="submit" name="submit" value="Submit" />
        <% } %>
    </div>
</body>
</html>

The implementation of the onSuccess function first checks whether the response contains a Success property; if not, the most likely cause is that the request was redirected to the login page (using Forms Authentication) because it wasn’t authenticated, so we navigate there as well, keeping a reference to the current page. It then saves the current values of the ProductId and RowVersion properties to their respective hidden fields. They will be sent on each successive post and will be used to determine whether the request is for adding a new product or for updating an existing one. The only thing missing is the ability to insert a new product after inserting/editing an existing one, which can be easily achieved using this snippet:

<input type="button" value="New" onclick="$('input#ProductId').val('');$('input#RowVersion').val('');" />

And that’s it.
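A side note on the AjaxOnly attribute used on the Edit action: it ships in the ASP.NET MVC Futures assembly, not in ASP.NET MVC itself. If you would rather not take that dependency, a minimal hand-rolled equivalent - a sketch, not the actual Futures implementation - could look like this:

public class AjaxOnlyAttribute : ActionFilterAttribute
{
    public override void OnActionExecuting(ActionExecutingContext filterContext)
    {
        // IsAjaxRequest checks for the X-Requested-With header that
        // jQuery (and the unobtrusive AJAX scripts) send automatically.
        if (!filterContext.HttpContext.Request.IsAjaxRequest())
        {
            filterContext.Result = new HttpNotFoundResult();
        }
    }
}

Decorating the action with this keeps direct (non-AJAX) browser posts from reaching it, while leaving the Ajax.BeginForm flow untouched.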

    Read the article
