Search Results


  • Processing Kinect v2 Color Streams in Parallel

    - by Chris Gardner
    Originally posted on: http://geekswithblogs.net/freestylecoding/archive/2014/08/20/processing-kinect-v2-color-streams-in-parallel.aspx

    I've really been enjoying being a part of the Kinect for Windows Developer's Preview. The new hardware has some really impressive capabilities. However, with great power comes great system specs. Unfortunately, my little laptop that could is not 100% up to the task; I've had to get a little creative. The most disappointing thing I've run into is that I can't always cleanly display the color camera stream in managed code. I managed to strip the code down to what I believe is the bare minimum:

        using( ColorFrame _ColorFrame = e.FrameReference.AcquireFrame() ) {
            if( null == _ColorFrame ) return;

            BitmapToDisplay.Lock();
            _ColorFrame.CopyConvertedFrameDataToIntPtr(
                BitmapToDisplay.BackBuffer,
                Convert.ToUInt32( BitmapToDisplay.BackBufferStride * BitmapToDisplay.PixelHeight ),
                ColorImageFormat.Bgra );
            BitmapToDisplay.AddDirtyRect(
                new Int32Rect( 0, 0, _ColorFrame.FrameDescription.Width, _ColorFrame.FrameDescription.Height ) );
            BitmapToDisplay.Unlock();
        }

    With this snippet, I'm placing the converted Bgra32 color stream directly on the BackBuffer of the WriteableBitmap. This gives me pretty smooth playback, but I still get the occasional freeze for half a second. After a bit of profiling, I discovered there were a few problems. The first problem is the size of the buffer, along with the conversion on the buffer. At this time, the raw image format of the data from the Kinect is Yuy2. This is great for direct video processing. It would be ideal if I had a WriteableVideo object in WPF. However, this is not the case. Further digging led me to the real problem. It appears that the SDK is converting the input serially. Let's think about this for a second. The color camera is a 1080p camera. As we should all know, this gives us a native resolution of 1920 x 1080, which produces 2,073,600 pixels. Yuy2 uses 4 bytes per 2 pixels, for a buffer size of 4,147,200 bytes. Bgra32 uses 4 bytes per pixel, for a buffer size of 8,294,400 bytes. The SDK appears to be doing this conversion on one thread. I started wondering if I could do this better myself. I mean, I have 8 cores in my system. Why can't I use them all? The first problem is converting a Yuy2 frame into a Bgra32 frame. It is NOT trivial. I spent a day of research on just how to do this. In the end, I didn't even produce the best algorithm possible, but it did work. After I managed to get that to work, I knew my next step was to get the conversion operation off the UI Thread. This was a simple process of throwing the work into a Task. Of course, this meant I had to marshal the final write to the WriteableBitmap back to the UI thread. Finally, I needed to vectorize the operation so I could run it safely in parallel. This was, mercifully, not quite as hard as I thought it would be. I had my loop return an index to a pair of pixels. From there, I had to tell the loop to do everything for this pair of pixels. If you're wondering why I did it for pairs of pixels, look back above at the specification for the Yuy2 format. I won't go into full detail on why each 4 bytes contains 2 pixels of information, but rest assured that there is a reason why the format is described in that way. The first working attempt at this algorithm successfully turned my poor laptop into a space heater. I very quickly brought all 8 cores up to about 97% usage and kept them there.
    That's when I remembered that obscure option in the Task Parallel Library that lets you limit the degree of parallelism used. After a little trial and error, I discovered 4 parallel tasks were enough for most cases. This yielded the following code:

        private byte ClipToByte( int p_ValueToClip ) {
            return Convert.ToByte( ( p_ValueToClip < byte.MinValue ) ? byte.MinValue : ( ( p_ValueToClip > byte.MaxValue ) ? byte.MaxValue : p_ValueToClip ) );
        }

        private void ColorFrameArrived( object sender, ColorFrameArrivedEventArgs e ) {
            if( null == e.FrameReference ) return;

            // If you do not dispose of the frame, you never get another one...
            using( ColorFrame _ColorFrame = e.FrameReference.AcquireFrame() ) {
                if( null == _ColorFrame ) return;

                byte[] _InputImage = new byte[_ColorFrame.FrameDescription.LengthInPixels * _ColorFrame.FrameDescription.BytesPerPixel];
                byte[] _OutputImage = new byte[BitmapToDisplay.BackBufferStride * BitmapToDisplay.PixelHeight];
                _ColorFrame.CopyRawFrameDataToArray( _InputImage );

                Task.Factory.StartNew( () => {
                    ParallelOptions _ParallelOptions = new ParallelOptions();
                    _ParallelOptions.MaxDegreeOfParallelism = 4;

                    Parallel.For( 0, Sensor.ColorFrameSource.FrameDescription.LengthInPixels / 2, _ParallelOptions, ( _Index ) => {
                        // See http://msdn.microsoft.com/en-us/library/windows/desktop/dd206750(v=vs.85).aspx
                        int _Y0 = _InputImage[( _Index << 2 ) + 0] - 16;
                        int _U = _InputImage[( _Index << 2 ) + 1] - 128;
                        int _Y1 = _InputImage[( _Index << 2 ) + 2] - 16;
                        int _V = _InputImage[( _Index << 2 ) + 3] - 128;

                        byte _R = ClipToByte( ( 298 * _Y0 + 409 * _V + 128 ) >> 8 );
                        byte _G = ClipToByte( ( 298 * _Y0 - 100 * _U - 208 * _V + 128 ) >> 8 );
                        byte _B = ClipToByte( ( 298 * _Y0 + 516 * _U + 128 ) >> 8 );

                        _OutputImage[( _Index << 3 ) + 0] = _B;
                        _OutputImage[( _Index << 3 ) + 1] = _G;
                        _OutputImage[( _Index << 3 ) + 2] = _R;
                        _OutputImage[( _Index << 3 ) + 3] = 0xFF; // A

                        _R = ClipToByte( ( 298 * _Y1 + 409 * _V + 128 ) >> 8 );
                        _G = ClipToByte( ( 298 * _Y1 - 100 * _U - 208 * _V + 128 ) >> 8 );
                        _B = ClipToByte( ( 298 * _Y1 + 516 * _U + 128 ) >> 8 );

                        _OutputImage[( _Index << 3 ) + 4] = _B;
                        _OutputImage[( _Index << 3 ) + 5] = _G;
                        _OutputImage[( _Index << 3 ) + 6] = _R;
                        _OutputImage[( _Index << 3 ) + 7] = 0xFF;
                    } );

                    Application.Current.Dispatcher.Invoke( () => {
                        BitmapToDisplay.WritePixels(
                            new Int32Rect( 0, 0, Sensor.ColorFrameSource.FrameDescription.Width, Sensor.ColorFrameSource.FrameDescription.Height ),
                            _OutputImage,
                            BitmapToDisplay.BackBufferStride,
                            0 );
                    } );
                } );
            }
        }

    This seemed to yield the results I wanted, but there was still the occasional stutter. This led me to what I realized was the second problem: there is a race condition between the UI Thread and me locking the WriteableBitmap so I can write the next frame. Again, I'm writing approximately 8MB to the back buffer. Then, I started thinking I could cheat. The Kinect is running at 30 frames per second. The WPF UI Thread runs at 60 frames per second. This made me not feel bad about exploiting the Composition Thread. I moved the bulk of the code from the FrameArrived handler into CompositionTarget.Rendering. Once I was in there, I polled for a frame and rendered it if it existed. Since, in theory, I'm only killing the Composition Thread every other hit, I decided I was ok with this for cases where silky smooth video performance REALLY mattered. This code looked like this:

        private byte ClipToByte( int p_ValueToClip ) {
            return Convert.ToByte( ( p_ValueToClip < byte.MinValue ) ? byte.MinValue : ( ( p_ValueToClip > byte.MaxValue ) ? byte.MaxValue : p_ValueToClip ) );
        }

        void CompositionTarget_Rendering( object sender, EventArgs e ) {
            using( ColorFrame _ColorFrame = FrameReader.AcquireLatestFrame() ) {
                if( null == _ColorFrame ) return;

                byte[] _InputImage = new byte[_ColorFrame.FrameDescription.LengthInPixels * _ColorFrame.FrameDescription.BytesPerPixel];
                byte[] _OutputImage = new byte[BitmapToDisplay.BackBufferStride * BitmapToDisplay.PixelHeight];
                _ColorFrame.CopyRawFrameDataToArray( _InputImage );

                ParallelOptions _ParallelOptions = new ParallelOptions();
                _ParallelOptions.MaxDegreeOfParallelism = 4;

                Parallel.For( 0, Sensor.ColorFrameSource.FrameDescription.LengthInPixels / 2, _ParallelOptions, ( _Index ) => {
                    // See http://msdn.microsoft.com/en-us/library/windows/desktop/dd206750(v=vs.85).aspx
                    int _Y0 = _InputImage[( _Index << 2 ) + 0] - 16;
                    int _U = _InputImage[( _Index << 2 ) + 1] - 128;
                    int _Y1 = _InputImage[( _Index << 2 ) + 2] - 16;
                    int _V = _InputImage[( _Index << 2 ) + 3] - 128;

                    byte _R = ClipToByte( ( 298 * _Y0 + 409 * _V + 128 ) >> 8 );
                    byte _G = ClipToByte( ( 298 * _Y0 - 100 * _U - 208 * _V + 128 ) >> 8 );
                    byte _B = ClipToByte( ( 298 * _Y0 + 516 * _U + 128 ) >> 8 );

                    _OutputImage[( _Index << 3 ) + 0] = _B;
                    _OutputImage[( _Index << 3 ) + 1] = _G;
                    _OutputImage[( _Index << 3 ) + 2] = _R;
                    _OutputImage[( _Index << 3 ) + 3] = 0xFF; // A

                    _R = ClipToByte( ( 298 * _Y1 + 409 * _V + 128 ) >> 8 );
                    _G = ClipToByte( ( 298 * _Y1 - 100 * _U - 208 * _V + 128 ) >> 8 );
                    _B = ClipToByte( ( 298 * _Y1 + 516 * _U + 128 ) >> 8 );

                    _OutputImage[( _Index << 3 ) + 4] = _B;
                    _OutputImage[( _Index << 3 ) + 5] = _G;
                    _OutputImage[( _Index << 3 ) + 6] = _R;
                    _OutputImage[( _Index << 3 ) + 7] = 0xFF;
                } );

                BitmapToDisplay.WritePixels(
                    new Int32Rect( 0, 0, Sensor.ColorFrameSource.FrameDescription.Width, Sensor.ColorFrameSource.FrameDescription.Height ),
                    _OutputImage,
                    BitmapToDisplay.BackBufferStride,
                    0 );
            }
        }
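
    For context, here is a minimal sketch of the wiring the post assumes but never shows: creating the WriteableBitmap sized to the color stream and subscribing to the per-render callback. The field names (Sensor, FrameReader, BitmapToDisplay) follow the post; the constructor placement is an assumption, and the sensor acquisition call matches the released v2 SDK, which may differ slightly from the Developer Preview builds.

        public MainWindow() {
            InitializeComponent();

            Sensor = KinectSensor.GetDefault();
            FrameReader = Sensor.ColorFrameSource.OpenReader();

            FrameDescription _FrameDescription = Sensor.ColorFrameSource.FrameDescription;
            BitmapToDisplay = new WriteableBitmap(
                _FrameDescription.Width,
                _FrameDescription.Height,
                96.0, 96.0,
                PixelFormats.Bgra32,
                null );

            Sensor.Open();

            // Fires once per WPF render pass (~60 Hz), i.e. roughly every
            // other Kinect color frame at 30 fps.
            CompositionTarget.Rendering += CompositionTarget_Rendering;
        }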

    Read the article

  • Stumbling Through: Visual Studio 2010 (Part IV)

    So finally we get to the fun part: the fruits of all of our middle-tier/back-end labors of generating classes to interface with an XML data source, which the previous posts were about, can now be presented quickly and easily to an end user. I think. We'll see. We'll be using a WPF window to display all of our various MFL information that we've collected in the two XML files, and we'll provide a means of adding, updating and deleting each of these entities using as little code as possible. Additionally, I would like to dig into the performance of this solution, as well as its flexibility if we were to modify the underlying XML schema. So first things first, let's create a WPF project and include our XML data in a data folder within. On the main window, we'll drag out the following controls:

    - A combo box to contain all of the teams
    - A list box to show the players of the selected team, along with add/delete player buttons
    - A text box tied to the selected player's name, with a save button to save any changes made to the player name
    - A combo box of all the available positions, tied to the currently selected player's position
    - A data grid tied to the statistics of the currently selected player, with add/delete statistic buttons

    This monstrosity of a form and its associated project will look like this (don't forget to reference the DataFoundation project from the Presentation project). To get to the visual data binding, as we learned in a previous post, you have to first make sure the project containing your bindable classes is compiled. Do so, and then open the Data Sources pane to add a reference to the Teams and Positions classes in the DataFoundation project. Why only Team and Position? Well, we will get to Players from Teams, and Statistics from Players, so there is no need to make an interface for them, as we'll see in a second. As for Positions, we'll need a way to bind the dropdown to ALL positions; they don't appear underneath any of the other classes, so we need to reference it directly. After adding these guys, expand every node in your Data Sources pane and see how the Team node allows you to drill into Players and then Statistics. This is why there was no need to bring in a reference to those classes for the UI we are designing.

    Now for the seriously hard work of binding all of our controls to the correct data sources. Drag the following items from the Data Sources pane to the specified control on the window design canvas:

    - Team.Name > Teams combo box
    - Team.Players.Name > Players list box
    - Team.Players.Name > Player name text box
    - Team.Players.Statistics > Statistics data grid
    - Position.Name > Positions combo box

    That is it! Really? Well, no, not really; there is one caveat here, in that the Positions combo box is not bound to the selected player's position. To do so, we will apply a binding to the position combo box's SelectedValue to point to the current player's PositionId value. That should do the trick; now all we need to worry about is loading the actual data. Sadly, it appears as if we will need to drop to code in order to invoke our IO methods to load all teams and positions. At least Visual Studio kindly created the stubs for us to do so. Note the weirdness with the InitializeDataFiles call; that is my current means of telling an IO where to load the data for each of the entities. I haven't thought of a more intuitive way than that yet, but do note that all data is loaded from Teams.xml except for positions, which are loaded from Lookups.xml.
    I think that may be all we need to do to at least load all of the data, so let's run it and see. Yay! All of our glorious data is being displayed! Er, wait, what's up with the position dropdown? Why is it red? Let's select the RB and see if everything updates. Crap, the position didn't update to reflect the selected player, but everything else did. Where did we go wrong in binding the position to the selected player? Thinking about it a bit and comparing it to how traditional data binding works, I realize that we never set the value member (or some similar property) to tell the control to join the Id of the source (positions) to the position Id of the player. I don't see a similar property to that on the combo box control, but I do see a property named SelectedValuePath that might be it, so I set it to Id and run the app again. Hey, all right! No red box around the positions combo box. Unfortunately, selecting the RB does not update the dropdown to point to Runningback. Hmmm. Now what could it be? Maybe the problem is that we are loading teams before we are loading positions, so when it binds the position Id, all of the positions aren't loaded yet. I went to the code-behind and switched things so position loads first, and no dice. Same result when I run. Why? WHY? Ok, ok, calm down, take a deep breath. Get something with caffeine or sugar (preferably both) and think rationally. Ok, a gigantic chocolate chip cookie and a mountain dew chaser have never let me down in the past, so don't fail me now! Ah ha! Of course! I didn't even have to finish the mountain dew and I think I've got it: Data Context. By default, when setting up the selected value binding for the dropdown, the data context was list_team. I don't even know what the heck list_team is; we want it to be bound to our team players view source resource instead. Running it now and selecting the various players: done and done. Everything read and bound, thank you caffeine and sugar! Oh, and thank you Visual Studio 2010.

    Let's wire up some of those buttons now. There has got to be a better way to do this, but it works for now. What the add player button does is add a new player object to the currently selected team. Unfortunately, I couldn't get the new object to automatically show up in the players list (something about not using an observable collection; gotta look into this), so I just save the change immediately and reload the screen. Terrible, but it works.

    Let's go after something easier: the save button. By default, as we type in new text for the player's name, it is showing up in the list box as updated. Cool! Why couldn't my add new player logic do that? Anyway, the save button should be as simple as invoking MFL.IO.Save for the selected player, like this:

        MFL.IO.Save((MFL.Player)lbTeamPlayers.SelectedItem, true);

    Surprisingly, that worked on the first try. Let's see if we get as lucky with the Delete player button:

        MFL.IO.Delete((MFL.Player)lbTeamPlayers.SelectedItem);
        Refresh();

    Note the use of the Refresh method again; I can't seem to figure out why updates to the underlying data source are immediately reflected, but adds and deletes are not. That is a problem for another day, and again my hunch is that I should be binding to something more complex than IEnumerable (like an observable collection).
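
    That hunch is right, and worth making concrete: WPF item controls only pick up adds and deletes when the bound collection raises change notifications through INotifyCollectionChanged, which List<T> and plain IEnumerable do not. A minimal sketch of the fix, assuming the generated Team class can be reshaped to expose its players this way (the Team and Player names follow the post; the exact class shape is an assumption):

        using System.Collections.ObjectModel;

        public class Team
        {
            // ObservableCollection raises CollectionChanged, so a bound
            // ListBox sees adds and deletes without a manual Refresh().
            public ObservableCollection<Player> Players { get; private set; }

            public Team()
            {
                Players = new ObservableCollection<Player>();
            }

            public void AddPlayer(Player player)
            {
                Players.Add(player); // the UI updates immediately
            }
        }

    Edits to an existing player's name already showed up because they flow through property change notification on the Player itself; adds and deletes are collection changes, which is exactly what ObservableCollection reports.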
    Now that examples of the basic CRUD methods are wired up, I want to quickly investigate the performance of this beast. I'm going to make a special button to add 30 teams, each with 50 players and 10 seasons' worth of stats. If my math is right, that will end up with 15,000 rows of data, a pretty hefty amount for an XML file. The save of all this new data took a little over a minute, but that is acceptable because we wouldn't typically be saving batches of 15k records, and the resulting XML file size is a little over a megabyte. Not huge, but big enough to see some read performance numbers... or so I thought. It reads this file and renders the first team in under a second. That is unbelievable, but we are lazy loading and the file really wasn't that big. I will increase it to 50 teams with 100 players and 20 seasons each - 100,000 rows. It took a year and a half to save all of that data, and resulted in an 8 megabyte file. Seriously, if you are loading XML files this large, get a freaking database! Despite this, it STILL takes under a second to load and render the first team, which is interesting mostly because I thought that it was loading that entire 8 MB XML file behind the scenes. I have to say that I am quite impressed with the performance of the LINQ to XML approach, particularly since I took no efforts to optimize any of this code and was fairly new to the concept from the start. There might be some merit to this little project after all. Look out SQL Server and Oracle: use XML files instead! Next up, I am going to completely pull the rug out from under the UI and change a number of entities in our model. How well will the code be regenerated? How much effort will be required to tie things back together in the UI?

    Read the article

  • MVVM Light V4 preview (BL0014) release notes

    - by Laurent Bugnion
    I just pushed to Codeplex an update to the MVVM Light source code. This is an early preview containing some of the features that I want to release later under version 4. If you find these features useful for your project, please download the source code and build the assemblies. I will greatly appreciate any issue reports. This version is labeled “V4.0.0.0/BL0014”. The “BL” string is an old habit that we used in my days at Siemens Building Technologies, called a “base level”. Somehow I like this way of incrementing the “base level” independently of any other consideration (such as alpha, beta, CTP, RTM etc.) and continue to use it to tag my software versions. In Microsoft parlance, you could say that this is an early CTP of MVVM Light V4.

    Caveat: The code is unit tested, but as we all know this does not mean that there are no bugs. This code has not yet been used in production. Again, your help in testing this is greatly appreciated, so please report all bugs to me!

    What's new? The following features have been implemented:

    - Misc: Various “maintenance work”. All WPF assemblies (that is .NET35 and .NET4) now allow partially trusted callers. It means that you can use them in an XBAP in partial trust mode.
    - Testing: Various test updates. Added Windows Phone 7 unit tests. Note: For Windows Phone 7, due to an issue in the unit test framework, not all tests can be executed. I had to isolate those tests for the moment. The error was reported to Microsoft.
    - ViewModelBase: The constructor is now public to allow serialization (especially useful on the phone to tombstone the state). ViewModelBase.MessengerInstance now returns Messenger.Default unless it is set explicitly. Previously, MessengerInstance was returning null, which was complicating the code. Two new ways to raise the PropertyChanged event have been added. See below for details.
    - Messenger: Updated the IMessenger interface with all public members from the Messenger class. Previously some members were missing. A new Unregister method is now available, allowing you to unregister a recipient for a given token.
    - RelayCommand: RaiseCanExecuteChanged now acts the same in Windows Presentation Foundation as in Silverlight. In previous versions, I was relying on the CommandManager to raise the CanExecuteChanged event in WPF. However, it was found to be too unreliable, and a more direct way of raising the event was found preferable. See below for details.

    Raising the PropertyChanged event: A very much requested update is now included: the ability to raise the PropertyChanged event in a viewmodel without using “magic strings”. Personally, I don't see strings as a major issue, thanks to two features of the MVVM Light Toolkit:

    - In the DEBUG configuration, every time the RaisePropertyChanged method is called, the name of the property is checked against all existing properties of the viewmodel. Should the property name be misspelled (because of a typo or refactoring), an exception is thrown, notifying the developer that something is wrong. To avoid impacting performance, this check is only made in the DEBUG configuration, but that should be enough to warn developers in case they miss a rename.
    - The property name is defined as a public constant in the “mvvminpc” code snippet. This allows checking the property name from another class (for example if the PropertyChanged event is handled in the view). It also allows changing the property name in one place only.
    However, these two safeguards didn't satisfy some of the users, who requested another way to raise the PropertyChanged event. In V4, you can now do the following:

    Using lambdas:

        private int _myProperty;
        public int MyProperty
        {
            get { return _myProperty; }
            set
            {
                if (_myProperty == value) { return; }
                _myProperty = value;
                RaisePropertyChanged(() => MyProperty);
            }
        }

    This raises the PropertyChanged event using a lambda expression instead of the property name. Light reflection is used to get the name. This supports IntelliSense and can easily be refactored. You can also broadcast a PropertyChangedMessage using the Messenger.Default instance with:

        private int _myProperty;
        public int MyProperty
        {
            get { return _myProperty; }
            set
            {
                if (_myProperty == value) { return; }
                var oldValue = _myProperty;
                _myProperty = value;
                RaisePropertyChanged(() => MyProperty, oldValue, value, true);
            }
        }

    Using no arguments: When the RaisePropertyChanged method is called within a setter, you can also omit the property name altogether. This will fail if executed outside of the setter, however. Also, to avoid confusion, there is no way to broadcast the PropertyChangedMessage using this syntax.

        private int _myProperty;
        public int MyProperty
        {
            get { return _myProperty; }
            set
            {
                if (_myProperty == value) { return; }
                _myProperty = value;
                RaisePropertyChanged();
            }
        }

    The old way: Of course the “old” way is still supported, without broadcast:

        public const string MyPropertyName = "MyProperty";
        private int _myProperty;
        public int MyProperty
        {
            get { return _myProperty; }
            set
            {
                if (_myProperty == value) { return; }
                _myProperty = value;
                RaisePropertyChanged(MyPropertyName);
            }
        }

    And with broadcast:

        public const string MyPropertyName = "MyProperty";
        private int _myProperty;
        public int MyProperty
        {
            get { return _myProperty; }
            set
            {
                if (_myProperty == value) { return; }
                var oldValue = _myProperty;
                _myProperty = value;
                RaisePropertyChanged(MyPropertyName, oldValue, value, true);
            }
        }

    Performance considerations: It is well known that using reflection takes more time than using a string constant to get the property name. However, after measuring on all platforms, I found the differences to be very small. I will measure more and submit the results to the community for evaluation, because some of the results are actually surprising (for example, using the Messenger to broadcast a PropertyChangedMessage does not significantly increase the time taken to raise the PropertyChanged event and update the bindings). For now, I submit this code to you, and would be delighted to hear about your own results.

    Raising the CanExecuteChanged event manually: In WPF, until now, the CanExecuteChanged event for a RelayCommand was raised automatically. Or rather, an attempt was made to raise it, using a feature that is only available in WPF called the CommandManager. This class monitors the UI and, when something occurs, queries the state of the CanExecute delegate for all the commands. However, this proved unreliable for the purpose of MVVM: since the value of the CanExecute delegate very often changes according to non-UI events (for example something changing in the viewmodel or in the model), raising the CanExecuteChanged event manually is necessary. In Silverlight, the CommandManager does not exist, so we had to raise the event manually from the start. This proved more reliable, and I have now changed the WPF implementation of the RaiseCanExecuteChanged method to be exactly the same as in Silverlight.
    For instance, if a command must be enabled when a string property is set to a value other than null or empty string, you can do:

        public MainViewModel()
        {
            MyTestCommand = new RelayCommand(
                () => DoSomething(),
                () => !string.IsNullOrEmpty(MyProperty));
        }

        public const string MyPropertyName = "MyProperty";
        private string _myProperty = string.Empty;
        public string MyProperty
        {
            get { return _myProperty; }
            set
            {
                if (_myProperty == value) { return; }
                _myProperty = value;
                RaisePropertyChanged(MyPropertyName);
                MyTestCommand.RaiseCanExecuteChanged();
            }
        }

    Logo update: I made a minor change to the logo. Some people found the lack of the word “light” (as in MVVM Light Toolkit) confusing. I thought it was cool, because the feather suggests the idea of lightness; however, I can see the point. So I added the word “light” to the logo. Things should be quite clear now.

    What's next? This is only the first of a series of releases that will bring MVVM Light to V4. In the next weeks, I will continue to add some very requested features and correct some issues in the code. I will probably continue this fashion of releasing the changes to the public as source code through Codeplex. I would be very interested to hear what you think of that, and to get feedback about the changes.

    Cheers, Laurent

    Laurent Bugnion (GalaSoft) Subscribe | Twitter | Facebook | Flickr | LinkedIn
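
    As a side note on the broadcast option shown above: the receiving end registers for the typed PropertyChangedMessage through the Messenger. A short sketch using the toolkit's Messenger API, with a hypothetical recipient viewmodel:

        public class AnotherViewModel : ViewModelBase
        {
            public AnotherViewModel()
            {
                // Receives the PropertyChangedMessage<int> broadcast by the
                // sender's RaisePropertyChanged(..., oldValue, value, true).
                Messenger.Default.Register<PropertyChangedMessage<int>>(
                    this,
                    message =>
                    {
                        if (message.PropertyName == "MyProperty")
                        {
                            var newValue = message.NewValue; // react to the change
                        }
                    });
            }
        }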

    Read the article

  • [OpenGL ES - Android] Better way to generate tiles

    - by Inoe
    Hi! I'll start by saying that I'm REALLY new to OpenGL ES (I started yesterday =), but I do have some Java and other language experience. I've looked at a lot of tutorials, of course Nehe's ones, and my work is mainly based on that. As a test, I started creating a "tile generator" in order to create a small Zelda-like game (just moving a dude in a textured square would be awesome :p). So far, I have achieved a working tile generator. I define a char map[][] array to store which tile goes where:

        private char[][] map = {
            {0, 0, 20, 11, 11, 11, 11, 4, 0, 0},
            {0, 20, 16, 12, 12, 12, 12, 7, 4, 0},
            {20, 16, 17, 13, 13, 13, 13, 9, 7, 4},
            {21, 24, 18, 14, 14, 14, 14, 8, 5, 1},
            {21, 22, 25, 15, 15, 15, 15, 6, 2, 1},
            {21, 22, 23, 0, 0, 0, 0, 3, 2, 1},
            {21, 22, 23, 0, 0, 0, 0, 3, 2, 1},
            {26, 0, 0, 0, 0, 0, 0, 3, 2, 1},
            {0, 0, 0, 0, 0, 0, 0, 0, 0, 1},
            {0, 0, 0, 0, 0, 0, 0, 0, 0, 1}
        };

    It's working, but I'm not happy with it; I'm sure there is a better way to do these things:

    1) Loading textures: I create an ugly-looking array containing the tiles I want to use on that map:

        private int[] textures = {
            R.drawable.herbe,                  //0
            R.drawable.murdroite_haut,         //1
            R.drawable.murdroite_milieu,       //2
            R.drawable.murdroite_bas,          //3
            R.drawable.angledroitehaut_haut,   //4
            R.drawable.angledroitehaut_milieu, //5
        };

    (I cut this short on purpose; I currently load 27 tiles.) All of these are stored in the drawable folder; each one is a 16*16 tile. I then use this array to generate the textures and store them in a HashMap for later use:

        int[] tmp_tex = new int[textures.length];
        gl.glGenTextures(textures.length, tmp_tex, 0);
        texturesgen = tmp_tex; //Store the generated names in texturesgen

        for(int i=0; i < textures.length; i++) {
            //Bitmap bmp = BitmapFactory.decodeResource(context.getResources(), textures[i]);
            InputStream is = context.getResources().openRawResource(textures[i]);
            Bitmap bitmap = null;
            try {
                //BitmapFactory is an Android graphics utility for images
                bitmap = BitmapFactory.decodeStream(is);
            } finally {
                //Always clear and close
                try {
                    is.close();
                    is = null;
                } catch (IOException e) {
                }
            }

            // Get a new texture name and load it up
            this.textureMap.put(new Integer(textures[i]), new Integer(i));
            int tex = tmp_tex[i];
            gl.glBindTexture(GL10.GL_TEXTURE_2D, tex);
            //Create Nearest Filtered Texture
            gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MIN_FILTER, GL10.GL_NEAREST);
            gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_MAG_FILTER, GL10.GL_LINEAR);
            //Different possible texture parameters, e.g. GL10.GL_CLAMP_TO_EDGE
            gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_S, GL10.GL_REPEAT);
            gl.glTexParameterf(GL10.GL_TEXTURE_2D, GL10.GL_TEXTURE_WRAP_T, GL10.GL_REPEAT);
            //Use the Android GLUtils to specify a two-dimensional texture image from our bitmap
            GLUtils.texImage2D(GL10.GL_TEXTURE_2D, 0, bitmap, 0);
            bitmap.recycle();
        }

    I'm quite sure there is a better way to handle that... I just wasn't able to figure it out. If someone has an idea, I'm all ears.
    2) Drawing the tiles: What I did was create a single square and a single texture map:

        /** The initial vertex definition */
        private float vertices[] = {
            -1.0f, -1.0f, 0.0f,  //Bottom Left
             1.0f, -1.0f, 0.0f,  //Bottom Right
            -1.0f,  1.0f, 0.0f,  //Top Left
             1.0f,  1.0f, 0.0f   //Top Right
        };

        /** Mapping coordinates for the vertices */
        private float texture[] = {
            0.0f, 1.0f,
            1.0f, 1.0f,
            0.0f, 0.0f,
            1.0f, 0.0f
        };

    Then, in my draw function, I loop through the map to define the texture to use (after pointing to and enabling the buffers):

        for(int y = 0; y < Y; y++) {
            for(int x = 0; x < X; x++) {
                tile = map[y][x];
                try {
                    //Get the texture from the HashMap
                    int textureid = ((Integer) this.textureMap.get(new Integer(textures[tile]))).intValue();
                    gl.glBindTexture(GL10.GL_TEXTURE_2D, this.texturesgen[textureid]);
                } catch(Exception e) {
                    return;
                }

                //Draw the vertices as triangle strip
                gl.glDrawArrays(GL10.GL_TRIANGLE_STRIP, 0, vertices.length / 3);
                //A square takes 2x, so I move +2x before drawing the next tile
                gl.glTranslatef(2.0f, 0.0f, 0.0f);
            }
            //Go back to the beginning of the map X-wise and move 2y down before drawing the next line
            gl.glTranslatef(-(float)(2*X), -2.0f, 0.0f);
        }

    This works great, but I really think that on a 1000*1000 or bigger map it will lag like hell (as a reminder, this is a typical Zelda world map: http://vgmaps.com/Atlas/SuperNES/LegendOfZelda-ALinkToThePast-LightWorld.png ). I've read things about Vertex Buffer Objects and Display Lists, but I couldn't find a good tutorial, and nobody seems to agree on which one is best / has the better support (T1 and Nexus One are ages away). I think that's it. I've put a lot of code here, but I think it helps. Thanks in advance!

    Read the article

  • Scrum Master Stephen Forte Teaches Agile Development, Silverlight and BI at GIDS 2010

    - by rajesh ahuja
    Great Indian Developer Summit 2010 – Gold Standard for India's Software Developer Ecosystem

    Bangalore, March 25, 2010: The author of several books on application and database development, including Programming SQL Server 2008, and certified Scrum Master Stephen Forte is coming this summer to India's biggest summit for the developer ecosystem - the Great Indian Developer Summit. At the summit, Stephen will conduct a workshop guaranteed to give attendees a jump start in taking the certified scrum master exam. Scrum is one of the most popular Agile project management and development methods, and it is starting to be adopted at major corporations and on very large projects. After an introduction to the basics of Scrum, like project planning and estimation, the Scrum Master, team, product owner and burn down, and of course the daily Scrum, Stephen will show many real-world applications of the methodology drawn from his own experience as a Scrum Master. Negotiating with the business, estimation and team dynamics are all discussed, as well as how to use Scrum in small organizations, large enterprise environments and consulting environments. Stephen will also discuss using Scrum with virtual teams and in an off-shoring environment. He will then take a look at the tools we will use for Agile development, including planning poker, unit testing, and much more.

    On 20th April at the GIDS.NET Conference, Stephen will also conduct a series of sessions on Microsoft computing technologies. He will teach how to build data-driven, n-tier Rich Internet Applications (RIA) with Silverlight 4.0. Line-of-business (LOB) applications in Silverlight 4.0 are easy to build by tapping the power of WCF RIA Services, the Silverlight Toolkit, and elevated out-of-browser support. Stephen's demo-centric session will walk you through an example of building a LOB application with Silverlight 4.0. See how Silverlight and WCF RIA Services support domain logic, services, data binding, validation, server-based paging, authentication, authorization and much more. Silverlight 4.0 means business.

    Silverlight runs C# and Visual Basic code, and so it seems natural that a business application might share some code between the Silverlight client and its ASP.NET Web server. You may want to run some code client-side for interactivity, but re-run that code on the server for security or reliability. This is possible, and there are several techniques you can use to accomplish this goal. In Stephen's second talk, learn about the various techniques and their pros and cons. Some techniques work better in C#, others in VB. Still others are simpler with a little extra tooling or code generation. Any serious Silverlight business application will almost certainly face this issue, and this session gets you going fast.

    In the third talk, Stephen will explain how to properly architect and deploy a BI application using a mix of some exciting new tools and some old familiar ones. He will start with a traditional relational, transaction-centric database (OLTP) and explore ways to build a data warehouse (OLAP), looking at the star and snowflake schemas. Next he will look at the process of extracting, transforming, and loading (ETL) your OLTP data into your data warehouse. Different techniques for ETL will be described and the various tradeoffs will be discussed. Then he will look at using the warehouse for reporting, drill down, and data analysis in Microsoft Excel's PowerPivot 2010.
    The session will round off by showing how to properly build a cube and a data analysis application on top of that cube, and will conclude by looking at some tools to help with the data visualization process.

    Every year, GIDS is a game changer for several thousand IT professionals, providing them with a competitive edge over their peers, enlightening them with bleeding-edge information most useful in their daily jobs, helping them network with world-class experts and visionaries, and providing them with a much-needed thrust in their careers. Attend the Great Indian Developer Summit to gain the information, education and solutions you seek, from post-conference workshops and breakout sessions by expert instructors to keynotes by industry heavyweights, enhanced networking opportunities, and more.

    About Great Indian Developer Summit: Great Indian Developer Summit is the gold standard for India's software developer ecosystem for gaining exposure to and evaluating new projects, tools, services, platforms, languages, software and standards. Packed with premium knowledge, action plans and advice from been-there-done-it veterans, creators, and visionaries, the 2010 edition of Great Indian Developer Summit features focused sessions, case studies, workshops and power panels that will transform you into a force to reckon with. Featuring 3 co-located conferences: GIDS.NET, GIDS.Web, GIDS.Java and an exclusive day of in-depth tutorials - GIDS.Workshops, from 20 April to 24 April at the IISc campus in Bangalore. At GIDS you'll participate in hundreds of sessions encompassing the full range of Microsoft computing, Java, Agile, RIA, Rich Web, open source/standards, languages, frameworks and platforms; practical tutorials that deep dive into technical skills and best practices; inspirational keynote presentations; an Expo Hall featuring dozens of the latest projects and products; engaging networking events; and interaction with the best and brightest speakers from around the world.

    For further information on GIDS 2010, please visit the summit on the web: http://www.developersummit.com/

    A Saltmarch Media Press Release. E: [email protected] Ph: +91 80 4005 1000

    Read the article

  • How to export SSIS to Microsoft Excel without additional software?

    - by Dr. Zim
    This question is long-winded because I have been updating it over a very long time while trying to get SSIS to properly export Excel data. I managed to solve this issue, although not correctly. Aside from someone providing a correct answer, the solution listed in this question is not terrible.

    The only answer I found was to create a single-row named range wide enough for my columns. In the named range, put sample data and hide it. SSIS appends the data and reads metadata from the single row (that is close enough for it to drop stuff in it). The data takes the format of the hidden single row. This allows headers, etc. WOW, what a pain in the butt. It will take over 450 days of exports to recover the time lost. However, I still love SSIS and will continue to use it because it is still way better than Filemaker LOL. My next attempt will be doing the same thing in the report server.

    Original question notes: If you are in the SQL Server Integration Services designer and want to export data to an Excel file starting on something other than the first line, let's say the fourth line, how do you specify this? I tried going into the Excel Destination of the Data Flow, changed the AccessMode to OpenRowSet from Variable, then set the variable to "YPlatters$A4:I20000". This fails, saying it cannot find the sheet. The sheet is called YPlatters. I thought you could specify (Sheet$)(Starting Cell):(Ending Cell)?

    Update: Apparently in Excel you can select a set of cells and name them with the name box. This allows you to select the name instead of the sheet, without the $ dollar sign. Oddly enough, whatever range you specify, it appends the data to the next row after the range. Oddly, as you add data, it increases the named selection's row count. Another odd thing is that the data takes the format of the last line of the range specified. My header rows are bold. If I specify a range that ends with the header row, the data appends to the row below and makes all the entries bold. If you specify one row lower, it puts a blank line between the header row and the data, but the data is not bold.

    Another update: No matter what I try, SSIS samples the "first row" of the file and sets the metadata according to what it finds. However, if you have sample data that has a value of zero but is formatted as the first row, it treats that column as text and inserts numeric values with a single quote in front ('123.34). I also tried headers that do not reflect the data types of the columns. I tried changing the metadata of the Excel destination, but it always changes it back when I run the project, then fails saying it will truncate data. If I tell it to ignore errors, it imports everything except that column. Several days of several hours apiece later...

    Another update: I tried every combination. A mostly working example is to create the named range starting with the column headers. Format your column headers as you want the data to look, since the data takes on this format. In my example, these exist from A4 to E4, which is my defined range. SSIS appends to the row after the defined range, so defining A4 to E68 appends the rows starting at A69. You define the Connection as having the first row contain the field names. It takes on the metadata of the header row (oddly, not the second row), and it guesses at the data type, not the formatted data type of the column, i.e., headers are text, so all my metadata is text. If your headers are bold, so is all of your data. I even tried making a sample data row without success...
    I don't think anyone actually uses Excel with the default MS SSIS export. If you could define the "insert range" (A5 to E5) with no header row and format those columns (currency, not bold, etc.) without it skipping a row in Excel, this would be very helpful. From what I gather, no one uses SSIS to export Excel without a third-party connection manager. Any ideas on how to set this up properly so that data is formatted correctly, i.e., the metadata read from Excel matches the real data, and formatting inherits from the first row of data, not the headers in Excel?

    One last update (July 17, 2009): I got this to work very well. One thing I added was IMEX=1 in the Excel connection string: "Excel 8.0;HDR=Yes;IMEX=1". This forces Excel (I think) to look at all rows to see what kind of data is in them. Generally, this means it does not drop information: say, for instance, you have a zip code and then about 9 rows down you have a zip+4; without this, Excel blanks that field entirely, without error. With IMEX=1, it recognizes that Zip is actually a character field instead of numeric.

    And of course, one more update (August 27, 2009): IMEX=1 will succeed in importing data with missing contents in the first 8 rows, but it will fail exporting data where no data exists. So, have it on your import connection string, but not your export Excel connection string. I have to say, after so much fiddling, it works pretty well.
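
    For reference, the same IMEX=1 behavior can be seen outside SSIS with a plain ADO.NET read through the Jet OLE DB provider. This is only a minimal sketch under the question's setup; the file path and sheet name are placeholders, and only the Extended Properties string comes from the notes above.

        using System.Data;
        using System.Data.OleDb;

        class ExcelImportSketch
        {
            static DataTable ReadSheet()
            {
                string connectionString =
                    @"Provider=Microsoft.Jet.OLEDB.4.0;" +
                    @"Data Source=C:\Exports\YPlatters.xls;" +
                    @"Extended Properties=""Excel 8.0;HDR=Yes;IMEX=1"";";

                using (var connection = new OleDbConnection(connectionString))
                using (var adapter = new OleDbDataAdapter("SELECT * FROM [YPlatters$]", connection))
                {
                    var table = new DataTable();
                    // With IMEX=1, mixed-type columns (zip and zip+4) come back
                    // as text instead of being silently blanked.
                    adapter.Fill(table);
                    return table;
                }
            }
        }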

    Read the article

  • ASP.NET MVC Checkbox Group

    - by Greg Ogle
    I am trying to formulate a work-around for the lack of a "checkbox group" in ASP.NET MVC. The typical way to implement this is to have check boxes of the same name, each with the value it represents:

        <input type="checkbox" name="n" value="1" />
        <input type="checkbox" name="n" value="2" />
        <input type="checkbox" name="n" value="3" />

    When submitted, it will comma-delimit all checked values into the request item "n", so Request["n"] == "1,2,3" if all three are checked when submitted. In ASP.NET MVC, you can have a parameter of n as an array to accept this post:

        public ActionResult ActionName( int[] n ) { ... }

    All of the above works fine. The problem I have is that when validation fails, the check boxes are not restored to their checked state. Any suggestions?

    Problem Code: (I started with the default ASP.NET MVC project)

    Controller:

        public class HomeController : Controller
        {
            public ActionResult Index()
            {
                var t = getTestModel("First");
                return View(t);
            }

            [AcceptVerbs(HttpVerbs.Post)]
            public ActionResult Index(TestModelView t)
            {
                if(String.IsNullOrEmpty( t.TextBoxValue))
                    ModelState.AddModelError("TextBoxValue", "TextBoxValue required.");
                var newView = getTestModel("Next");
                return View(newView);
            }

            private TestModelView getTestModel(string prefix)
            {
                var t = new TestModelView();
                t.Checkboxes = new List<CheckboxInfo>()
                {
                    new CheckboxInfo(){Text = prefix + "1", Value="1", IsChecked=false},
                    new CheckboxInfo(){Text = prefix + "2", Value="2", IsChecked=false}
                };
                return t;
            }
        }

        public class TestModelView
        {
            public string TextBoxValue { get; set; }
            public List<CheckboxInfo> Checkboxes { get; set; }
        }

        public class CheckboxInfo
        {
            public string Text { get; set; }
            public string Value { get; set; }
            public bool IsChecked { get; set; }
        }

    ASPX:

        <% using( Html.BeginForm() ){ %>
        <p><%= Html.ValidationSummary() %></p>
        <p><%= Html.TextBox("TextBoxValue")%></p>
        <p><% int i = 0;
              foreach (var cb in Model.Checkboxes) { %>
            <input type="checkbox" name="Checkboxes[<%=i%>]" value="<%= Html.Encode(cb.Value) %>" <%=cb.IsChecked ? "checked=\"checked\"" : String.Empty %> /><%= Html.Encode(cb.Text)%><br />
        <% i++; } %></p>
        <p><input type="submit" value="submit" /></p>
        <% } %>

    Working Code:

    Controller:

        [AcceptVerbs(HttpVerbs.Post)]
        public ActionResult Index(TestModelView t)
        {
            if(String.IsNullOrEmpty( t.TextBoxValue))
            {
                ModelState.AddModelError("TextBoxValue", "TextBoxValue required.");
                return View(t);
            }
            var newView = getTestModel("Next");
            return View(newView);
        }

    ASPX:

        <% int i = 0;
           foreach (var cb in Model.Checkboxes) { %>
            <input type="checkbox" name="Checkboxes[<%=i%>].IsChecked" <%=cb.IsChecked ? "checked=\"checked\"" : String.Empty %> value="true" />
            <input type="hidden" name="Checkboxes[<%=i%>].IsChecked" value="false" />
            <input type="hidden" name="Checkboxes[<%=i%>].Value" value="<%= cb.Value %>" />
            <input type="hidden" name="Checkboxes[<%=i%>].Text" value="<%= cb.Text %>" />
            <%= Html.Encode(cb.Text)%><br />
        <% i++; } %></p>
        <p><input type="submit" value="submit" /></p>

    Of course something similar could be done with Html Helpers, but this works.
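
    As a rough sketch of that Html Helper route, the checkbox + hidden-field pattern from the working code can be wrapped in an extension method. The helper name and its reuse of CheckboxInfo are made up for illustration; this is a sketch, not a polished implementation.

        using System.Collections.Generic;
        using System.Text;
        using System.Web.Mvc;

        public static class CheckboxGroupExtensions
        {
            // Emits one checkbox plus the hidden fields that round-trip
            // IsChecked, Value and Text, as in the working ASPX above.
            public static string CheckboxGroup(this HtmlHelper html, string name, IList<CheckboxInfo> items)
            {
                var sb = new StringBuilder();
                for (int i = 0; i < items.Count; i++)
                {
                    string prefix = string.Format("{0}[{1}]", name, i);
                    sb.AppendFormat("<input type=\"checkbox\" name=\"{0}.IsChecked\" value=\"true\"{1} />",
                        prefix, items[i].IsChecked ? " checked=\"checked\"" : string.Empty);
                    sb.AppendFormat("<input type=\"hidden\" name=\"{0}.IsChecked\" value=\"false\" />", prefix);
                    sb.AppendFormat("<input type=\"hidden\" name=\"{0}.Value\" value=\"{1}\" />", prefix, html.Encode(items[i].Value));
                    sb.AppendFormat("<input type=\"hidden\" name=\"{0}.Text\" value=\"{1}\" />", prefix, html.Encode(items[i].Text));
                    sb.Append(html.Encode(items[i].Text));
                    sb.Append("<br />");
                }
                return sb.ToString();
            }
        }

    The view body would then reduce to <%= Html.CheckboxGroup("Checkboxes", Model.Checkboxes) %>.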

    Read the article

  • How to configure hibernate-tools with maven to generate hibernate.cfg.xml, *.hbm.xml, POJOs and DAOs

    - by mmm
    Hi, can anyone tell me how to force Maven to prefix the mapping .hbm.xml files in the automatically generated hibernate.cfg.xml with the package path? My general idea is, I'd like to use hibernate-tools via Maven to generate the persistence layer for my application. So, I need the hibernate.cfg.xml, then all my_table_names.hbm.xml, and at the end the POJOs generated. Yet, the hbm2java goal won't work, as I put the *.hbm.xml files into the src/main/resources/package/path/ folder but hbm2cfgxml specifies the mapping files only by table name, i.e.:

        <mapping resource="MyTableName.hbm.xml" />

    So the big question is: how can I configure hbm2cfgxml so that hibernate.cfg.xml looks like below?

        <mapping resource="package/path/MyTableName.hbm.xml" />

    My pom.xml looks like this at the moment:

        <plugin>
            <groupId>org.codehaus.mojo</groupId>
            <artifactId>hibernate3-maven-plugin</artifactId>
            <version>2.2</version>
            <executions>
                <execution>
                    <id>hbm2cfgxml</id>
                    <phase>generate-sources</phase>
                    <goals>
                        <goal>hbm2cfgxml</goal>
                    </goals>
                    <inherited>false</inherited>
                    <configuration>
                        <components>
                            <component>
                                <name>hbm2cfgxml</name>
                                <implementation>jdbcconfiguration</implementation>
                                <outputDirectory>src/main/resources/</outputDirectory>
                            </component>
                        </components>
                        <componentProperties>
                            <packagename>package.path</packagename>
                            <configurationfile>src/main/resources/hibernate.cfg.xml</configurationfile>
                        </componentProperties>
                    </configuration>
                </execution>
            </executions>
        </plugin>

    And then the second question: is there a way to tell Maven to copy resources to the target folder before executing hbm2java? At the moment I'm using mvn clean resources:resources generate-sources for that, but there must be a better way.

    Update: @Pascal: Thank you for your help. The path to the mappings works fine now; I don't know what was wrong before, though. Maybe there is some issue with writing to hibernate.cfg.xml while reading the database config from it (though the file gets updated). I've deleted the hibernate.cfg.xml file, replaced it with database.properties, and run the goals hbm2cfgxml and hbm2hbmxml. I also no longer use outputDirectory or configurationfile in those goals. As a result, hibernate.cfg.xml and all the *.hbm.xml files are generated into my target/hibernate3/generated-mappings/ folder, which is the default value. Then I updated the hbm2java goal with the following:

        <componentProperties>
            <packagename>package.name</packagename>
            <configurationfile>target/hibernate3/generated-mappings/hibernate.cfg.xml</configurationfile>
        </componentProperties>

    But then I get the following:

        [INFO] --- hibernate3-maven-plugin:2.2:hbm2java (hbm2java) @ project.persistence ---
        [INFO] using configuration task.
        [INFO] Configuration XML file loaded: file:/C:/Documents%20and%20Settings/mmm/workspace/project.persistence/target/hibernate3/generated-mappings/hibernate.cfg.xml
        12:15:17,484 INFO org.hibernate.cfg.Configuration - configuring from url: file:/C:/Documents%20and%20Settings/mmm/workspace/project.persistence/target/hibernate3/generated-mappings/hibernate.cfg.xml
        12:15:19,046 INFO org.hibernate.cfg.Configuration - Reading mappings from resource : package.name/Messages.hbm.xml
        [INFO] ------------------------------------------------------------------------
        [INFO] BUILD FAILURE
        [INFO] ------------------------------------------------------------------------
        [ERROR] Failed to execute goal org.codehaus.mojo:hibernate3-maven-plugin:2.2:hbm2java (hbm2java) on project project.persistence: Execution hbm2java of goal org.codehaus.mojo:hibernate3-maven-plugin:2.2:hbm2java failed: resource: package/name/Messages.hbm.xml not found

    How do I deal with that? Of course I could add:

        <outputDirectory>src/main/resources/package/name</outputDirectory>

    to the hbm2hbmxml goal, but I think this is not the best approach; or is it? Is there a way to keep all the generated code and resources away from the src/ folder? I assume the goal of this approach is not to generate any sources into my src/main/java or /resources folder, but to keep the generated code in the target folder. As I generally agree with this point of view, I'd like to continue with that, eventually executing hbm2dao and packaging the project to be used as a generated persistence layer component from the business layer. Is this also what you meant?

    Read the article

  • Ajax, Lizard Brain Web Design, JSF, Struts, JavaScript, Mobile Web, Flash, jQuery, GWT, Harmony at I

    - by Kim Won
    Great Indian Developer Summit 2010 – India's Biggest Polyglot Conference and Workshops for IT Software Professionals

    Bangalore, April 9, 2010: The GIDS.Web Conference and Workshops has announced its complete program of over 30 sessions on how browser and rich web technologies such as AJAX, DHTML, Mashups, Web 2.0, Enterprise 2.0, and Rich UI technologies are making money and gaining market share for some of the leading businesses in the world. The GIDS.Web track at the Great Indian Developer Summit takes place 21 and 23 April 2010, at the Indian Institute of Science in Bangalore. As one of the longest-running independent developer conferences in India, GIDS.Web at the Great Indian Developer Summit 2010 is uniquely positioned to provide a blend of practical, pragmatic and immediately applicable knowledge and a glimpse of the future of technology. During 21 and 23 April 2010, GIDS.Web offers a multi-track conference, workshops, an expo show floor, and networking opportunities.

    The first keynote at GIDS.Web is led by the leading Java EE and Ajax developer, speaker, and author Marty Hall. The best of India's Java and RIA programmers have learnt the subject from Marty's seminal books Core Servlets and JavaServer Pages (first and second editions), More Servlets and JavaServer Pages, and Core Web Programming (first and second editions) from Prentice Hall and Sun Microsystems Press. Marty's keynote address is a comparison of approaches to building rich Internet applications with Ajax. Marty says Ajax development is difficult, and there are several fundamentally different strategies for building Ajaxified Web applications. The keynote address will survey the three most important of these approaches: using an Ajax-enabled JavaScript library such as jQuery, Prototype, Scriptaculous, Dojo, or Ext/JS; using a Web framework such as JSF 2.0 or Struts 2 that has integrated Ajax support; and using the Google Web Toolkit (GWT) to build "pure Java" Ajax applications. The talk will compare and contrast these three approaches, discussing the types of applications that fit best with each option. Over the course of the summit Marty will conduct several more sessions on "Choosing an Ajax/JavaScript Toolkit: A Comparison of the Most Popular JavaScript Libraries", "Pure Java Ajax: An Overview of GWT 2.0", "Integrated Ajax Support in JSF 2.0" and "Ajax Support in the Prototype JavaScript Library".

    The second keynote, by the head of Adobe's Flash initiative in India, Ramesh Srinivasaraghavan, explores the state of the art in web application development and identifies trends that could transform the way we create and use web applications. The talk explains how the Adobe Flash Platform has fuelled this revolution with an integrated set of technologies for delivering the most compelling applications, content and video to the widest possible audience. The Director of Forum Nokia will explain how cloud computing coupled with mobile applications enables consumers to have access to powerful services and improved user experiences never before thought possible. IEEE's 2010 President-Elect Sorel Reisman's afternoon address discusses steps to improve the IT profession in India.
    Featured talks at GIDS.Web also include:

    - Web 2.0 Checklist - Deconstructing Modern Websites, Scott Davis
    - Choosing an Ajax/JavaScript Toolkit: Comparison of Popular JavaScript Libraries, Marty Hall
    - Lizard Brain Web Design, Scott Davis
    - Effective Design Processes and Resources for Mobile Web Development, Arabella David
    - NoSQL: The Shift to a Non-relational World, Nosh Petigara
    - Open Source Web Debugging Tools, Matthew McCullough
    - Building Line of Business Applications with Silverlight 4.0, Stephen Forte
    - Hadoop - Divide and Conquer, Matthew McCullough
    - Adobe Flash Catalyst for Agile Interaction Design, Harish Sivaramakrishnan
    - Using jQuery and AJAX to Build Front-ends for ASP.NET and ASP.NET MVC, Pandurang Nayak
    - First Steps to IT Heaven Through the Cloud. Part II: .WEB, Simone Brunozzi
    - Building Rich Internet Applications with SL RIA Web Services, Pandurang Nayak
    - Enriching Cloud Applications with Adobe Flash Platform, Ramesh Srinivasaraghavan
    - Payments for the Web.future, Khurram Khan and Praveen Alavilli
    - Longevity of Scalable Systems, Nishad Kamat
    - Transform yourself into a Mobile App Developer Using Web Run Time, Balagopal K S
    - Developing Multi Screen Applications on Adobe Flash Platform, Hemanth Sharma
    - Why Harmony and For Whom?, Himanshu Goyal
    - IIS Hosting Solution for ASP.net and PHP Web Sites, Nahas Mohammed
    - Building Pluggable Web applications using Django, Lakshman Prasad
    - Workshop: The 180-min AJAX and JSON Spike Class, Scott Davis
    - Workshop: Essence of Functional Programming, Venkat Subramaniam
    - Workshop: Agile Development, Tools, and Teams and Scrum Certification, Stephen Forte
    - Workshop: PHP + Adobe Flex = Killer RIA, Shyamprasad P
    - Workshop: Cloud Computing Boot Camp on the Google App Engine, Matthew McCullough
    - Workshop: Building Data Centric Applications using Adobe Flex and Java, Prashant Singh
    - Workshop: Building Your First Amazon App, Simone Brunozzi
    - Workshop: Windows Azure Deep Dive, Ramaprasanna Chellamuthu
    - Workshop: Monetizing your Apps with PayPal X Payments Platform, Khurram Khan, Praveen Alavilli
    - Workshop: User Experience Evaluation Model Walkthrough, Sanna Häiväläinen

    Sponsors of Great Indian Developer Summit 2010 include: Platinum sponsors Microsoft, Oracle, Forum Nokia and Adobe; Gold sponsors Intel and SAP; Silver sponsors Quest Software, PayPal, Telerik and AMT.

    About Great Indian Developer Summit: Great Indian Developer Summit is the gold standard for India's software developer ecosystem for gaining exposure to and evaluating new projects, tools, services, platforms, languages, software and standards. Packed with premium knowledge, action plans and advice from been-there-done-it veterans, creators, and visionaries, the 2010 edition of Great Indian Developer Summit features focused sessions, case studies, workshops and power panels that will transform you into a force to reckon with. Featuring 3 co-located conferences: GIDS.NET, GIDS.Web, GIDS.Java and an exclusive day of in-depth tutorials - GIDS.Workshops, from 20 April to 24 April at the IISc campus in Bangalore. At GIDS you'll participate in hundreds of sessions encompassing the full range of Microsoft computing, Java, Agile, RIA, Rich Web, open source/standards, languages, frameworks and platforms; practical tutorials that deep dive into technical skills and best practices; inspirational keynote presentations; an Expo Hall featuring dozens of the latest projects and products; engaging networking events; and interaction with the best and brightest speakers from around the world.
    For further information on GIDS 2010, please visit the summit on the web: http://www.developersummit.com/

    A Saltmarch Media Press Release. E: [email protected] Ph: +91 80 4005 1000

    Read the article

  • ASP.NET MVC & Silverlight development - on Ubuntu

    - by queen3
    I recently moved my working environment from Windows 7 to Ubuntu, and enjoy it every minute of my working day. My work is currently to develop ASP.NET MVC and Silverlight applications. Thus, important Windows stuff is still being run in VirtualBox (such as IIS, MSSQL, Silverlight 3, and legacy COM stuff). For now, I use Visual Studio under VirtualBox as editor/IDE/debugger. But since I prefer the Ubuntu fonts and UI much more, I'd like to move at least the editor (and ideally the IDE) to native Ubuntu.

    Things I have already done:

    - I store project files in my home folder and run Visual Studio from the \vboxsvr share. With a few tricks for ASP.NET, it works.
    - I use svn on Ubuntu.
    - I test my ASP.NET MVC site using FireFox/FireBug on Ubuntu.

    What I need on Ubuntu:

    - A SQL client to manage MSSQL. I mostly need querying, but I would miss SQL intellisense support.
    - I enjoy command-line svn a lot, but there're times when it's not enough (e.g. viewing files / checking diffs / selectively committing at the same time), so I wonder if there're any addons; I don't mean a replacement for svn, just addons for rare cases like the above.
    - I wonder if there're editors that can provide some C# intellisense. Yes, I know about MonoDevelop, but will it provide intellisense without compiling (since I'm going to compile remotely in the Windows box)?

    And a pretty big topic: what's the best way (editor/IDE) to do "folder-based" development? What I mean:

    - The project is /trunk. Everything is there. I don't want to manually add files to the project or anything like that. The project is the folder and the files down there. The main task is, of course, to edit files.
    - I need a quick way to open / search for files in the project. Like in Resharper, where I can hit Ctrl-Shift-T (IIRC) and just type a file name, and a list of matching files in the project folder and below is shown. For example, gedit has a file tree browser, but I can't quickly type XYZ to find all XYZ files there; moreover, it doesn't automatically switch focus to/from the editor, so it's more mouse-oriented; I need a 100% keyboard way.
    - I need syntax highlighting for C#/HTML/JS. Most importantly, I need HTML tag autocompletion. I can live without it, but I'll be sorry.
    - I need to run compilation remotely (via ssh I think, invoking a NAnt script which does MSBuild) and grab the results, such as errors and warnings, and I'd prefer to be able to quickly go to the error line/file.

    In short, I need to edit/search/open/svn/compile/run files in some folder. Looks like a case for the command line, but imagine I'm in /trunk and want to open file.cs inside /trunk/foo/bar/boo/far; I wouldn't want to type all this path even with bash autocompletion help. I'd prefer to enter :open file.cs, and maybe then select from a list of file.cs and file1.cs. Well, maybe I'll add more soon. Actually, I don't have exact requirements; for example, I don't even know if I need to ask for an IDE or an editor, or whether it should have svn support integrated or I should use it from the console. Do people work with files from the console (search/commit/delete/etc.) and open the editor from there, or do they work from the editor/IDE and manage files (search/commit/delete) from there? What's better? I have a feeling that vim might have everything I need. I'd like to confirm that before I spend a lot of time learning it. No, I don't want Emacs. Any other IDE? I like Eclipse, but is it good for such stuff? And after all, do you feel like it's a good way to go?
I enjoy Ubuntu, enjoy learning new stuff, and Ubuntu + Windows in VirtualBox actually runs faster than Windows7 on my mahcine, but maybe I need to keep development (editor/files management/etc) in Windows/VirtualBox only, leaving other stuff for Ubuntu?

    Read the article

  • Databind gridview with LINQ

    - by Anders Svensson
    I have two database tables: one for Users of a web site, containing the fields "UserID", "Name" and the foreign key "PageID", and the other with the fields "PageID" (here the primary key) and "Url". I want to be able to show the data in a gridview with data from both tables, and I'd like to do it with databinding in the aspx page. I'm not sure how to do this, though, and I can't find any good examples of this particular situation. Here's what I have so far: <%@ Page Title="Home Page" Language="C#" MasterPageFile="~/Site.master" AutoEventWireup="true" CodeBehind="Default.aspx.cs" Inherits="LinqBinding._Default" %> <asp:Content ID="HeaderContent" runat="server" ContentPlaceHolderID="HeadContent"> </asp:Content> <asp:Content ID="BodyContent" runat="server" ContentPlaceHolderID="MainContent"> <h2> Testing LINQ </h2> <asp:GridView ID="GridView1" runat="server" DataSourceID="LinqDataSourceUsers" AutoGenerateColumns="false"> <Columns> <asp:CommandField ShowSelectButton="True" /> <asp:BoundField DataField="UserID" HeaderText="UserID" /> <asp:BoundField DataField="Name" HeaderText="Name" /> <asp:BoundField DataField="PageID" HeaderText="PageID" /> <asp:TemplateField HeaderText="Pages"> <ItemTemplate> <asp:DropDownList ID="DropDownList1" DataSourceID="LinqDataSourcePages" SelectedValue='<%#Bind("PageID") %>' DataTextField="Url" DataValueField="PageID" runat="server"> </asp:DropDownList> </ItemTemplate> </asp:TemplateField> </Columns> </asp:GridView> <asp:LinqDataSource ID="LinqDataSourcePages" runat="server" ContextTypeName="LinqBinding.UserDataContext" EntityTypeName="" TableName="Pages"> </asp:LinqDataSource> <asp:LinqDataSource ID="LinqDataSourceUsers" runat="server" ContextTypeName="LinqBinding.UserDataContext" EntityTypeName="" TableName="Users"> </asp:LinqDataSource> </asp:Content> But this only works in so far as it gets the user table into the gridview (that's not a problem), and I get the page data into the dropdown, but here's the problem: I of course get ALL the page data in there, not just the pages for each user on each row. So how do I put some sort of "where" constraint on the dropdown for each row to only show the pages for the user in that row? (Also, to be honest, I'm not sure I'm getting the foreign key relationship right, because I'm not too used to working with relationships.) EDIT: I think I have set up the relationship incorrectly. I keep getting the message that "Pages" doesn't exist as a property on the User object. And I guess it can't, since the relationship right now is one-way. So I tried to create a many-to-many relationship. Again, my database knowledge is a bit limited, but I added a so-called "junction table" with the fields UserID and PageID, same as the other tables' primary keys. I wasn't able to make both of these primary keys in the junction table though (which it looked like some people had in examples I've seen... but since it wasn't possible, I guessed they shouldn't be). Anyway, I created a relationship from each table and created new LINQ classes from that. But then what do I do? I set the junction table as the Linq data source, since I guessed I had to do this to access both tables, but that doesn't work. Then it complains there is no Name property on that object. So how do I access the related tables?
Here's what I have now with the many-to-many relationship: <%@ Page Title="Home Page" Language="C#" MasterPageFile="~/Site.master" AutoEventWireup="true" CodeBehind="Default.aspx.cs" Inherits="ManyToMany._Default" %> <asp:Content ID="HeaderContent" runat="server" ContentPlaceHolderID="HeadContent"> </asp:Content> <asp:Content ID="BodyContent" runat="server" ContentPlaceHolderID="MainContent"> <h2> Many to many LINQ </h2> <asp:GridView ID="GridView1" runat="server" DataSourceID="LinqDataSource1" AutoGenerateColumns="false"> <Columns> <asp:CommandField ShowSelectButton="True" /> <asp:BoundField DataField="UserID" HeaderText="UserID" /> <asp:BoundField DataField="Name" HeaderText="Name" /> <asp:BoundField DataField="PageID" HeaderText="PageID" /> <asp:TemplateField HeaderText="Pages"> <ItemTemplate> <asp:DropDownList ID="DropDownList1" DataSource='<%#Eval("Pages") %>' SelectedValue='<%#Bind("PageID") %>' DataTextField="Url" DataValueField="PageID" runat="server"> </asp:DropDownList> </ItemTemplate> </asp:TemplateField> </Columns> </asp:GridView> <asp:LinqDataSource ID="LinqDataSource1" runat="server" ContextTypeName="ManyToMany.UserPageDataContext" EntityTypeName="" TableName="UserPages"> </asp:LinqDataSource> </asp:Content>
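    One way to scope the dropdown to each row is to skip the second LinqDataSource and fill each DropDownList from the grid's RowDataBound event instead. A minimal code-behind sketch, assuming the LINQ-to-SQL designer generated User, Page and UserPage entities and exposed the junction rows as a UserPages collection on User (wire it up with OnRowDataBound="GridView1_RowDataBound" on the GridView, and drop the SelectedValue binding from the template since the list is now filled manually):

        // Code-behind sketch; the entity and property names are assumptions.
        using System;
        using System.Linq;
        using System.Web.UI.WebControls;

        public partial class _Default : System.Web.UI.Page
        {
            protected void GridView1_RowDataBound(object sender, GridViewRowEventArgs e)
            {
                if (e.Row.RowType != DataControlRowType.DataRow) return;

                // The row's data item is the User entity supplied by LinqDataSourceUsers.
                var user = (User)e.Row.DataItem;
                var ddl = (DropDownList)e.Row.FindControl("DropDownList1");

                // Walk the junction table so the list holds only this user's pages.
                ddl.DataSource = user.UserPages.Select(up => up.Page).ToList();
                ddl.DataTextField = "Url";
                ddl.DataValueField = "PageID";
                ddl.DataBind();
            }
        }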

    Read the article

  • LINQ 4 XML - What is the proper way to query deep in the tree structure?

    - by Keith Barrows
    I have an XML structure that is 4 deep: <?xml version="1.0" encoding="utf-8"?> <EmailRuleList xmlns:xsd="EmailRules.xsd"> <TargetPST name="Tech Communities"> <Parse emailAsList="true" useJustDomain="false" fromAddress="false" toAddress="true"> <EmailRule address="@aspadvice.com" folder="Lists, ASP" saveAttachments="false" /> <EmailRule address="@sqladvice.com" folder="Lists, SQL" saveAttachments="false" /> <EmailRule address="@xmladvice.com" folder="Lists, XML" saveAttachments="false" /> </Parse> <Parse emailAsList="false" useJustDomain="false" fromAddress="false" toAddress="true"> <EmailRule address="[email protected]" folder="Special Interest Groups|Northern Colorado Architects Group" saveAttachments="false" /> <EmailRule address="[email protected]" folder="Support|SpamBayes" saveAttachments="false" /> </Parse> <Parse emailAsList="false" useJustDomain="false" fromAddress="true" toAddress="false"> <EmailRule address="[email protected]" folder="Support|GoDaddy" saveAttachments="false" /> <EmailRule address="[email protected]" folder="Support|No-IP.com" saveAttachments="false" /> <EmailRule address="[email protected]" folder="Discussions|Orchard Project" saveAttachments="false" /> </Parse> <Parse emailAsList="false" useJustDomain="true" fromAddress="true" toAddress="false"> <EmailRule address="@agilejournal.com" folder="Newsletters|Agile Journal" saveAttachments="false"/> <EmailRule address="@axosoft.ccsend.com" folder="Newsletters|Axosoft Newsletter" saveAttachments="false"/> <EmailRule address="@axosoft.com" folder="Newsletters|Axosoft Newsletter" saveAttachments="false"/> <EmailRule address="@cmcrossroads.com" folder="Newsletters|CM Crossroads" saveAttachments="false" /> <EmailRule address="@urbancode.com" folder="Newsletters|Urbancode" saveAttachments="false" /> <EmailRule address="@urbancode.ccsend.com" folder="Newsletters|Urbancode" saveAttachments="false" /> <EmailRule address="@Infragistics.com" folder="Newsletters|Infragistics" saveAttachments="false" /> <EmailRule address="@zdnet.online.com" folder="Newsletters|ZDNet Tech Update Today" saveAttachments="false" /> <EmailRule address="@sqlservercentral.com" folder="Newsletters|SQLServerCentral.com" saveAttachments="false" /> <EmailRule address="@simple-talk.com" folder="Newsletters|Simple-Talk Newsletter" saveAttachments="false" /> </Parse> </TargetPST> <TargetPST name="[Sharpen the Saw]"> <Parse emailAsList="false" useJustDomain="false" fromAddress="false" toAddress="true"> <EmailRule address="[email protected]" folder="Head Geek|Job Alerts" saveAttachments="false" /> <EmailRule address="[email protected]" folder="Social|LinkedIn USMC" saveAttachments="false"/> </Parse> <Parse emailAsList="false" useJustDomain="false" fromAddress="true" toAddress="false"> <EmailRule address="[email protected]" folder="Head Geek|Job Alerts" saveAttachments="false" /> <EmailRule address="[email protected]" folder="Head Geek|Job Alerts" saveAttachments="false" /> <EmailRule address="[email protected]" folder="Social|Cruise Critic" saveAttachments="false"/> </Parse> <Parse emailAsList="false" useJustDomain="true" fromAddress="true" toAddress="false"> <EmailRule address="@moody.edu" folder="Social|5 Love Languages" saveAttachments="false" /> <EmailRule address="@postmaster.twitter.com" folder="Social|Twitter" saveAttachments="false"/> <EmailRule address="@diabetes.org" folder="Physical|American Diabetes Association" saveAttachments="false"/> <EmailRule address="@membership.webshots.com" folder="Social|Webshots" saveAttachments="false"/> </Parse> 
    </TargetPST> </EmailRuleList> Now, I have both a FromAddress and a ToAddress that are parsed from an incoming email. I would like to do a LINQ query against a class set that was deserialized from this XML. For instance: ToAddress = [email protected] FromAddress = [email protected] Query: Get EmailRule.Include(Parse).Include(TargetPST) where address == ToAddress AND Parse.ToAddress==true AND Parse.useJustDomain==false Get EmailRule.Include(Parse).Include(TargetPST) where address == [ToAddress Domain Only] AND Parse.ToAddress==true AND Parse.useJustDomain==true Get EmailRule.Include(Parse).Include(TargetPST) where address == FromAddress AND Parse.FromAddress==true AND Parse.useJustDomain==false Get EmailRule.Include(Parse).Include(TargetPST) where address == [FromAddress Domain Only] AND Parse.FromAddress==true AND Parse.useJustDomain==true I am having a hard time figuring this LINQ query out. I can, of course, loop on all the bits in the XML like so (includes deserialization into objects): XmlSerializer s = new XmlSerializer(typeof(EmailRuleList)); TextReader r = new StreamReader(path); _emailRuleList = (EmailRuleList)s.Deserialize(r); TargetPST[] PSTList = _emailRuleList.Items; foreach (TargetPST targetPST in PSTList) { olRoot = GetRootFolder(targetPST.name); if (olRoot != null) { Parse[] ParseList = targetPST.Items; foreach (Parse parseRules in ParseList) { EmailRule[] EmailRuleList = parseRules.Items; foreach (EmailRule targetFolders in EmailRuleList) { } } } } However, this means going through all these loops for each and every address. It makes more sense to me to query against the objects. Any tips appreciated!
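    For what it's worth, once the XML is deserialized, chained from clauses (SelectMany under the hood) can flatten the three levels while keeping references to the parent objects, which covers all four lookups above by toggling the two flags. A sketch only; the Items arrays come from the question's own loop code, but the flag and attribute property names (toAddress, useJustDomain, address, folder) are assumptions about what the serializer classes expose:

        using System;
        using System.Linq;

        static class RuleFinder
        {
            // Find every rule matching an incoming address, with its parents in tow.
            public static void FindRules(EmailRuleList list, string incomingTo)
            {
                var hits =
                    from pst in list.Items
                    from parse in pst.Items
                    from rule in parse.Items
                    where parse.toAddress          // flip to parse.fromAddress for the FROM cases
                       && !parse.useJustDomain     // flip for the domain-only cases
                       && string.Equals(rule.address, incomingTo, StringComparison.OrdinalIgnoreCase)
                    select new { TargetPST = pst, Parse = parse, EmailRule = rule };

                foreach (var hit in hits)
                    Console.WriteLine("{0} -> {1}", hit.TargetPST.name, hit.EmailRule.folder);
            }
        }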

    Read the article

  • SQL CLR Assembly Error 80131051 when late binding to a registered C# COM .dll

    - by Shanubus
    I must have hit an unusual one, because I can't find any reference to this specific failing anywhere... Scenario: I have a legacy SQL function used to transform (encrypt) data. This function is called from within many stored procedures used by multiple applications. I say this because the obvious answer of 'just call it from your code' is not really an option (or at least one I'd prefer not to explore). The legacy function used sp_OA with an ActiveX dll on SQL2000 to perform its work. The new function is targeted at SQL2008 x64. I am ditching the sp_OA call in favor of a CLR assembly, and am getting rid of the ActiveX dll and using a COM+ .dll (3rd party) to perform the same work. This 3rd party COM+ is required to be used based on the spec given to me, so I can't get rid of this piece either. Problem: After multiple attempts at getting this to work, I have eliminated the following approaches: 1) Create a Sql Assembly to call the local COM+ directly -- Can't do this, as it requires a reference to System.EnterpriseServices. Including this requires that a whole slew of unsupported assemblies be registered, which I don't want. The COM+ requires its methods to be accessed via an Interface, so my attempts at late binding to it directly have not been successful (late binding would allow me to drop the unsupported references). 2) Create a Sql Assembly which references a C# class library that then calls the COM+. -- Same issue as #1; since the referenced dll uses System.EnterpriseServices, it will be added as a dependency when referenced in the Sql Assembly, again trying to load all the unsupported libraries. 3) Create a Sql Assembly which late binds to an ActiveX COM dll that calls the COM+. -- Worked in my dev environment, but I can't go to x64 in production with ActiveX dlls written in VB6 (not to mention I hate backtracking anyway)... again failure... I am now onto an approach that is almost working, with of course one last hangup. I now have a Sql Assembly that late binds to a C# COM dll, eliminating the need to include System.EnterpriseServices and eliminating the need to reference the C# COM in the SqlAssembly itself. The C# COM does reference System.EnterpriseServices to call the COM+, but since I am late binding to it from the SqlAssembly, I bypass the need for Sql to actually load them as referenced assemblies. It works in the debugger. It works on my dev box when the SqlAssembly dll is referenced in a test console app and called directly. It installs to Sql2008 just fine. Executing the actual UDF works, but returns no data due to a failure reported from the late-bound dll! So the SqlAssembly is instantiated just fine. It actually fails on its late binding to the C# COM, which is working from a test console app on the same machine. It appears to be a difference in behavior based on whether it is called from within the SQL UDF or not. Since it is working on the same box from my console app, I am assuming it's on the SQL side. My steps to install were:
    --Install the COM+ dll and ensure it can be called successfully (as from within the console app) --Register the C# COM dll (which calls the COM+) and get it into the GAC (again proven to be working from the console app) --Create my asymmetric key CREATE ASYMMETRIC KEY SqlExKey FROM EXECUTABLE FILE = 'D:\SqlEx.dll' CREATE LOGIN SqlExLogin FROM ASYMMETRIC KEY SqlExKey GRANT UNSAFE ASSEMBLY TO SqlExLogin GO --Add the assembly CREATE ASSEMBLY SqlEx FROM 'D:\SqlEx.dll' WITH PERMISSION_SET = UNSAFE; GO --Create the function CREATE FUNCTION dbo.f_SqlEx( @clearText [nvarchar](512) ) RETURNS nvarchar(512) WITH EXECUTE AS CALLER AS EXTERNAL NAME SqlEx.[SqlEx.SqlEx].Ex GO With all that done, I can now call my function SELECT dbo.f_SqlEx('test') but I get this error in the event log... Retrieving the COM class factory for component with CLSID {F69D6320-5884-323F-936A-7657946604BE} failed due to the following error: 80131051. I can't really provide direct code examples, due to internal security implications; but all the code itself seems to work, so I suspect perms or something of the like... I just find it odd that I can't find any reference to error 80131051. If someone out there believes some 'indirect' code samples will help, I will be happy to provide them. Any assistance is appreciated.
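    For reference, the late-binding pattern being described looks roughly like the sketch below (the ProgID and method name here are made up); dispatching through IDispatch at runtime is what keeps System.EnterpriseServices out of the assembly's static references:

        using System;
        using System.Reflection;

        public static class LateBoundCrypto
        {
            // Hypothetical ProgID of the registered C# COM wrapper.
            private const string ProgId = "MyCompany.CryptoWrapper";

            public static string Transform(string clearText)
            {
                Type comType = Type.GetTypeFromProgID(ProgId, throwOnError: true);
                object instance = Activator.CreateInstance(comType);

                // InvokeMember dispatches through IDispatch, so no interop
                // assembly reference is needed at compile time.
                return (string)comType.InvokeMember(
                    "Transform",                 // hypothetical method name
                    BindingFlags.InvokeMethod,
                    null,
                    instance,
                    new object[] { clearText });
            }
        }

    One thing worth checking (an assumption, not a diagnosis): the SQL Server service typically runs under a different account than the interactive user, and as a 64-bit process, so a per-user or 32-bit-only COM registration that satisfies the console test would not be visible to it.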

    Read the article

  • Delphi Pascal - Using SetFilePointerEx and GetFileSizeEx, Getting Physical Media exact size when reading as a file

    - by SuicideClutchX2
    I am having trouble understanding how to declare GetFileSizeEx and SetFilePointerEx in Delphi 2009 so that I can use them, since they are not in the RTL or Windows.pas. I was able to compile with the following: function GetFileSizeEx(hFile: THandle; lpFileSizeHigh: Pointer): DWORD; external 'kernel32'; Then using GetFileSizeEx(PD, Pointer(DriveSize)); to get the size. But I could not get it to work; the disk handle I am using is valid, and I have had no problem reading the data or working under the 2GB mark with the older APIs. GetFileSize of course returns 4294967295. I have had greater trouble trying to use SetFilePointerEx with the data types it uses. The overall project needs to read the data from a flash card, which is not a problem at all; I can do this. My problem is that I cannot find the length or size of the media I will be reading. I have code I have used in the past to do this with media under 2GB. But now that I need to read media over 2GB it is a problem. If that's still not clear: I am dumping a card with all its data, including the boot record, etc. This is the code I would normally use to read from the physical disk to grab, say, the boot record and dump it to a file: SetFilePointer(PD,0,nil,FILE_BEGIN); SetLength(Buffer,512); ReadFile(PD,Buffer[0],512,BytesReturned,nil); I just need to figure out how to find the end of an 8GB card and so on, as well as how to set a file pointer beyond the 2GB barrier. Any help with the external declarations, as well as understanding the values that SetFilePointerEx uses (I do not understand the whole high/low thing), would be of great help. var Form1: TForm1; function GetFileSizeEx(hFile: THandle; var FileSize: Int64): DWORD; stdcall; external 'kernel32'; implementation {$R *.dfm} function GetLD(Drive: Char): Cardinal; var Buffer : String; begin Buffer := Format('\\.\%s:',[Drive]); Result := CreateFile(PChar(Buffer),GENERIC_READ Or GENERIC_WRITE,FILE_SHARE_READ,nil,OPEN_EXISTING,0,0); If Result = INVALID_HANDLE_VALUE Then begin Result := CreateFile(PChar(Buffer),GENERIC_READ,FILE_SHARE_READ,nil,OPEN_EXISTING,0,0); end; end; function GetPD(Drive: Byte): Cardinal; var Buffer : String; begin If Drive = 0 Then begin Result := INVALID_HANDLE_VALUE; Exit; end; Buffer := Format('\\.\PHYSICALDRIVE%d',[Drive]); Result := CreateFile(PChar(Buffer),GENERIC_READ Or GENERIC_WRITE,FILE_SHARE_READ,nil,OPEN_EXISTING,0,0); If Result = INVALID_HANDLE_VALUE Then begin Result := CreateFile(PChar(Buffer),GENERIC_READ,FILE_SHARE_READ,nil,OPEN_EXISTING,0,0); end; end; function GetPhysicalDiskNumber(Drive: Char): Byte; var LD : DWORD; DiskExtents : PVolumeDiskExtents; DiskExtent : TDiskExtent; BytesReturned : Cardinal; begin Result := 0; LD := GetLD(Drive); If LD = INVALID_HANDLE_VALUE Then Exit; Try DiskExtents := AllocMem(Max_Path); DeviceIOControl(LD,IOCTL_VOLUME_GET_VOLUME_DISK_EXTENTS,nil,0,DiskExtents,Max_Path,BytesReturned,nil); If DiskExtents^.NumberOfDiskExtents > 0 Then begin DiskExtent := DiskExtents^.Extents[0]; Result := DiskExtent.DiskNumber; end; Finally CloseHandle(LD); end; end; procedure TForm1.Button1Click(Sender: TObject); var PD : DWORD; BytesReturned : Cardinal; Buffer : Array Of Byte; myFile: File; DriveSize: Int64; begin PD := GetPD(GetPhysicalDiskNumber(Edit1.Text[1])); If PD = INVALID_HANDLE_VALUE Then Exit; Try GetFileSizeEx(PD, DriveSize); //SetFilePointer(PD,0,nil,FILE_BEGIN); //SetLength(Buffer,512); //ZeroMemory(@Buffer,SizeOf(Buffer)); //ReadFile(PD,Buffer[0],512,BytesReturned,nil); //AssignFile(myFile, 'StickDump.bin');
//ReWrite(myFile, 512); //BlockWrite(myFile, Buffer[0], 1); //CloseFile(myFile); Finally CloseHandle(PD); End; end;

    Read the article

  • Co-Authors Wordpress Plugin: coauthors_wp_list_authors function not working correctly

    - by rayne
    The Co-Authors Plus Plugin for WordPress has a very annoying bug. The custom function coauthors_wp_list_authors lists authors the same way the WordPress function wp_list_authors does, but it does not include authors in the list who don't have a post of their own - if they have only entries in which they are listed as co-author but not as author, they will not be included in the list. That, of course, misses a very important point. I've looked at the faulty SQL statement, but unfortunately my knowledge of advanced SQL, especially when it comes to JOINs, as well as my knowledge of the wp database structure, are too limited, and I remain clueless. There is a topic in the WP support forum, but unfortunately the information there is very outdated and the fix is not applicable anymore. I couldn't find any other, more current solutions on the internet. I'd be glad if someone here could help fix the SQL statement so it also lists co-authors who don't have posts where they're the sole author, as well as display the correct post count for all authors. Here's the entire function for reference purposes, with a comment marking the SQL statement: function coauthors_wp_list_authors($args = '') { global $wpdb, $coauthors_plus; $defaults = array( 'optioncount' => false, 'exclude_admin' => true, 'show_fullname' => false, 'hide_empty' => true, 'feed' => '', 'feed_image' => '', 'feed_type' => '', 'echo' => true, 'style' => 'list', 'html' => true ); $r = wp_parse_args( $args, $defaults ); extract($r, EXTR_SKIP); $return = ''; $authors = $wpdb->get_results("SELECT ID, user_nicename from $wpdb->users " . ($exclude_admin ? "WHERE user_login <> 'admin' " : '') . "ORDER BY display_name"); $author_count = array(); # this is the SQL statement which doesn't work correctly: $query = "SELECT DISTINCT $wpdb->users.ID AS post_author, $wpdb->terms.name AS user_name, $wpdb->term_taxonomy.count AS count"; $query .= " FROM $wpdb->posts"; $query .= " INNER JOIN $wpdb->term_relationships ON ($wpdb->posts.ID = $wpdb->term_relationships.object_id)"; $query .= " INNER JOIN $wpdb->term_taxonomy ON ($wpdb->term_relationships.term_taxonomy_id = $wpdb->term_taxonomy.term_taxonomy_id)"; $query .= " INNER JOIN $wpdb->terms ON ($wpdb->term_taxonomy.term_id = $wpdb->terms.term_id)"; $query .= " INNER JOIN $wpdb->users ON ($wpdb->terms.name = $wpdb->users.user_login)"; $query .= " WHERE post_type = 'post' AND " . get_private_posts_cap_sql( 'post' ); $query .= " AND $wpdb->term_taxonomy.taxonomy = '$coauthors_plus->coauthor_taxonomy'"; $query .= " GROUP BY post_author"; foreach ((array) $wpdb->get_results($query) as $row) { $author_count[$row->post_author] = $row->count; } foreach ( (array) $authors as $author ) { $link = ''; $author = get_userdata( $author->ID ); $posts = (isset($author_count[$author->ID])) ? $author_count[$author->ID] : 0; $name = $author->display_name; if ( $show_fullname && ($author->first_name != '' && $author->last_name != '') ) $name = "$author->first_name $author->last_name"; if( !$html ) { if ( $posts == 0 ) { if ( ! $hide_empty ) $return .= $name . ', '; } else $return .= $name . ', '; continue; } if ( !($posts == 0 && $hide_empty) && 'list' == $style ) $return .= '<li>'; if ( $posts == 0 ) { if ( ! $hide_empty ) $link = $name; } else { $link = '<a href="' . get_author_posts_url($author->ID, $author->user_nicename) . '" title="' . esc_attr( sprintf(__("Posts by %s", 'co-authors-plus'), $author->display_name) ) . '">' . $name . '</a>'; if ( (! empty($feed_image)) || (!
empty($feed)) ) { $link .= ' '; if (empty($feed_image)) $link .= '('; $link .= '<a href="' . get_author_feed_link($author->ID) . '"'; if ( !empty($feed) ) { $title = ' title="' . esc_attr($feed) . '"'; $alt = ' alt="' . esc_attr($feed) . '"'; $name = $feed; $link .= $title; } $link .= '>'; if ( !empty($feed_image) ) $link .= "<img src=\"" . esc_url($feed_image) . "\" style=\"border: none;\"$alt$title" . ' />'; else $link .= $name; $link .= '</a>'; if ( empty($feed_image) ) $link .= ')'; } if ( $optioncount ) $link .= ' ('. $posts . ')'; } if ( !($posts == 0 && $hide_empty) && 'list' == $style ) $return .= $link . '</li>'; else if ( ! $hide_empty ) $return .= $link . ', '; } $return = trim($return, ', '); if ( ! $echo ) return $return; echo $return; }

    Read the article

  • IOC Container Handling State Params in Non-Default Constructor

    - by Mystagogue
    For the purpose of this discussion, there are two kinds of parameters an object constructor might take: state dependencies or service dependencies. Supplying a service dependency with an IOC container is easy: DI takes over. But in contrast, state dependencies are usually only known to the client. That is, the object requestor. It turns out that having a client supply the state params through an IOC Container is quite painful. I will show several different ways to do this, all of which have big problems, and ask the community if there is another option I'm missing. Let's begin: Before I added an IOC container to my project code, I started with a class like this: class Foobar { //parameters are state dependencies, not service dependencies public Foobar(string alpha, int omega){...}; //...other stuff } I decide to add a Logger service dependency to the Foobar class, which perhaps I'll provide through DI: class Foobar { public Foobar(string alpha, int omega, ILogger log){...}; //...other stuff } But then I'm also told I need to make class Foobar itself "swappable." That is, I'm required to service-locate a Foobar instance. I add a new interface into the mix: class Foobar : IFoobar { public Foobar(string alpha, int omega, ILogger log){...}; //...other stuff } When I make the service locator call, it will DI the ILogger service dependency for me. Unfortunately the same is not true of the state dependencies Alpha and Omega. Some containers offer a syntax to address this: //Unity 2.0 pseudo-ish code: myContainer.Resolve<IFoobar>( new parameterOverride[] { {"alpha", "one"}, {"omega",2} } ); I like the feature, but I don't like that it is untyped and not evident to the developer what parameters must be passed (via intellisense, etc). So I look at another solution: //This is a "boiler plate" heavy approach! class Foobar : IFoobar { public Foobar (string alpha, int omega){...}; //...stuff } class FoobarFactory : IFoobarFactory { IFoobar IFoobarFactory.Create(string alpha, int omega){ return new Foobar(alpha, omega); } } //fetch it... myContainer.Resolve<IFoobarFactory>().Create("one", 2); The above solves the type-safety and intellisense problem, but it (1) forced class Foobar to fetch an ILogger through a service locator rather than DI and (2) it requires me to make a bunch of boiler-plate (XXXFactory, IXXXFactory) for all varieties of Foobar implementations I might use. Should I decide to go with a pure service locator approach, it may not be a problem. But I still can't stand all the boiler-plate needed to make this work. So then I try this: //code named "concrete creator" class Foobar : IFoobar { public Foobar(string alpha, int omega, ILogger log){...}; static IFoobar Create(string alpha, int omega){ //unity 2.0 pseudo-ish code. Assume a common //service locator, or singleton holds the container... return Container.Resolve<IFoobar>( new parameterOverride[] {{"alpha", alpha},{"omega", omega} } ); } //Get my instance: Foobar.Create("alpha",2); I actually don't mind that I'm using the concrete "Foobar" class to create an IFoobar. It represents a base concept that I don't expect to change in my code. I also don't mind the lack of type-safety in the static "Create", because it is now encapsulated. My intellisense is working too! Any concrete instance made this way will ignore the supplied state params if they don't apply (a Unity 2.0 behavior). Perhaps a different concrete implementation "FooFoobar" might have a formal arg name mismatch, but I'm still pretty happy with it.
    But the big problem with this approach is that it only works effectively with Unity 2.0 (a mismatched parameter in Structure Map will throw an exception). So it is good only if I stay with Unity. The problem is, I'm beginning to like Structure Map a lot more. So now I go onto yet another option: class Foobar : IFoobar, IFoobarInit { public Foobar(ILogger log){...}; IFoobar IFoobarInit.Initialize(string alpha, int omega){ this.alpha = alpha; this.omega = omega; return this; } } //now create it... IFoobar foo = myContainer.Resolve<IFoobarInit>().Initialize("one", 2) Now with this I've got a somewhat nice compromise with the other approaches: (1) my arguments are type-safe / intellisense aware, (2) I have a choice of fetching the ILogger via DI (shown above) or a service locator, (3) there is no need to make one or more separate concrete FoobarFactory classes (contrast with the verbose "boiler-plate" example code earlier), and (4) it reasonably upholds the principle "make interfaces easy to use correctly, and hard to use incorrectly." At least it arguably is no worse than the alternatives previously discussed. One acceptance barrier yet remains: I also want to apply "design by contract." Every sample I presented was intentionally favoring constructor injection (for state dependencies) because I want to preserve "invariant" support as most commonly practiced. Namely, the invariant is established when the constructor completes. In the sample above, the invariant is not established when object construction completes. As long as I'm doing home-grown "design by contract" I could just tell developers not to test the invariant until the Initialize(...) method is called. But more to the point, when .net 4.0 comes out I want to use its "code contract" support for design by contract. From what I read, it will not be compatible with this last approach. Curses! Of course it also occurs to me that my entire philosophy is off. Perhaps I'd be told that conjuring a Foobar : IFoobar via a service locator implies that it is a service - and services only have other service dependencies, they don't have state dependencies (such as the Alpha and Omega of these examples). I'm open to listening to such philosophical matters as well, but I'd also like to know what semi-authoritative reference to read that would steer me down that thought path. So now I turn it to the community. What approach should I consider that I haven't yet? Must I really believe I've exhausted my options?
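    For comparison, a strongly-typed factory delegate can collapse each XXXFactory/IXXXFactory pair into a single registration while keeping the state params type-safe and intellisense-visible, and the constructor still establishes the invariant. A sketch against Unity 2.0 (RegisterType, RegisterInstance and Resolve are real Unity calls; FileLogger is a hypothetical ILogger implementation):

        using System;
        using Microsoft.Practices.Unity;

        public static class Bootstrap
        {
            public static IUnityContainer Configure()
            {
                var container = new UnityContainer();
                container.RegisterType<ILogger, FileLogger>(); // FileLogger is hypothetical

                // One delegate registration replaces the factory boiler-plate:
                // state dependencies come from the caller, while the service
                // dependency is resolved by the container.
                container.RegisterInstance<Func<string, int, IFoobar>>(
                    (alpha, omega) => new Foobar(alpha, omega, container.Resolve<ILogger>()));

                return container;
            }
        }

    Client code then asks for the delegate instead of the concrete type: var create = container.Resolve<Func<string, int, IFoobar>>(); IFoobar foo = create("one", 2);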

    Read the article

  • Django app that can provide user friendly, multiple / mass file upload functionality to other apps

    - by hopla
    Hi, I'm going to be honest: this is a question I asked on the Django-Users mailinglist last week. Since I didn't get any replies there yet, I'm reposting it on Stack Overflow in the hope that it gets more attention here. I want to create an app that makes it easy to do user friendly, multiple / mass file upload in your own apps. By user friendly I mean uploads like Gmail, Flickr, ..., where the user can select multiple files at once in the browse file dialog. The files are then uploaded sequentially or in parallel, and a nice overview of the selected files is shown on the page with a progress bar next to them. A 'Cancel' upload button is also a possible option. All that niceness is usually solved by using a Flash object. Complete solutions are out there for the client side, like: SWFUpload http://swfupload.org/ , FancyUpload http://digitarald.de/project/fancyupload/ , YUI 2 Uploader http://developer.yahoo.com/yui/uploader/ and probably many more. Of course, the trick is getting those solutions integrated in your project. Especially in a framework like Django, doubly so if you want it to be reusable. So, I have a few ideas, but I'm neither an expert on Django nor on Flash based upload solutions. I'll share my ideas here in the hope of getting some feedback from more knowledgeable and experienced people. (Or even just some 'I want this too!' replies :) ) You will notice that I make a few assumptions: this is to keep the (initial) scope of the application under control. These assumptions are of course debatable: All right, my ideas so far: If you want to mass upload multiple files, you are going to have a model to contain each file in. I.e. the model will contain one FileField or one ImageField. Models with multiple (but of course finite) FileFields/ImageFields are not in need of easy mass uploading imho: if you have a model with 100 FileFields you are doing something wrong :) Examples where you would want my envisioned kind of mass upload: An app that has just one model 'Brochure' with a file field, a title field (dynamically created from the filename) and a date_added field. A photo gallery app with models 'Gallery' and 'Photo'. You pick a Gallery to add pictures to, upload the pictures, and new Photo objects are created and foreign keys set to the chosen Gallery. It would be nice to be able to configure or extend the app for your favorite Flash upload solution. We can pick one of the three above as a default, but implement the app so that people can easily add additional implementations (kinda like Django can use multiple databases). Let it be agnostic to any particular client side solution. If we need to pick one to start with, maybe pick the one with the smallest footprint? (smallest download of client side stuff) The Flash based solutions asynchronously (and either sequentially or in parallel) POST the files to a url. I suggest that url be local to our generic app (so it's the same for every app that uses our app). That url will go to a view provided by our generic app. The view will do the following: create a new model instance, add the file, OPTIONALLY DO EXTRA STUFF and save the instance. DO EXTRA STUFF is code that the app that uses our app wants to run. It doesn't have to provide any extra code; if the model has just a FileField/ImageField, the standard view code will do the job. But most apps will want to do extra stuff, I think, like filling in the other fields: title, date_added, foreignkeys, manytomany, ...
    I have not yet thought about a mechanism for DO EXTRA STUFF. Just wrapping the generic app view came to mind, but that is not developer friendly, since you would have to write your own url pattern and your own view. Then you have to tell the Flash solutions to use a new url, etc... I think something like signals could be used here? Forms/Admin: I'm still very sketchy on how all this could best be integrated in the Admin or generic Django forms/widgets/... (and this is where my lack of Django experience shows): In the case of the Gallery/Photo app: you could provide a mass Photo upload widget on the Gallery detail form. But what if the Gallery instance is not saved yet? The file upload view won't be able to set the foreignkeys on the Photo instances. I see that the auth app, when you create a user, first asks for username and password and only then provides you with a bigger form to fill in the email address, pick roles, etc. We could do something like that. In the case of an app with just one model: how do you provide a form in the Django admin to do your mass upload? You can't do it with the detail form of your model; that's just for one model instance. There are probably dozens more questions that need to be answered before I can even start on this app. So please tell me what you think! Give me input! What do you like? What not? What would you do differently? Is this idea solid? Where is it not? Thank you!

    Read the article

  • move element li into ul with jquery

    - by ron
    Hi everybody, I'm looking for a solution to move an element (an <li>) into a <ul>. I'm using jQuery, and I leave the code here; I tried different things but couldn't get it working. This is the big menu: <ul class="sf-menu"> <li><a href="">Student Centre</a> <ul> <li><a href="9">STUDENT CENTRAL WEBSITE</a></li> <li><a href="10">STUDENT CENTRAL EMAIL</a></li> <li><a href="11">CCM STUDENT SURVIAL TIPS</a></li> <li><a href="12">VET TUTTTION ASSURANSE</a></li> <li><a href="13">WHAT GOING ON AT CCM</a></li> <li><a href="14">IMPORTANT STUDENTS NOTICE</a></li> </ul> </li> <li><a href="">Research</a> <ul> <li><a href="_16">WHAT IS HOLISTIC KINESIOLOGY ?</a></li> <li><a href="_17">TRANF. CHILDREN W/ LEARNING DIFFICULTIES</a></li> <li><a href="_18">HEALING WITH HOLISTIC KINESIOLOGY</a></li> <li><a href="_19">UNDERSTANDING ASPERGER'S SYNDROME</a></li> <li><a href="_20">DAVID CORBY THE DIRECTOR OF CCM</a></li> <li><a href="_21">HELPING PEOPLE CREATE THEIR OWN MIRACLES</a></li> <li><a href="_22">MAGNESIUM AND COLLOIDAL MINERALS</a></li> <li><a href="_23">BRAIN ENERGETICS CHAKRAS AND NADIS</a></li> <li><a href="_24">KINESIOLOGY FAQ'S</a></li> </ul> </li> <li><a href="25">Contact Us</a></li> <li><a href="26">A - Z</a></li> </ul> And I need to add this <li> to that <ul>, but I need to put it in second place, right after the first item, not nested: <li id="faculty"> <a href="#">Faculty Courses </a> <ul> <li><a href="">INTENSE SHORT COURSES</a><ul> <li><a href="_59">CRYSTAL KINESIOLOGY ONE</a></li> <li><a href="_60">APPLIED PHYSIOLOGY</a></li> <li><a href="_61">VIBRATIONAL HEALING SYSTEMS 1</a></li> <li><a href="_62">VIBRATIONAL HEALING SYSTEMS 2</a></li> <li><a href="_63">VIBRATIONAL HEALING SYSTEMS 3</a></li> <li><a href="_64">VIBRATIONAL HEALING SYSTEMS 6</a></li> <li><a href="_65">NUTRITIONAL KINESIOLOGY</a></li> <li><a href="_66">QUANTUM HARMONICS</a></li><li><a href="_67">CHAKRA HOLOGRAM</a></li> <li><a href="_68">CLINICAL APPLICATIONS OF KINESIOLOGY</a></li> <li><a href="_69">COUNSELLING KINESIOLOGY</a></li> <li><a href="_70">HARMONISING CHI FLOW</a></li> </ul> </li> <li><a href="=53">FREE INTRUCTION COURSE DAYS</a> <ul> <li><a href="=53_56">HOLISTIC KINESIOLOGY</a></li> <li><a href="=53_57">TRANSPERSONAL COUNSELLING</a></li> <li><a href="=53_58">SHAMMANISM &amp; TRANSFORMATIONAL MASK</a></li> </ul> </li> <li><a href="=50">DIPLOMA MASK AND TRADITIONAL HEALING</a></li> <li><a href="=43">DIPLOMA TRANSPERSONAL ART THERAPY</a></li> <li><a href="=42">DIPLOMA HOLISTIC KINESIOLOGY</a></li> <li><a href="=47">ADVANCE DIPLOMA HOLISTIC KINESIOLOGY</a></li> <li><a href="=48">DIPLOMA DINAMIC AND FUNCTIONAL</a></li> <li><a href="=49">CERTIFICATE MASK AND TRADITIONAL HEALING</a></li> <li><a href="=51">DIPLOMA TRANSPERSONAL COUNSELLING</a></li> <li><a href="=52">STUDENT CLINICS</a></li> </ul> </li> Please, I need your help. Thanks in advance.

    Read the article

  • Few Google Checkout Questions

    - by Richard Knop
    I am planning to integrate a Google Checkout payment system on a social networking website. The idea is that members can buy "tokens" for real money (which are sort of the website currency) and then they can buy access to some extra content on the website, etc. What I want to do is create a Google Checkout button that takes a member to the checkout page where he pays with his credit or debit card. What I want is for Google Checkout to notify my server whether the purchase of tokens was successful (i.e. whether the credit/debit card was charged), so I can update the local database. The website is coded in PHP/MySQL. I have downloaded the sample PHP code from here: code.google.com/p/google-checkout-php-sample-code/wiki/Documentation I know how to create a Google Checkout button and I have also placed the responsehandlerdemo.php file on my server. This is the file Google Checkout is supposed to send the response to (of course, I set the path to the file in the Google merchant account). Now, in the response handler file there is a switch block with several case statements. Which one means that the payment was successful and I can add tokens to the member's account in the local database? switch ($root) { case "request-received": { break; } case "error": { break; } case "diagnosis": { break; } case "checkout-redirect": { break; } case "merchant-calculation-callback": { // Create the results and send it $merchant_calc = new GoogleMerchantCalculations($currency); // Loop through the list of address ids from the callback $addresses = get_arr_result($data[$root]['calculate']['addresses']['anonymous-address']); foreach($addresses as $curr_address) { $curr_id = $curr_address['id']; $country = $curr_address['country-code']['VALUE']; $city = $curr_address['city']['VALUE']; $region = $curr_address['region']['VALUE']; $postal_code = $curr_address['postal-code']['VALUE']; // Loop through each shipping method if merchant-calculated shipping // support is to be provided if(isset($data[$root]['calculate']['shipping'])) { $shipping = get_arr_result($data[$root]['calculate']['shipping']['method']); foreach($shipping as $curr_ship) { $name = $curr_ship['name']; //Compute the price for this shipping method and address id $price = 12; // Modify this to get the actual price $shippable = "true"; // Modify this as required $merchant_result = new GoogleResult($curr_id); $merchant_result->SetShippingDetails($name, $price, $shippable); if($data[$root]['calculate']['tax']['VALUE'] == "true") { //Compute tax for this address id and shipping type $amount = 15; // Modify this to the actual tax value $merchant_result->SetTaxDetails($amount); } if(isset($data[$root]['calculate']['merchant-code-strings'] ['merchant-code-string'])) { $codes = get_arr_result($data[$root]['calculate']['merchant-code-strings'] ['merchant-code-string']); foreach($codes as $curr_code) { //Update this data as required to set whether the coupon is valid, the code and the amount $coupons = new GoogleCoupons("true", $curr_code['code'], 5, "test2"); $merchant_result->AddCoupons($coupons); } } $merchant_calc->AddResult($merchant_result); } } else { $merchant_result = new GoogleResult($curr_id); if($data[$root]['calculate']['tax']['VALUE'] == "true") { //Compute tax for this address id and shipping type $amount = 15; // Modify this to the actual tax value $merchant_result->SetTaxDetails($amount); } $codes = get_arr_result($data[$root]['calculate']['merchant-code-strings'] ['merchant-code-string']); foreach($codes as $curr_code) { //Update this data as required to
    set whether the coupon is valid, the code and the amount $coupons = new GoogleCoupons("true", $curr_code['code'], 5, "test2"); $merchant_result->AddCoupons($coupons); } $merchant_calc->AddResult($merchant_result); } } $Gresponse->ProcessMerchantCalculations($merchant_calc); break; } case "new-order-notification": { $Gresponse->SendAck(); break; } case "order-state-change-notification": { $Gresponse->SendAck(); $new_financial_state = $data[$root]['new-financial-order-state']['VALUE']; $new_fulfillment_order = $data[$root]['new-fulfillment-order-state']['VALUE']; switch($new_financial_state) { case 'REVIEWING': { break; } case 'CHARGEABLE': { //$Grequest->SendProcessOrder($data[$root]['google-order-number']['VALUE']); //$Grequest->SendChargeOrder($data[$root]['google-order-number']['VALUE'],''); break; } case 'CHARGING': { break; } case 'CHARGED': { break; } case 'PAYMENT_DECLINED': { break; } case 'CANCELLED': { break; } case 'CANCELLED_BY_GOOGLE': { //$Grequest->SendBuyerMessage($data[$root]['google-order-number']['VALUE'], // "Sorry, your order is cancelled by Google", true); break; } default: break; } switch($new_fulfillment_order) { case 'NEW': { break; } case 'PROCESSING': { break; } case 'DELIVERED': { break; } case 'WILL_NOT_DELIVER': { break; } default: break; } break; } case "charge-amount-notification": { //$Grequest->SendDeliverOrder($data[$root]['google-order-number']['VALUE'], // <carrier>, <tracking-number>, <send-email>); //$Grequest->SendArchiveOrder($data[$root]['google-order-number']['VALUE'] ); $Gresponse->SendAck(); break; } case "chargeback-amount-notification": { $Gresponse->SendAck(); break; } case "refund-amount-notification": { $Gresponse->SendAck(); break; } case "risk-information-notification": { $Gresponse->SendAck(); break; } default: $Gresponse->SendBadRequestStatus("Invalid or not supported Message"); break; } I guess that case 'CHARGED' is the one, am I right? Second question: do I need an SSL certificate to receive responses from Google Checkout? According to this I do: groups.google.com/group/google-checkout-api-php/browse_thread/thread/10ce55177281c2b0 But I don't see it mentioned anywhere in the official documentation. Thank you.

    Read the article

  • Ivy and Snapshots (Nexus)

    - by Uberpuppy
    Hey folks, I'm using ant, ivy and the nexus repo manager to build and store my artifacts. I managed to get everything working: dependency resolution and publishing. Until I hit a problem... (of course!). I was publishing to a 'release' repo in nexus, which is locked to 'disable redeploy' (even if you change the setting to 'allow redeploy'; really lame UI there, imo). You can imagine how pissed off I was getting when my changes weren't updating through the repo before I realised that this was happening. Anyway, I now have to switch everything to use a 'Snapshot' repo in nexus. The problem is that this messes up my publish. I've tried a variety of things, including extensive googling, and haven't got anywhere whatsoever. The error I get is a bad PUT request, error code 400. Can someone who has got this working please give me a pointer on what I'm missing? Many thanks, Alastair. FYI, here's my config. Note that I have removed any attempts at getting snapshots to work, as I didn't know what was actually (potentially) useful and what was complete guff. This is therefore the working release-only setup. Also, please note that I've added the XXX-API ivy.xml for info only. I can't even get the xxx-common to publish (and that doesn't even have dependencies). Ant task: <target name="publish" depends="init-publish"> <property name="project.generated.ivy.file" value="${project.artifact.dir}/ivy.xml"/> <property name="project.pom.file" value="${project.artifact.dir}/${project.handle}.pom"/> <echo message="Artifact dir: ${project.artifact.dir}"/> <ivy:deliver deliverpattern="${project.generated.ivy.file}" organisation="${project.organisation}" module="${project.artifact}" status="integration" revision="${project.revision}" pubrevision="${project.revision}" /> <ivy:resolve /> <ivy:makepom ivyfile="${project.generated.ivy.file}" pomfile="${project.pom.file}"/> <ivy:publish resolver="${ivy.omnicache.publisher}" module="${project.artifact}" organisation="${project.organisation}" revision="${project.revision}" pubrevision="${project.revision}" pubdate="now" overwrite="true" publishivy="true" status="integration" artifactspattern="${project.artifact.dir}/[artifact]-[revision](-[classifier]).[ext]" /> </target> A couple of ivy files to give an idea of the internal dependencies: XXX-Common project: <ivy-module version="2.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="http://ant.apache.org/ivy/schemas/ivy.xsd"> <info organisation="com.myorg.xxx" module="xxx_common" status="integration" revision="1.0"> </info> <publications> <artifact name="xxx_common" type="jar" ext="jar"/> <artifact name="xxx_common" type="pom" ext="pom"/> </publications> <dependencies> </dependencies> </ivy-module> XXX-API project: <ivy-module version="2.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:noNamespaceSchemaLocation="http://ant.apache.org/ivy/schemas/ivy.xsd"> <info organisation="com.myorg.xxx" module="xxx_api" status="integration" revision="1.0"> </info> <publications> <artifact name="xxx_api" type="jar" ext="jar"/> <artifact name="xxx_api" type="pom" ext="pom"/> </publications> <dependencies> <dependency org="com.myorg.xxx" name="xxx_common" rev="1.0" transitive="true" /> </dependencies> </ivy-module> IVY Settings.xml: <ivysettings> <properties file="${ivy.project.dir}/project.properties" /> <settings defaultResolver="chain" defaultConflictManager="all" /> <credentials host="${ivy.credentials.host}" realm="Sonatype Nexus Repository Manager" username="${ivy.credentials.username}"
passwd="${ivy.credentials.passwd}" /> <caches> <cache name="ivy.cache" basedir="${ivy.cache.dir}" /> </caches> <resolvers> <ibiblio name="xxx_publisher" m2compatible="true" root="${ivy.xxx.publish.url}" /> <chain name="chain"> <url name="xxx"> <ivy pattern="${ivy.xxx.repo.url}/com/myorg/xxx/[module]/[revision]/ivy-[revision].xml" /> <artifact pattern="${ivy.xxx.repo.url}/com/myorg/xxx/[module]/[revision]/[artifact]-[revision].[ext]" /> </url> <ibiblio name="xxx" m2compatible="true" root="${ivy.xxx.repo.url}"/> <ibiblio name="public" m2compatible="true" root="${ivy.master.repo.url}" /> <url name="com.springsource.repository.bundles.release"> <ivy pattern="http://repository.springsource.com/ivy/bundles/release/[organisation]/[module]/[revision]/[artifact]-[revision].[ext]" /> <artifact pattern="http://repository.springsource.com/ivy/bundles/release/[organisation]/[module]/[revision]/[artifact]-[revision].[ext]" /> </url> <url name="com.springsource.repository.bundles.external"> <ivy pattern="http://repository.springsource.com/ivy/bundles/external/[organisation]/[module]/[revision]/[artifact]-[revision].[ext]" /> <artifact pattern="http://repository.springsource.com/ivy/bundles/external/[organisation]/[module]/[revision]/[artifact]-[revision].[ext]" /> </url> </chain> </resolvers> </ivysettings>

    Read the article

  • Content display problems when using Suckerfish menus with 960.gs and IE

    - by Cedar Jensen
    I'm using the 960.gs layout, and when I add the suckerfish menu as part of the content to one of the grids, the contents of adjacent siblings bleed through the menu in all versions of IE. In the listed html below, the text from 'belowFoldSection' will appear through the menu when it is visible and has enough items to make it span over the 2nd section. However, the contents of 'introSummary' will be underneath the menu, as expected. I've set the z-index for #nav and #nav ul in my css, and this of course makes it work in FF, Chrome and Safari, but not in IE (because IE incorrectly assigns child elements their own z-index). If I change the .grid_nn class 'position' attribute (set by default in the 960 template) from 'relative' to 'absolute', this fixes it in IE. However, it is my understanding that I don't want the child elements of the 'container_12' to be taken out of the flow of the document; I want them positioned relative to the .container_12's starting point. (Changing the attribute to absolute causes other general layout problems.) Can anyone suggest a work-around? My html: <div class="container_12"> <!--First section where menu lives--> <div class="grid_12" id="mainSection"> <div class="grid_4 alpha" id="intro"> <p>Start of menu here</p> <div id="subMenu"> <ul id="nav"> <li><a href="#">Item 1</a> <ul> <li><a href="#">Burrowing gobies</a></li> <li><a href="#">Dartfishes</a></li> <li><a href="#">Eellike gobies</a></li> <!--10 more for longer list --> </ul> </li> <li><a href="#">Item 2</a> <ul> <li><a href="#">Remoras</a></li> <li><a href="#">Tilefishes</a></li> <!--10 more for longer list --> </ul> </li> <li><a href="#">Item 3</a> <ul> <li><a href="#">Climbing perches</a></li> <li><a href="#">Labyrinthfishes</a></li> <li><a href="#">Kissing gouramis</a></li> <!--10 more for longer list --> </ul> </li> </ul> <div id="introSummary"> <h1>PERCIFORMES! (1)</h1> <p>Welcome to the world of Perciformes - perch-like fish including the world famous <strong>Suckerfish</strong></p> </div> </div> <!-- end of sub menu --> </div> <div class="grid_8 omega" id="summary"> <p>Some stuff goes here</p> </div> </div> <!-- End of first section --> <div class="clear">&nbsp;</div> <div class="grid_12 spacer"> </div> <div class="grid_4" id="belowFoldSection"> <p>Here is some stuff I want to appear below the menu when the pop-up is visible</p> </div> </div> <!-- container_12 --> The suckerfish css file: #nav, #nav ul { /* all lists */ padding: 0; margin: 0; list-style: none; line-height: 1; z-index: 99; } #nav a { display: block; width: 10em; } #nav li { /* all list items */ float: left; width: 10em; } #nav li ul { /* second-level lists */ position: absolute; background: orange; width: 10em; left: -999em; } #nav li:hover ul, #nav li.sfhover ul { /* lists nested under hovered list items */ left: auto; } Default 960.gs css: .container_12, .container_16 { margin-left: auto; margin-right: auto; width: 960px; } .grid_1, .grid_2, .grid_3, .grid_4, .grid_5, .grid_6, .grid_7, .grid_8, .grid_9, .grid_10, .grid_11, .grid_12, .grid_13, .grid_14, .grid_15, .grid_16 { display: inline; float: left; position: relative; margin-left: 10px; margin-right: 10px; }

    Read the article

  • migrating webclient to WCF; WCF client serializes parametername of method

    - by Wouter
    I'm struggling with migrating from a webservice/webclient architecture to WCF. The objects are very complex, with lots of nested xsd's and different namespaces. Proxy classes are generated by adding a Web Reference to an original wsdl with 30+ webmethods and using xsd.exe to generate the missing SOAPFault objects. My pilot WCF service consists of only 1 webmethod, which matches the exact syntax of one of the original methods: 1 object as parameter, returning 1 other object as the result value. I created a WCF interface using those proxy classes, using attributes: XMLSerializerFormat and ServiceContract on the interface, OperationContract on one method from the original wsdl specifying Action and ReplyAction, all with the proper namespaces. I create incoming client messages using SoapUI; I generated a project from the original WSDL files (causing the SoapUI project to have 30+ methods), created one new Request at the one implemented webmethod, changed the url to my wcf webservice and sent the message. Because of the specified (Reply-)Action in the OperationContractAttribute, the message is actually received and properly deserialized into an object. To get this far (40 hours of googling), a lot of frustration led me to using a custom endpoint in which the WCF 'wrapped tags' are removed, the namespaces for nested types are corrected, and the generated wsdl gets flattened (for better compatibility with tools other than MS VisualStudio). Interface code is this: [XmlSerializerFormat(Use = OperationFormatUse.Literal, Style = OperationFormatStyle.Document, SupportFaults = true)] [ServiceContract(Namespace = Constants.NamespaceStufZKN)] public interface IOntvangAsynchroon { [OperationContract(Action = Constants.NamespaceStufZKN + "/zakLk01", ReplyAction = Constants.NamespaceStufZKN + "/zakLk01", Name = "zakLk01")] [FaultContract(typeof(Fo03Bericht), Namespace = Constants.NamespaceStuf)] Bv03Bericht zakLk01([XmlElement("zakLk01", Namespace = Constants.NamespaceStufZKN)] ZAKLk01 zakLk011); } When I use a WebClient in code to send a message, everything works. My problem is when I use a WCF client. I use ChannelFactory<IOntvangAsynchroon> to send a message. But the generated xml looks different: it includes the parameter name of the method! It took me a lot of time to figure this one out, but here's what happens: Correct xml (stripped soap envelope): <soap:Body> <zakLk01 xmlns="http://www.egem.nl/StUF/sector/zkn/0310"> <stuurgegevens> <berichtcode xmlns="http://www.egem.nl/StUF/StUF0301">Bv01</berichtcode> <zender xmlns="http://www.egem.nl/StUF/StUF0301"> <applicatie>ONBEKEND</applicatie> </zender> </stuurgegevens> <parameters> </parameters> </zakLk01> </soap:Body> Bad xml: <soap:Body> <zakLk01 xmlns="http://www.egem.nl/StUF/sector/zkn/0310"> <zakLk011> <stuurgegevens> <berichtcode xmlns="http://www.egem.nl/StUF/StUF0301">Bv01</berichtcode> <zender xmlns="http://www.egem.nl/StUF/StUF0301"> <applicatie>ONBEKEND</applicatie> </zender> </stuurgegevens> <parameters> </parameters> </zakLk011> </zakLk01> </soap:Body> Notice the 'zakLk011' element? It is the name of the parameter of the method in my interface! So NOW it is zakLk011, but when my parameter name was 'zakLk01', the xml seemed to contain some magical duplicate of the tag above, but without a namespace. Of course, you can imagine me going crazy over what was happening before finding out it was the parameter name! I have now actually created a WCF service to which I cannot send messages using a WCF client anymore.
    For clarity: the method does get invoked on my webservice when using the WCF client, but the parameter object is empty. Because I'm using a custom endpoint to log the incoming xml, I can see the message is received fine, just with the wrong syntax! WCF client code: ZAKLk01 stufbericht = MessageFactory.CreateZAKLk01(); ChannelFactory<IOntvangAsynchroon> factory = new ChannelFactory<IOntvangAsynchroon>(new BasicHttpBinding(), new EndpointAddress("http://localhost:8193/Roxit/Link/zkn0310")); factory.Endpoint.Behaviors.Add(new LinkEndpointBehavior()); IOntvangAsynchroon client = factory.CreateChannel(); client.zakLk01(stufbericht); I am not using a generated client; I just reference the webservice like I have lots of times. Can anyone please help me? I can't google anything on this...
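    For reference, the usual way to stop a ChannelFactory client from emitting that parameter-name wrapper is a bare MessageContract. A sketch only: the response element name below is a guess (match whatever the original wsdl declares), and once the request becomes a message contract, WCF requires the reply type to be one as well:

        using System.ServiceModel;

        // Suppress the wrapper by describing the message body explicitly.
        [MessageContract(IsWrapped = false)]
        public class ZakLk01Request
        {
            [MessageBodyMember(Name = "zakLk01", Namespace = Constants.NamespaceStufZKN)]
            public ZAKLk01 Body;
        }

        [MessageContract(IsWrapped = false)]
        public class Bv03Response
        {
            // Element name assumed; take it from the original wsdl.
            [MessageBodyMember(Name = "bv03Bericht", Namespace = Constants.NamespaceStuf)]
            public Bv03Bericht Body;
        }

        [ServiceContract(Namespace = Constants.NamespaceStufZKN)]
        [XmlSerializerFormat]
        public interface IOntvangAsynchroonBare
        {
            [OperationContract(Action = Constants.NamespaceStufZKN + "/zakLk01",
                               ReplyAction = Constants.NamespaceStufZKN + "/zakLk01")]
            Bv03Response zakLk01(ZakLk01Request request);
        }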

    Read the article

  • C - array count, strtok, etc

    - by Pedro
    Hi... I have a little problem in my code... I open a txt file that contains this: LEI;7671;Maria Albertina da silva;[email protected]; 9;8;12;9;12;11;6;15;7;11; LTCGM;6567;Artur Pereira Ribeiro;[email protected]; 6;13;14;12;11;16;14; LEI;7701;Ana Maria Carvalho;[email protected]; 8;13;11;7;14;12;11;16;14; LEI and LTCGM are the colleges; 7671, 6567 and 7701 are the student numbers; Maria, Artur and Ana are the students' names; [email protected], ...@gmail are the students' emails; the first number on every line is the total number of classes the student has; after that come the student's class notes; example: College: LEI Number: 7671 Name: Maria Albertina da Silva email: [email protected] total of classes: 9 Class notes: 8 12 9 12 11 6 15 7 11. My code: typedef struct aluno{ char sigla[5];//college char numero[80];//number char nome[80];//student name char email[20];//email int total_notas;// total of classes char tot_not[40]; // total classes char notas[20];// class notes int nota; //class notes char situacao[80]; //situation (approved or disapproved) }ALUNO; void ordena(ALUNO*alunos, int tam)//bubble sort { int i=0; int j=0; char temp[100]; for( i=0;i<tam;i++) for(j=0;j<tam-1;j++) if(strcmp( alunos[i].sigla[j], alunos[i].sigla[j+1])>0){ strcpy(temp, alunos[i].sigla[j]); strcpy(alunos[i].sigla[j],alunos[i].sigla[j+1]); strcpy(alunos[i].sigla[j+1], temp); } } void xml(ALUNO*alunos, int tam){ FILE *fp; char linha[60];//line int soma, max, min, count;//biggest note, lowest note, and students-per-college count float media; //average of the notes fp=fopen("example.txt","r"); if(fp==NULL){ exit(1); } else{ while(!(feof(fp))){ soma=0; media=0; max=0; min=0; count=0; fgets(linha,60,fp); if(linha[0]=='L'){ if(ap_dados=strtok(linha,";")){ strcpy(alunos[i].sigla,ap_dados);//copy to struct // I need to call bubble sort here, but I don't know how printf("College: %s\n",alunos[i].sigla); if(ap_dados=strtok(NULL,";")){ strcpy(alunos[i].numero,ap_dados);//copy to struct printf("number: %s\n",alunos[i].numero); if(ap_dados=strtok(NULL,";")){ strcpy(alunos[i].nome, ap_dados);//copy to struct printf("name: %s\n",alunos[i].nome); if(ap_dados=strtok(NULL,";")){ strcpy(alunos[i].email, ap_dados);//copy to struct printf("email: %s\n",alunos[i].email); } } } }i++; } if(isdigit(linha[0])){ if(info_notas=strtok(linha,";")){ strcpy(alunos[i].tot_not,info_notas); alunos[i].total_notas=atoi(alunos[i].tot_not);//total classes for(z=0;z<=alunos[i].total_notas;z++){ if(info_notas=strtok(NULL,";")){ strcpy(alunos[i].notas,info_notas); alunos[i].nota=atoi(alunos[i].notas); // student class notes } soma=soma + alunos[i].nota; media=soma/alunos[i].total_notas;//doesn't work if(alunos[i].nota>max){ max=alunos[i].nota;;//doesn't work } else{ if(min<alunos[i].nota){ min=alunos[i].nota;;//doesn't work } } //now I need to count the number of students in the same college, but it doesn't work /*If(strcmp(alunos[i].sigla, alunos[i+1].sigla)=0){ count ++; printf("%d\n", count); here LEI should show 2 students and LTCGM 1; doesn't work }*/ //Now I need to check whether the student is approved or disapproved // A student is disapproved if he gets 3 notes under 10; how can I do that? } printf("media %d\n",media); //average printf("Nota maxima %d\n",max);// biggest note printf("Nota minima %d\n",min); //lowest note }i++; } } } fclose(fp); } int main(int argc, char *argv[]){ ALUNO alunos; FILE *fp; int tam; fp=fopen(nomeFicheiro,"r"); alunos = (ALUNO*) calloc (tam, sizeof(ALUNO)); xml(alunos,nomeFicheiro, tam); system("PAUSE"); return 0; }


  • Error when running a GWTTestCase using maven gwt plugin

    - by adancu
    Hi, I've created a test which extends GWTTestCase, but I'm getting this error when running mvn integration-test gwt:test:

        Running com.myproject.test.ui.GwtTestMyFirstTestCase
        Translatable source found in...
        [WARN] No source path entries; expect subsequent failures
        [ERROR] Unable to find type 'java.lang.Object'
        [ERROR] Hint: Check that your module inherits 'com.google.gwt.core.Core' either directly or indirectly (most often by inheriting module 'com.google.gwt.user.User')
        Tests run: 1, Failures: 0, Errors: 1, Skipped: 0, Time elapsed: 2.1 sec <<< FAILURE!

    GwtTestMyFirstTestCase.java is in src/test/java, while the GWT module is located in src/main/java; I assume this shouldn't be a problem. I've done everything required according to http://mojo.codehaus.org/gwt-maven-plugin/user-guide/testing.html, and my GWT module already inherits com.google.gwt.core.Core indirectly. Here is the pom.xml:

        <project xmlns="http://maven.apache.org/POM/4.0.0"
                 xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
                 xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
            <modelVersion>4.0.0</modelVersion>
            <groupId>com.myproject</groupId>
            <artifactId>main</artifactId>
            <packaging>jar</packaging>
            <version>0.0.1-SNAPSHOT</version>
            <name>Main Module</name>
            <properties>
                <gwt.module>com.myproject.MainModule</gwt.module>
            </properties>
            <parent>
                <groupId>com.myproject</groupId>
                <artifactId>app</artifactId>
                <version>0.1.0-SNAPSHOT</version>
            </parent>
            <dependencies>
                <dependency>
                    <groupId>com.myproject</groupId>
                    <artifactId>app-commons</artifactId>
                    <version>0.0.1-SNAPSHOT</version>
                </dependency>
                <dependency>
                    <groupId>com.google.gwt</groupId>
                    <artifactId>gwt-dev</artifactId>
                    <version>${gwt.version}</version>
                    <scope>provided</scope>
                </dependency>
            </dependencies>
            <build>
                <plugins>
                    <plugin>
                        <groupId>org.apache.maven.plugins</groupId>
                        <artifactId>maven-dependency-plugin</artifactId>
                        <configuration>
                            <outputFile>../app/src/main/webapp/WEB-INF/main.tree</outputFile>
                        </configuration>
                    </plugin>
                    <plugin>
                        <groupId>org.codehaus.mojo</groupId>
                        <artifactId>gwt-maven-plugin</artifactId>
                        <executions>
                            <execution>
                                <goals>
                                    <goal>test</goal>
                                </goals>
                            </execution>
                        </executions>
                    </plugin>
                    <plugin>
                        <groupId>org.apache.maven.plugins</groupId>
                        <artifactId>maven-jar-plugin</artifactId>
                        <configuration>
                            <classesDirectory>
                                ${project.build.directory}/${project.build.finalName}/${gwt.module}
                            </classesDirectory>
                        </configuration>
                    </plugin>
                </plugins>
            </build>
        </project>

    Here is the test case, located in src/test/java/com/myproject/test/ui:

        public class GwtTestMyFirstTestCase extends GWTTestCase {
            @Override
            public String getModuleName() {
                return "com.myproject.MainModule";
            }

            public void testSomething() {
            }
        }

    Here is the GWT module I'm trying to test, located at src/main/java/com/myproject/MainModule.gwt.xml:

        <module>
            <inherits name='com.myproject.Commons' />
            <source path="site" />
            <source path="com.myproject.test.ui" />
            <set-property name="gwt.suppressNonStaticFinalFieldWarnings" value="true" />
            <entry-point class='com.myproject.site.SiteModuleEntry' />
        </module>

    Can anyone give me a hint or two about what I'm doing wrong? Thanks in advance.
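    Not a confirmed fix, but one detail stands out (this is an assumption about the cause): a GWT <source> element takes a slash-separated subpackage path relative to the module's own package, while the module above uses the dotted name com.myproject.test.ui, which would leave GWT with no usable source path entries, matching the warning in the log. A sketch of what the module might look like with slash-style paths and an explicit inherit of com.google.gwt.user.User (which pulls in com.google.gwt.core.Core and the java.lang emulation):

        <module>
            <!-- User inherits com.google.gwt.core.Core, which supplies java.lang.Object -->
            <inherits name='com.google.gwt.user.User' />
            <inherits name='com.myproject.Commons' />
            <source path='site' />
            <!-- assumption: subpackage path relative to com.myproject, not a dotted package name -->
            <source path='test/ui' />
            <set-property name='gwt.suppressNonStaticFinalFieldWarnings' value='true' />
            <entry-point class='com.myproject.site.SiteModuleEntry' />
        </module>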


  • Erasing and modifying elements in Boost MultiIndex Container

    - by Sarah
    I'm trying to use a Boost MultiIndex container in my simulation. My knowledge of C++ syntax is very weak, and I'm concerned I'm not properly removing an element from the container or deleting it from memory. I also need to modify elements, and I was hoping to confirm the syntax and basic philosophy here too.

        // main.cpp
        ...
        #include <boost/multi_index_container.hpp>
        #include <boost/multi_index/hashed_index.hpp>
        #include <boost/multi_index/member.hpp>
        #include <boost/multi_index/ordered_index.hpp>
        #include <boost/multi_index/mem_fun.hpp>
        #include <boost/tokenizer.hpp>
        #include <boost/shared_ptr.hpp>
        ...
        #include "Host.h" // class Host, all members private, using get fxns to access

        using boost::multi_index_container;
        using namespace boost::multi_index;

        typedef multi_index_container<
            boost::shared_ptr< Host >,
            indexed_by<
                hashed_unique< const_mem_fun<Host,int,&Host::getID> >
                // ordered_non_unique< BOOST_MULTI_INDEX_MEM_FUN(Host,int,&Host::getAge) >
            > // end indexed_by
        > HostContainer;

        typedef HostContainer::nth_index<0>::type HostsByID;

        int main() {
            ...
            HostContainer allHosts;
            Host * newHostPtr;
            newHostPtr = new Host( t, DOB, idCtr, 0, currentEvents );
            allHosts.insert( boost::shared_ptr<Host>(newHostPtr) );
            // allHosts gets filled up
            int randomHostID = 4;
            int newAge = 50;
            modifyHost( randomHostID, allHosts, newAge );
            killHost( randomHostID, allHosts );
        }

        void killHost( int id, HostContainer & hmap ){
            HostsByID::iterator it = hmap.find( id );
            cout << "Found host id " << (*it)->getID() << ". Attempting to kill. hmap.size() before is " << hmap.size() << " and ";
            hmap.erase( it ); // Is this really erasing (freeing from mem) the underlying Host object?
            cout << hmap.size() << " after." << endl;
        }

        void modifyHost( int id, HostContainer & hmap, int newAge ){
            HostsByID::iterator it = hmap.find( id );
            (*it) -> setAge( newAge ); // Not actually the "modify" function for MultiIndex...
        }

    My questions are:

    1. In the MultiIndex container allHosts of shared_ptrs to Host objects, is calling allHosts.erase( it ) on an iterator to the object's shared_ptr enough to delete the object permanently and free it from memory? It appears to be removing the shared_ptr from the container.
    2. The allHosts container currently has one functioning index that relies on the host's ID. If I introduce an ordered second index that calls on a member function (Host::getAge()), where the age changes over the course of the simulation, is the index always going to be updated when I refer to it?
    3. What is the difference between using the MultiIndex's modify to modify the age of the underlying object versus the approach I show above? I'm vaguely confused about what is assumed/required to be constant in MultiIndex.

    Thanks in advance.

    Update: Here's my attempt to get the modify syntax working, based on what I see in a related Boost example.

        struct update_age {
            update_age():(){} // have no idea what this really does... elicits error
            void operator() (boost::shared_ptr<Host> ptr) {
                ptr->incrementAge(); // incrementAge() is a member function of class Host
            }
        };

    and then in modifyHost, I'd have hmap.modify(it, update_age). Even if by some miracle this turns out to be right, I'd love some kind of explanation of what's going on.
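    For the Update above, a minimal sketch of what the functor and the modify call could look like; it assumes Host::incrementAge() exists as described, and is an illustration of Boost.MultiIndex's modify(), not a drop-in for the original program:

        // The modifier needs no user-declared constructor ("update_age():(){}" is not
        // valid C++); modify() invokes operator() with a reference to the stored element.
        struct update_age {
            void operator()( boost::shared_ptr<Host> & ptr ) const {
                ptr->incrementAge(); // assumes Host::incrementAge(), per the question
            }
        };

        // Usage: pass a functor instance, update_age(), not the type name.
        void modifyHost( int id, HostContainer & hmap ){
            HostsByID::iterator it = hmap.find( id );
            if( it != hmap.end() )
                hmap.modify( it, update_age() ); // the container re-indexes the element
        }

    Going through modify() matters once an index depends on a value that changes (such as the commented-out age index), because it lets the container reposition the element in every index after the change. As for erase( it ): it removes the shared_ptr from the container, and the Host itself is freed when the last shared_ptr to it goes away, which happens immediately if the container held the only reference.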

