Search Results

Search found 18829 results on 754 pages for 'null terminated'.


  • Why is there a Null Pointer Exception in this Java Code?

    - by algorithmicCoder
    This code takes in users and movies from two separate files and computes a user score for a movie. When i run the code I get the following error: Exception in thread "main" java.lang.NullPointerException at RecommenderSystem.makeRecommendation(RecommenderSystem.java:75) at RecommenderSystem.main(RecommenderSystem.java:24) I believe the null pointer exception is due to an error in this particular class but I can't spot it....any thoughts? import java.io.*; import java.lang.Math; public class RecommenderSystem { private Movie[] m_movies; private User[] m_users; /** Parse the movies and users files, and then run queries against them. */ public static void main(String[] argv) throws FileNotFoundException, ParseError, RecommendationError { FileReader movies_fr = new FileReader("C:\\workspace\\Recommender\\src\\IMDBTop10.txt"); FileReader users_fr = new FileReader("C:\\workspace\\Recommender\\src\\IMDBTop10-users.txt"); MovieParser mp = new MovieParser(movies_fr); UserParser up = new UserParser(users_fr); Movie[] movies = mp.getMovies(); User[] users = up.getUsers(); RecommenderSystem rs = new RecommenderSystem(movies, users); System.out.println("Alice would rate \"The Shawshank Redemption\" with at least a " + rs.makeRecommendation("The Shawshank Redemption", "asmith")); System.out.println("Carol would rate \"The Dark Knight\" with at least a " + rs.makeRecommendation("The Dark Knight", "cd0")); } /** Instantiate a recommender system. * * @param movies An array of Movie that will be copied into m_movies. * @param users An array of User that will be copied into m_users. */ public RecommenderSystem(Movie[] movies, User[] users) throws RecommendationError { m_movies = movies; m_users = users; } /** Suggest what the user with "username" would rate "movieTitle". * * @param movieTitle The movie for which a recommendation is made. * @param username The user for whom the recommendation is made. */ public double makeRecommendation(String movieTitle, String username) throws RecommendationError { int userNumber; int movieNumber; int j=0; double weightAvNum =0; double weightAvDen=0; for (userNumber = 0; userNumber < m_users.length; ++userNumber) { if (m_users[userNumber].getUsername().equals(username)) { break; } } for (movieNumber = 0; movieNumber < m_movies.length; ++movieNumber) { if (m_movies[movieNumber].getTitle().equals(movieTitle)) { break; } } // Use the weighted average algorithm here (don't forget to check for // errors). while(j<m_users.length){ if(j!=userNumber){ weightAvNum = weightAvNum + (m_users[j].getRating(movieNumber)- m_users[j].getAverageRating())*(m_users[userNumber].similarityTo(m_users[j])); weightAvDen = weightAvDen + (m_users[userNumber].similarityTo(m_users[j])); } j++; } return (m_users[userNumber].getAverageRating()+ (weightAvNum/weightAvDen)); } } class RecommendationError extends Exception { /** An error for when something goes wrong in the recommendation process. * * @param s A string describing the error. */ public RecommendationError(String s) { super(s); } }
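    Without seeing MovieParser and UserParser it is hard to say which reference is null at line 75, but two usual suspects in this kind of code are (a) parser arrays that were allocated larger than the number of records actually read, leaving trailing null elements in m_users or m_movies, and (b) auto-unboxing a null rating. The following is a small, hypothetical sketch (the User class here is a stub, not the original) of the defensive checks that normally expose the culprit:

    // Hypothetical, simplified sketch (not the original RecommenderSystem): just the
    // defensive checks that usually expose this kind of NullPointerException.
    public class RecommendationCheck {

        static class User {
            private final String username;
            private final Double rating; // may be null if the user never rated the movie

            User(String username, Double rating) {
                this.username = username;
                this.rating = rating;
            }

            String getUsername() { return username; }
            Double getRating()   { return rating; }
        }

        /** Returns the index of the user, or -1 instead of silently falling off the end. */
        static int indexOf(User[] users, String username) {
            for (int i = 0; i < users.length; i++) {
                // users[i] can itself be null if the parser allocated a larger array
                // than it filled; dereferencing such an element is a classic NPE source.
                if (users[i] != null && users[i].getUsername().equals(username)) {
                    return i;
                }
            }
            return -1;
        }

        public static void main(String[] args) {
            User[] users = { new User("asmith", 4.5), null, new User("cd0", null) };

            int index = indexOf(users, "cd0");
            if (index < 0) {
                System.out.println("User not found: bail out before indexing the array");
                return;
            }

            Double rating = users[index].getRating();
            // Auto-unboxing a null Double (e.g. using it in arithmetic) also throws an NPE,
            // so check it before computing the weighted average.
            System.out.println(rating == null ? "No rating recorded" : "Rating: " + rating);
        }
    }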

    Read the article

  • Bluetooth connection problem with Sony Ericsson

    - by Hugi
    I have bt client and server. Then i use method Connector.open, client connects to the port, but passed so that my server does not see them. Nokia for all normal, but with sony ericsson i have this problem. On bt adapter open one port (com 5). Listings Client /* * To change this template, choose Tools | Templates * and open the template in the editor. */ import java.util.Vector; import javax.bluetooth.*; import javax.microedition.midlet.*; import javax.microedition.lcdui.*; import javax.microedition.io.*; import java.io.*; /** * @author ????????????? */ public class Client extends MIDlet implements DiscoveryListener, CommandListener { private static Object lock=new Object(); private static Vector vecDevices=new Vector(); private ServiceRecord[] servRec = new ServiceRecord[INQUIRY_COMPLETED]; private Form form = new Form( "Search" ); private List voteList = new List( "Vote list", List.IMPLICIT ); private List vote = new List( "", List.EXCLUSIVE ); private RemoteDevice remoteDevice; private String connectionURL = null; protected int stopToken = 255; private Command select = null; public void startApp() { //view form Display.getDisplay(this).setCurrent(form); try { //device search print("Starting device inquiry..."); getAgent().startInquiry(DiscoveryAgent.GIAC, this); try { synchronized(lock){ lock.wait(); } }catch (InterruptedException e) { e.printStackTrace(); } //device count int deviceCount=vecDevices.size(); if(deviceCount <= 0) { print("No Devices Found ."); } else{ remoteDevice=(RemoteDevice)vecDevices.elementAt(0); print( "Server found" ); //create uuid UUID uuid = new UUID(0x1101); UUID uuids[] = new UUID[] { uuid }; //search service print( "Searching for service..." ); getAgent().searchServices(null,uuids,remoteDevice,this); } } catch( Exception e) { e.printStackTrace(); } } //if deivce discovered add to vecDevices public void deviceDiscovered(RemoteDevice btDevice, DeviceClass cod) { //add the device to the vector try { if(!vecDevices.contains(btDevice) && btDevice.getFriendlyName(true).equals("serverHugi")){ vecDevices.addElement(btDevice); } } catch( IOException e ) { } } public synchronized void servicesDiscovered(int transID, ServiceRecord[] servRecord) { //for each service create connection if( servRecord!=null && servRecord.length>0 ){ print( "Service found" ); connectionURL = servRecord[0].getConnectionURL(ServiceRecord.NOAUTHENTICATE_NOENCRYPT,false); //connectionURL = servRecord[0].getConnectionURL(ServiceRecord.AUTHENTICATE_NOENCRYPT,false); } if ( connectionURL != null ) { showVoteList(); } } public void serviceSearchCompleted(int transID, int respCode) { //print( "serviceSearchCompleted" ); synchronized(lock){ lock.notify(); } } //This callback method will be called when the device discovery is completed. 
public void inquiryCompleted(int discType) { synchronized(lock){ lock.notify(); } switch (discType) { case DiscoveryListener.INQUIRY_COMPLETED : print("INQUIRY_COMPLETED"); break; case DiscoveryListener.INQUIRY_TERMINATED : print("INQUIRY_TERMINATED"); break; case DiscoveryListener.INQUIRY_ERROR : print("INQUIRY_ERROR"); break; default : print("Unknown Response Code"); break; } } //add message at form public void print( String msg ) { form.append( msg ); form.append( "\n\n" ); } public void pauseApp() { } public void destroyApp(boolean unconditional) { } //get agent :))) private DiscoveryAgent getAgent() { try { return LocalDevice.getLocalDevice().getDiscoveryAgent(); } catch (BluetoothStateException e) { throw new Error(e.getMessage()); } } private synchronized String getMessage( final String send ) { StreamConnection stream = null; DataInputStream in = null; DataOutputStream out = null; String r = null; try { //open connection stream = (StreamConnection) Connector.open(connectionURL); in = stream.openDataInputStream(); out = stream.openDataOutputStream(); out.writeUTF( send ); out.flush(); r = in.readUTF(); print( r ); in.close(); out.close(); stream.close(); return r; } catch (IOException e) { } finally { if (stream != null) { try { stream.close(); } catch (IOException e) { } } return r; } } private synchronized void showVoteList() { String votes = getMessage( "c_getVotes" ); voteList.append( votes, null ); select = new Command( "Select", Command.OK, 4 ); voteList.addCommand( select ); voteList.setCommandListener( this ); Display.getDisplay(this).setCurrent(voteList); } private synchronized void showVote( int index ) { String title = getMessage( "c_getVote_"+index ); vote.setTitle( title ); vote.append( "Yes", null ); vote.append( "No", null ); vote.setCommandListener( this ); Display.getDisplay(this).setCurrent(vote); } public void commandAction( Command c, Displayable d ) { if ( c == select && d == voteList ) { int index = voteList.getSelectedIndex(); print( ""+index ); showVote( index ); } } } Use BlueCove in this program. Server /* * To change this template, choose Tools | Templates * and open the template in the editor. */ package javaapplication4; import java.io.*; import java.util.concurrent.locks.Lock; import java.util.concurrent.locks.ReentrantLock; import javax.bluetooth.*; import javax.microedition.io.*; import javaapplication4.Connect; /** * * @author ????????????? 
*/ public class SampleSPPServer { protected static int endToken = 255; private static Lock lock=new ReentrantLock(); private static StreamConnection conn = null; private static StreamConnectionNotifier streamConnNotifier = null; private void startServer() throws IOException{ //Create a UUID for SPP UUID uuid = new UUID("1101", true); //Create the service url String connectionString = "btspp://localhost:" + uuid +";name=Sample SPP Server"; //open server url StreamConnectionNotifier streamConnNotifier = (StreamConnectionNotifier)Connector.open( connectionString ); while ( true ) { Connect ct = new Connect( streamConnNotifier.acceptAndOpen() ); ct.getMessage(); } } /** * @param args the command line arguments */ public static void main(String[] args) { //display local device address and name try { LocalDevice localDevice = LocalDevice.getLocalDevice(); localDevice.setDiscoverable(DiscoveryAgent.GIAC); System.out.println("Name: "+localDevice.getFriendlyName()); } catch( Throwable e ) { e.printStackTrace(); } SampleSPPServer sampleSPPServer=new SampleSPPServer(); try { //start server sampleSPPServer.startServer(); } catch( IOException e ) { e.printStackTrace(); } } } Connect /* * To change this template, choose Tools | Templates * and open the template in the editor. */ package javaapplication4; import java.io.*; import java.util.concurrent.locks.Lock; import java.util.concurrent.locks.ReentrantLock; import javax.bluetooth.*; import javax.microedition.io.*; /** * * @author ????????????? */ public class Connect { private static DataInputStream in = null; private static DataOutputStream out = null; private static StreamConnection connection = null; private static Lock lock=new ReentrantLock(); public Connect( StreamConnection conn ) { connection = conn; } public synchronized void getMessage( ) { Thread t = new Thread() { public void run() { try { in = connection.openDataInputStream(); out = connection.openDataOutputStream(); String r = in.readUTF(); System.out.println("read:" + r); if ( r.equals( "c_getVotes" ) ) { out.writeUTF( "vote1" ); out.flush(); } if ( r.equals( "c_getVote_0" ) ) { out.writeUTF( "Vote1" ); out.flush(); } out.close(); in.close(); } catch (Throwable e) { } finally { if (in != null) { try { in.close(); } catch (IOException e) { } } try { connection.close(); } catch( IOException e ) { } } } }; t.start(); } }
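    One way to narrow down whether the problem is in service discovery or in the stream handling is to let the stack resolve the SPP service URL in a single call and print it on the client, then compare it with the URL the server believes it is listening on. This is a hypothetical debugging sketch (desktop/BlueCove style, reusing the 0x1101 UUID from the code above), not part of the original MIDlet:

    import javax.bluetooth.DiscoveryAgent;
    import javax.bluetooth.LocalDevice;
    import javax.bluetooth.ServiceRecord;
    import javax.bluetooth.UUID;
    import javax.microedition.io.Connector;
    import javax.microedition.io.StreamConnection;

    // Hypothetical debugging sketch: resolve the SPP service URL in one call and
    // print it, so the client-side and server-side URLs can be compared.
    public class SppProbe {
        public static void main(String[] args) throws Exception {
            DiscoveryAgent agent = LocalDevice.getLocalDevice().getDiscoveryAgent();

            // 0x1101 is the Serial Port Profile UUID used by the server in the question.
            String url = agent.selectService(new UUID(0x1101),
                    ServiceRecord.NOAUTHENTICATE_NOENCRYPT, false);
            System.out.println("Resolved service URL: " + url);

            if (url != null) {
                // If this open() succeeds but the server's acceptAndOpen() never returns,
                // the phone is likely connecting to a different SPP channel (for example,
                // the adapter's own COM port) rather than the Java server's channel.
                StreamConnection conn = (StreamConnection) Connector.open(url);
                conn.close();
            }
        }
    }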

    Read the article

  • PHP - My array returns NULL values when placed in a function, but works fine outside of the function

    - by orbit82
    Okay, let me see if I can explain this. I am making a newspaper WordPress theme. The theme pulls posts from categories. The front page shows multiple categories, organized as "newsboxes". Each post should show up only ONCE on the front page, even if said post is in two or more categories. To prevent posts from duplicating on the front page, I've created an array that keeps track of the individual post IDs. When a post FIRST shows up on the front page, its ID gets added to the array. Before looping through the posts for each category, the code first checks the array to see which posts have ALREADY been displayed. OK, so now remember how I said earlier that the front page shows multiple categories organized as "newsboxes"? Well, these newsboxes are called onto the front page using PHP includes. I have 6 newsboxes appearing on the front page, and the code to call them is EXACTLY the same. I didn't want to repeat the same code 6 times, so I put all of the inclusion code into a function. The function works, but the only problem is that it screws up the duplicate posts code I mentioned earlier. The posts all repeat. Running a var_dump on the $do_not_duplicate variable returns an array with null indices. Everything works PERFECTLY if I don't put the code inside a function, but once I do put them in a function it's like the arrays aren't even connecting with the posts. Here is the code with the arrays. The key variables in question here include $do_not_duplicate[] = $post-ID, $do_not_duplicate and 'post__not_in' = $do_not_duplicate <?php query_posts('cat='.$settings['cpress_top_story_category'].'&posts_per_page='.$settings['cpress_number_of_top_stories'].'');?> <?php if (have_posts()) : ?> <!--TOP STORY--> <div id="topStory"> <?php while ( have_posts() ) : the_post(); $do_not_duplicate[] = $post->ID; ?> <a href="<?php the_permalink() ?>" rel="bookmark" title="Permanent Link to <?php the_title_attribute(); ?>"><?php the_post_thumbnail('top-story-thumbnail'); ?></a> <h2 class="extraLargeHeadline"><a href="<?php the_permalink() ?>" rel="bookmark" title="Permanent Link to <?php the_title_attribute(); ?>"><?php the_title(); ?></a></h2> <div class="topStory_author"><?php cpress_show_post_author_byline(); ?></div> <div <?php post_class('topStory_entry') ?> id="post-<?php the_ID(); ?>"> <?php if($settings['cpress_excerpt_or_content_top_story_newsbox'] == "content") { the_content(); ?><a href="<?php the_permalink(); ?>" title="<?php the_title_attribute(); ?>"><span class="read_more"><?php echo $settings['cpress_more_text']; ?></span></a> <?php } else { the_excerpt(); ?><a href="<?php the_permalink(); ?>" title="<?php the_title_attribute(); ?>"><span class="read_more"><?php echo $settings['cpress_more_text']; ?></span></a> <?php }?> </div><!--/topStoryentry--> <div class="topStory_meta"><?php cpress_show_post_meta(); ?></div> <?php endwhile; wp_reset_query(); ?> <?php if(!$settings['cpress_hide_top_story_more_stories']) { ?> <!--More Top Stories--><div id="moreTopStories"> <?php $category_link = get_category_link(''.$settings['cpress_top_story_category'].''); ?> <?php if (have_posts()) : ?> <?php query_posts( array( 'cat' => ''.$settings['cpress_top_story_category'].'', 'posts_per_page' => ''.$settings['cpress_number_of_more_top_stories'].'', 'post__not_in' => $do_not_duplicate ) ); ?> <h4 class="moreStories"> <?php if($settings['cpress_make_top_story_more_stories_link']) { ?> <a href="<?php echo $category_link; ?>" title="<?php echo strip_tags($settings['cpress_top_story_more_stories_text']);?>"><?php 
echo strip_tags($settings['cpress_top_story_more_stories_text']);?></a><?php } else { echo strip_tags($settings['cpress_top_story_more_stories_text']); } ?> </h4> <ul> <?php while( have_posts() ) : the_post(); $do_not_duplicate[] = $post->ID; ?> <li><h2 class="mediumHeadline"><a href="<?php the_permalink() ?>" rel="bookmark" title="Permanent Link to <?php the_title_attribute(); ?>"><?php the_title(); ?></a></h2> <?php if(!$settings['cpress_hide_more_top_stories_excerpt']) { ?> <div <?php post_class('moreTopStory_postExcerpt') ?> id="post-<?php the_ID(); ?>"><?php if($settings['cpress_excerpt_or_content_top_story_newsbox'] == "content") { the_content(); ?><a href="<?php the_permalink(); ?>" title="<?php the_title_attribute(); ?>"><span class="read_more"><?php echo $settings['cpress_more_text']; ?></span></a> <?php } else { the_excerpt(); ?> <a href="<?php the_permalink(); ?>" title="<?php the_title_attribute(); ?>"><span class="read_more"><?php echo $settings['cpress_more_text']; ?></span></a> <?php }?> </div><?php } ?> <div class="moreTopStory_postMeta"><?php cpress_show_post_meta(); ?></div> </li> <?php endwhile; wp_reset_query(); ?> </ul> <?php endif;?> </div><!--/moreTopStories--> <?php } ?> <?php echo(var_dump($do_not_duplicate)); ?> </div><!--/TOP STORY--> <?php endif; ?> And here is the code that includes the newsboxes onto the front page. This is the code I'm trying to put into a function to avoid duplicating it 6 times on one page. function cpress_show_templatebitsf($tbit_num, $tbit_option) { global $tbit_path; global $shortname; $settings = get_option($shortname.'_options'); //display the templatebits (usually these will be sidebars) for ($i=1; $i<=$tbit_num; $i++) { $tbit = strip_tags($settings[$tbit_option .$i]); if($tbit !="") { include_once(TEMPLATEPATH . $tbit_path. $tbit.'.php'); } //if }//for loop unset($tbit_option); } I hope this makes sense. It's kind of a complex thing to explain but I've tried many things to fix it and have had no luck. I'm stumped. I'm hoping it's just some little thing I'm overlooking because it seems like it shouldn't be such a problem.

    Read the article

  • NLog Exception Details Renderer

    - by jtimperley
    Originally posted on: http://geekswithblogs.net/jtimperley/archive/2013/07/28/nlog-exception-details-renderer.aspxI recently switch from Microsoft's Enterprise Library Logging block to NLog.  In my opinion, NLog offers a simpler and much cleaner configuration section with better use of placeholders, complemented by custom variables. Despite this, I found one deficiency in my migration; I had lost the ability to simply render all details of an exception into our logs and notification emails. This is easily remedied by implementing a custom layout renderer. Start by extending 'NLog.LayoutRenderers.LayoutRenderer' and overriding the 'Append' method. using System.Text; using NLog; using NLog.Config; using NLog.LayoutRenderers;   [ThreadAgnostic] [LayoutRenderer(Name)] public class ExceptionDetailsRenderer : LayoutRenderer { public const string Name = "exceptiondetails";   protected override void Append(StringBuilder builder, LogEventInfo logEvent) { // Todo: Append details to StringBuilder } }   Now that we have a base layout renderer, we simply need to add the formatting logic to add exception details as well as inner exception details. This is done using reflection with some simple filtering for the properties that are already being rendered. I have added an additional 'Register' method, allowing the definition to be registered in code, rather than in configuration files. This complements by 'LogWrapper' class which standardizes writing log entries throughout my applications. using System; using System.Collections.Generic; using System.Linq; using System.Reflection; using System.Text; using NLog; using NLog.Config; using NLog.LayoutRenderers;   [ThreadAgnostic] [LayoutRenderer(Name)] public sealed class ExceptionDetailsRenderer : LayoutRenderer { public const string Name = "exceptiondetails"; private const string _Spacer = "======================================"; private List<string> _FilteredProperties;   private List<string> FilteredProperties { get { if (_FilteredProperties == null) { _FilteredProperties = new List<string> { "StackTrace", "HResult", "InnerException", "Data" }; }   return _FilteredProperties; } }   public bool LogNulls { get; set; }   protected override void Append(StringBuilder builder, LogEventInfo logEvent) { Append(builder, logEvent.Exception, false); }   private void Append(StringBuilder builder, Exception exception, bool isInnerException) { if (exception == null) { return; }   builder.AppendLine();   var type = exception.GetType(); if (isInnerException) { builder.Append("Inner "); }   builder.AppendLine("Exception Details:") .AppendLine(_Spacer) .Append("Exception Type: ") .AppendLine(type.ToString());   var bindingFlags = BindingFlags.Instance | BindingFlags.Public; var properties = type.GetProperties(bindingFlags); foreach (var property in properties) { var propertyName = property.Name; var isFiltered = FilteredProperties.Any(filter => String.Equals(propertyName, filter, StringComparison.InvariantCultureIgnoreCase)); if (isFiltered) { continue; }   var propertyValue = property.GetValue(exception, bindingFlags, null, null, null); if (propertyValue == null && !LogNulls) { continue; }   var valueText = propertyValue != null ? 
propertyValue.ToString() : "NULL"; builder.Append(propertyName) .Append(": ") .AppendLine(valueText); }   AppendStackTrace(builder, exception.StackTrace, isInnerException); Append(builder, exception.InnerException, true); }   private void AppendStackTrace(StringBuilder builder, string stackTrace, bool isInnerException) { if (String.IsNullOrEmpty(stackTrace)) { return; }   builder.AppendLine();   if (isInnerException) { builder.Append("Inner "); }   builder.AppendLine("Exception StackTrace:") .AppendLine(_Spacer) .AppendLine(stackTrace); }   public static void Register() { Type definitionType; var layoutRenderers = ConfigurationItemFactory.Default.LayoutRenderers; if (layoutRenderers.TryGetDefinition(Name, out definitionType)) { return; }   layoutRenderers.RegisterDefinition(Name, typeof(ExceptionDetailsRenderer)); LogManager.ReconfigExistingLoggers(); } } For brevity I have removed the Trace, Debug, Warn, and Fatal methods. They are modelled after the Info methods. As mentioned above, note how the log wrapper automatically registers our custom layout renderer reducing the amount of application configuration required. using System; using NLog;   public static class LogWrapper { static LogWrapper() { ExceptionDetailsRenderer.Register(); }   #region Log Methods   public static void Info(object toLog) { Log(toLog, LogLevel.Info); }   public static void Info(string messageFormat, params object[] parameters) { Log(messageFormat, parameters, LogLevel.Info); }   public static void Error(object toLog) { Log(toLog, LogLevel.Error); }   public static void Error(string message, Exception exception) { Log(message, exception, LogLevel.Error); }   private static void Log(string messageFormat, object[] parameters, LogLevel logLevel) { string message = parameters.Length == 0 ? messageFormat : string.Format(messageFormat, parameters); Log(message, (Exception)null, logLevel); }   private static void Log(object toLog, LogLevel logLevel, LogType logType = LogType.General) { if (toLog == null) { throw new ArgumentNullException("toLog"); }   if (toLog is Exception) { var exception = toLog as Exception; Log(exception.Message, exception, logLevel, logType); } else { var message = toLog.ToString(); Log(message, null, logLevel, logType); } }   private static void Log(string message, Exception exception, LogLevel logLevel, LogType logType = LogType.General) { if (exception == null && String.IsNullOrEmpty(message)) { return; }   var logger = GetLogger(logType); // Note: Using the default constructor doesn't set the current date/time var logInfo = new LogEventInfo(logLevel, logger.Name, message); logInfo.Exception = exception; logger.Log(logInfo); }   private static Logger GetLogger(LogType logType) { var loggerName = logType.ToString(); return LogManager.GetLogger(loggerName); }   #endregion   #region LogType private enum LogType { General } #endregion } The following configuration is similar to what is provided for each of my applications. The 'application' variable is all that differentiates the various applications in all of my environments, the rest has been standardized. Depending on your needs to tweak this configuration while developing and debugging, this section could easily be pushed back into code similar to the registering of our custom layout renderer.   
<?xml version="1.0"?>   <configuration> <configSections> <section name="nlog" type="NLog.Config.ConfigSectionHandler, NLog"/> </configSections> <nlog xmlns="http://www.nlog-project.org/schemas/NLog.xsd" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"> <variable name="application" value="Example"/> <targets> <target type="EventLog" name="EventLog" source="${application}" log="${application}" layout="${message}${onexception: ${newline}${exceptiondetails}}"/> <target type="Mail" name="Email" smtpServer="smtp.example.local" from="[email protected]" to="[email protected]" subject="(${machinename}) ${application}: ${level}" body="Machine: ${machinename}${newline}Timestamp: ${longdate}${newline}Level: ${level}${newline}Message: ${message}${onexception: ${newline}${exceptiondetails}}"/> </targets> <rules> <logger name="*" minlevel="Debug" writeTo="EventLog" /> <logger name="*" minlevel="Error" writeTo="Email" /> </rules> </nlog> </configuration>   Now go forward, create your custom exceptions without concern for including their custom properties in your exception logs and notifications.
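    For readers outside .NET, the core of the renderer shown above is just "append this exception's details, then recurse into its inner exception". Here is the same structure as a small self-contained Java sketch (an illustration of the idea only, not NLog code):

    import java.io.PrintWriter;
    import java.io.StringWriter;

    // Illustration only (plain Java, not NLog): the same "append this exception's
    // details, then recurse into its cause" structure used by the renderer above.
    public final class ExceptionDetails {

        private static final String SPACER = "======================================";

        public static String render(Throwable t) {
            StringBuilder sb = new StringBuilder();
            append(sb, t, false);
            return sb.toString();
        }

        private static void append(StringBuilder sb, Throwable t, boolean isCause) {
            if (t == null) {
                return;
            }
            sb.append(isCause ? "Inner Exception Details:" : "Exception Details:").append('\n')
              .append(SPACER).append('\n')
              .append("Exception Type: ").append(t.getClass().getName()).append('\n')
              .append("Message: ").append(t.getMessage()).append('\n');

            // Stack trace as text, mirroring the AppendStackTrace helper above.
            StringWriter sw = new StringWriter();
            t.printStackTrace(new PrintWriter(sw));
            sb.append(SPACER).append('\n').append(sw.toString()).append('\n');

            append(sb, t.getCause(), true); // recurse, just like the C# version recurses into InnerException
        }

        public static void main(String[] args) {
            Exception inner = new IllegalStateException("inner failure");
            System.out.println(render(new RuntimeException("outer failure", inner)));
        }
    }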

    Read the article

  • Maintaining shared service in ASP.NET MVC Application

    - by kazimanzurrashid
    Depending on the application sometimes we have to maintain some shared service throughout our application. Let’s say you are developing a multi-blog supported blog engine where both the controller and view must know the currently visiting blog, it’s setting , user information and url generation service. In this post, I will show you how you can handle this kind of case in most convenient way. First, let see the most basic way, we can create our PostController in the following way: public class PostController : Controller { public PostController(dependencies...) { } public ActionResult Index(string blogName, int? page) { BlogInfo blog = blogSerivce.FindByName(blogName); if (blog == null) { return new NotFoundResult(); } IEnumerable<PostInfo> posts = postService.FindPublished(blog.Id, PagingCalculator.StartIndex(page, blog.PostPerPage), blog.PostPerPage); int count = postService.GetPublishedCount(blog.Id); UserInfo user = null; if (HttpContext.User.Identity.IsAuthenticated) { user = userService.FindByName(HttpContext.User.Identity.Name); } return View(new IndexViewModel(urlResolver, user, blog, posts, count, page)); } public ActionResult Archive(string blogName, int? page, ArchiveDate archiveDate) { BlogInfo blog = blogSerivce.FindByName(blogName); if (blog == null) { return new NotFoundResult(); } IEnumerable<PostInfo> posts = postService.FindArchived(blog.Id, archiveDate, PagingCalculator.StartIndex(page, blog.PostPerPage), blog.PostPerPage); int count = postService.GetArchivedCount(blog.Id, archiveDate); UserInfo user = null; if (HttpContext.User.Identity.IsAuthenticated) { user = userService.FindByName(HttpContext.User.Identity.Name); } return View(new ArchiveViewModel(urlResolver, user, blog, posts, count, page, achiveDate)); } public ActionResult Tag(string blogName, string tagSlug, int? page) { BlogInfo blog = blogSerivce.FindByName(blogName); if (blog == null) { return new NotFoundResult(); } TagInfo tag = tagService.FindBySlug(blog.Id, tagSlug); if (tag == null) { return new NotFoundResult(); } IEnumerable<PostInfo> posts = postService.FindPublishedByTag(blog.Id, tag.Id, PagingCalculator.StartIndex(page, blog.PostPerPage), blog.PostPerPage); int count = postService.GetPublishedCountByTag(tag.Id); UserInfo user = null; if (HttpContext.User.Identity.IsAuthenticated) { user = userService.FindByName(HttpContext.User.Identity.Name); } return View(new TagViewModel(urlResolver, user, blog, posts, count, page, tag)); } } As you can see the above code heavily depends upon the current blog and the blog retrieval code is duplicated in all of the action methods, once the blog is retrieved the same blog is passed in the view model. Other than the blog the view also needs the current user and url resolver to render it properly. One way to remove the duplicate blog retrieval code is to create a custom model binder which converts the blog from a blog name and use the blog a parameter in the action methods instead of the string blog name, but it only helps the first half in the above scenario, the action methods still have to pass the blog, user and url resolver etc in the view model. 
Now lets try to improve the the above code, first lets create a new class which would contain the shared services, lets name it as BlogContext: public class BlogContext { public BlogInfo Blog { get; set; } public UserInfo User { get; set; } public IUrlResolver UrlResolver { get; set; } } Next, we will create an interface, IContextAwareService: public interface IContextAwareService { BlogContext Context { get; set; } } The idea is, whoever needs these shared services needs to implement this interface, in our case both the controller and the view model, now we will create an action filter which will be responsible for populating the context: public class PopulateBlogContextAttribute : FilterAttribute, IActionFilter { private static string blogNameRouteParameter = "blogName"; private readonly IBlogService blogService; private readonly IUserService userService; private readonly BlogContext context; public PopulateBlogContextAttribute(IBlogService blogService, IUserService userService, IUrlResolver urlResolver) { Invariant.IsNotNull(blogService, "blogService"); Invariant.IsNotNull(userService, "userService"); Invariant.IsNotNull(urlResolver, "urlResolver"); this.blogService = blogService; this.userService = userService; context = new BlogContext { UrlResolver = urlResolver }; } public static string BlogNameRouteParameter { [DebuggerStepThrough] get { return blogNameRouteParameter; } [DebuggerStepThrough] set { blogNameRouteParameter = value; } } public void OnActionExecuting(ActionExecutingContext filterContext) { string blogName = (string) filterContext.Controller.ValueProvider.GetValue(BlogNameRouteParameter).ConvertTo(typeof(string), Culture.Current); if (!string.IsNullOrWhiteSpace(blogName)) { context.Blog = blogService.FindByName(blogName); } if (context.Blog == null) { filterContext.Result = new NotFoundResult(); return; } if (filterContext.HttpContext.User.Identity.IsAuthenticated) { context.User = userService.FindByName(filterContext.HttpContext.User.Identity.Name); } IContextAwareService controller = filterContext.Controller as IContextAwareService; if (controller != null) { controller.Context = context; } } public void OnActionExecuted(ActionExecutedContext filterContext) { Invariant.IsNotNull(filterContext, "filterContext"); if ((filterContext.Exception == null) || filterContext.ExceptionHandled) { IContextAwareService model = filterContext.Controller.ViewData.Model as IContextAwareService; if (model != null) { model.Context = context; } } } } As you can see we are populating the context in the OnActionExecuting, which executes just before the controllers action methods executes, so by the time our action methods executes the context is already populated, next we are are assigning the same context in the view model in OnActionExecuted method which executes just after we set the  model and return the view in our action methods. Now, lets change the view models so that it implements this interface: public class IndexViewModel : IContextAwareService { // More Codes } public class ArchiveViewModel : IContextAwareService { // More Codes } public class TagViewModel : IContextAwareService { // More Codes } and the controller: public class PostController : Controller, IContextAwareService { public PostController(dependencies...) { } public BlogContext Context { get; set; } public ActionResult Index(int? 
page) { IEnumerable<PostInfo> posts = postService.FindPublished(Context.Blog.Id, PagingCalculator.StartIndex(page, Context.Blog.PostPerPage), Context.Blog.PostPerPage); int count = postService.GetPublishedCount(Context.Blog.Id); return View(new IndexViewModel(posts, count, page)); } public ActionResult Archive(int? page, ArchiveDate archiveDate) { IEnumerable<PostInfo> posts = postService.FindArchived(Context.Blog.Id, archiveDate, PagingCalculator.StartIndex(page, Context.Blog.PostPerPage), Context.Blog.PostPerPage); int count = postService.GetArchivedCount(Context.Blog.Id, archiveDate); return View(new ArchiveViewModel(posts, count, page, achiveDate)); } public ActionResult Tag(string blogName, string tagSlug, int? page) { TagInfo tag = tagService.FindBySlug(Context.Blog.Id, tagSlug); if (tag == null) { return new NotFoundResult(); } IEnumerable<PostInfo> posts = postService.FindPublishedByTag(Context.Blog.Id, tag.Id, PagingCalculator.StartIndex(page, Context.Blog.PostPerPage), Context.Blog.PostPerPage); int count = postService.GetPublishedCountByTag(tag.Id); return View(new TagViewModel(posts, count, page, tag)); } } Now, the last thing where we have to glue everything, I will be using the AspNetMvcExtensibility to register the action filter (as there is no better way to inject the dependencies in action filters). public class RegisterFilters : RegisterFiltersBase { private static readonly Type controllerType = typeof(Controller); private static readonly Type contextAwareType = typeof(IContextAwareService); protected override void Register(IFilterRegistry registry) { TypeCatalog controllers = new TypeCatalogBuilder() .Add(GetType().Assembly) .Include(type => controllerType.IsAssignableFrom(type) && contextAwareType.IsAssignableFrom(type)); registry.Register<PopulateBlogContextAttribute>(controllers); } } Thoughts and Comments?
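    Framework specifics aside, the pattern boils down to this: anything that implements IContextAwareService gets the shared context pushed into it before it runs, so neither the action nor the view model has to look it up. Below is a minimal plain-Java sketch of that idea (all names are invented for illustration; there is no ASP.NET or dependency injection involved):

    // Language-neutral sketch of the same pattern: whatever implements ContextAware
    // gets the shared context pushed into it before it runs.
    public class ContextDemo {

        static final class BlogContext {
            final String blogName;
            final String userName;
            BlogContext(String blogName, String userName) {
                this.blogName = blogName;
                this.userName = userName;
            }
        }

        interface ContextAware {
            void setContext(BlogContext context);
        }

        // Plays the role of the action filter: builds the context once per "request"
        // and hands it to the handler before invoking it.
        static void dispatch(String blogName, String userName, Runnable handler) {
            if (handler instanceof ContextAware) {
                ((ContextAware) handler).setContext(new BlogContext(blogName, userName));
            }
            handler.run();
        }

        static final class PostHandler implements Runnable, ContextAware {
            private BlogContext context;
            public void setContext(BlogContext context) { this.context = context; }
            public void run() {
                // No per-action lookup code: the context is already populated.
                System.out.println("Listing posts for blog '" + context.blogName
                        + "' viewed by " + context.userName);
            }
        }

        public static void main(String[] args) {
            dispatch("coding-blog", "kazi", new PostHandler());
        }
    }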

    Read the article

  • Returning Identity Value in SQL Server: @@IDENTITY Vs SCOPE_IDENTITY Vs IDENT_CURRENT

    - by Arefin Ali
    There are some common misconceptions about returning the last inserted identity value from a table. To return it, we can use the @@IDENTITY, SCOPE_IDENTITY or IDENT_CURRENT function depending on the requirement, but it becomes a real mess if any of these functions is used without knowing its exact purpose. So here I want to share my thoughts on this. @@IDENTITY, SCOPE_IDENTITY and IDENT_CURRENT are similar functions in that they all return values that were inserted into an identity column. Back in SQL Server 7 we used @@IDENTITY to return the last inserted identity value, because in those days SCOPE_IDENTITY and IDENT_CURRENT did not exist; now we have all three. So let’s check which one is responsible for what. IDENT_CURRENT returns the last identity value inserted into a particular table. It does not depend on the connection or on the scope of the insert statement, and it takes a table name as its parameter. Here is the syntax to get the last inserted identity value for a particular table using IDENT_CURRENT: SELECT IDENT_CURRENT('Employee') Both @@IDENTITY and SCOPE_IDENTITY return the last identity value created in any table in the current session. The difference between the two is that SCOPE_IDENTITY returns only values inserted within the current scope, whereas @@IDENTITY is not limited to any particular scope. Here are the syntaxes to get the last inserted identity value using these functions: SELECT @@IDENTITY SELECT SCOPE_IDENTITY() Now let’s look at the following example. Suppose I have two tables called Employee and EmployeeLog. CREATE TABLE Employee ( EmpId NUMERIC(18, 0) IDENTITY(1,1) NOT NULL, EmpName VARCHAR(100) NOT NULL, EmpSal FLOAT NOT NULL, DateOfJoining DATETIME NOT NULL DEFAULT(GETDATE()) ) CREATE TABLE EmployeeLog ( EmpId NUMERIC(18, 0) IDENTITY(1,1) NOT NULL, EmpName VARCHAR(100) NOT NULL, EmpSal FLOAT NOT NULL, DateOfJoining DATETIME NOT NULL DEFAULT(GETDATE()) ) I have an insert trigger defined on the Employee table which inserts a new record into EmployeeLog whenever a record is inserted into Employee. Suppose I insert a new record into the Employee table using the following statement: INSERT INTO Employee (EmpName,EmpSal) VALUES ('Arefin','1') The trigger fires automatically and inserts a record into EmployeeLog. Here the scope of the insert statement and the scope of the trigger are different. In this situation, if I retrieve the last inserted identity value using @@IDENTITY, it will simply return the identity value from EmployeeLog, because it is not limited to a particular scope. If I want the Employee table’s identity value, I need to use SCOPE_IDENTITY in this scenario. So the moral is: always use SCOPE_IDENTITY to return the identity value of a record just created in a SQL statement or stored procedure. It is safe and helps ensure bug-free code.
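    The same concern shows up at the application layer. With JDBC, for example, the usual approach is to ask the driver for the key generated by the specific insert rather than querying @@IDENTITY afterwards. A hedged sketch (the connection URL, credentials and database name are placeholders; the Employee table and trigger are the ones defined above):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.Statement;

    // Hedged sketch of the application-side view; URL, user and password are placeholders.
    public class InsertEmployee {
        public static void main(String[] args) throws Exception {
            String url = "jdbc:sqlserver://localhost;databaseName=TestDb;encrypt=false";

            try (Connection con = DriverManager.getConnection(url, "user", "password");
                 PreparedStatement ps = con.prepareStatement(
                         "INSERT INTO Employee (EmpName, EmpSal) VALUES (?, ?)",
                         Statement.RETURN_GENERATED_KEYS)) {

                ps.setString(1, "Arefin");
                ps.setDouble(2, 1.0);
                ps.executeUpdate();

                // getGeneratedKeys() asks the driver for the key produced by this
                // statement. Driver behaviour around tables with triggers varies, so
                // when in doubt run "INSERT ...; SELECT SCOPE_IDENTITY();" as a single
                // batch and read the value back explicitly, as the article recommends.
                try (ResultSet keys = ps.getGeneratedKeys()) {
                    if (keys.next()) {
                        System.out.println("New EmpId: " + keys.getLong(1));
                    }
                }
            }
        }
    }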

    Read the article

  • SQL SERVER – Shrinking Database is Bad – Increases Fragmentation – Reduces Performance

    - by pinaldave
    Earlier, I had written two articles related to Shrinking Database. I wrote about why Shrinking Database is not good. SQL SERVER – SHRINKDATABASE For Every Database in the SQL Server SQL SERVER – What the Business Says Is Not What the Business Wants I received many comments on Why Database Shrinking is bad. Today we will go over a very interesting example that I have created for the same. Here are the quick steps of the example. Create a test database Create two tables and populate with data Check the size of both the tables Size of database is very low Check the Fragmentation of one table Fragmentation will be very low Truncate another table Check the size of the table Check the fragmentation of the one table Fragmentation will be very low SHRINK Database Check the size of the table Check the fragmentation of the one table Fragmentation will be very HIGH REBUILD index on one table Check the size of the table Size of database is very HIGH Check the fragmentation of the one table Fragmentation will be very low Here is the script for the same. USE MASTER GO CREATE DATABASE ShrinkIsBed GO USE ShrinkIsBed GO -- Name of the Database and Size SELECT name, (size*8) Size_KB FROM sys.database_files GO -- Create FirstTable CREATE TABLE FirstTable (ID INT, FirstName VARCHAR(100), LastName VARCHAR(100), City VARCHAR(100)) GO -- Create Clustered Index on ID CREATE CLUSTERED INDEX [IX_FirstTable_ID] ON FirstTable ( [ID] ASC ) ON [PRIMARY] GO -- Create SecondTable CREATE TABLE SecondTable (ID INT, FirstName VARCHAR(100), LastName VARCHAR(100), City VARCHAR(100)) GO -- Create Clustered Index on ID CREATE CLUSTERED INDEX [IX_SecondTable_ID] ON SecondTable ( [ID] ASC ) ON [PRIMARY] GO -- Insert One Hundred Thousand Records INSERT INTO FirstTable (ID,FirstName,LastName,City) SELECT TOP 100000 ROW_NUMBER() OVER (ORDER BY a.name) RowID, 'Bob', CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%2 = 1 THEN 'Smith' ELSE 'Brown' END, CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 1 THEN 'New York' WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 5 THEN 'San Marino' WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 3 THEN 'Los Angeles' ELSE 'Houston' END FROM sys.all_objects a CROSS JOIN sys.all_objects b GO -- Name of the Database and Size SELECT name, (size*8) Size_KB FROM sys.database_files GO -- Insert One Hundred Thousand Records INSERT INTO SecondTable (ID,FirstName,LastName,City) SELECT TOP 100000 ROW_NUMBER() OVER (ORDER BY a.name) RowID, 'Bob', CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%2 = 1 THEN 'Smith' ELSE 'Brown' END, CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 1 THEN 'New York' WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 5 THEN 'San Marino' WHEN ROW_NUMBER() OVER (ORDER BY a.name)%10 = 3 THEN 'Los Angeles' ELSE 'Houston' END FROM sys.all_objects a CROSS JOIN sys.all_objects b GO -- Name of the Database and Size SELECT name, (size*8) Size_KB FROM sys.database_files GO -- Check Fragmentations in the database SELECT avg_fragmentation_in_percent, fragment_count FROM sys.dm_db_index_physical_stats (DB_ID(), OBJECT_ID('SecondTable'), NULL, NULL, 'LIMITED') GO Let us check the table size and fragmentation. Now let us TRUNCATE the table and check the size and Fragmentation. 
You can clearly see that after TRUNCATE, the size of the database is not reduced; it is still the same as before the TRUNCATE operation. After the shrink operation, we were able to reduce the size of the database, but if you notice the fragmentation, it is considerably high. The major problem with the shrink operation is that it increases fragmentation of the database to a very high value. Higher fragmentation reduces the performance of the database, as reading from that particular table becomes very expensive. One of the ways to reduce the fragmentation is to rebuild the index. Let us rebuild the index and observe fragmentation and database size. -- Rebuild Index on SecondTable ALTER INDEX IX_SecondTable_ID ON SecondTable REBUILD GO -- Name of the Database and Size SELECT name, (size*8) Size_KB FROM sys.database_files GO -- Check Fragmentations in the database SELECT avg_fragmentation_in_percent, fragment_count FROM sys.dm_db_index_physical_stats (DB_ID(), OBJECT_ID('SecondTable'), NULL, NULL, 'LIMITED') GO You can notice that after rebuilding, fragmentation drops back to a very low value (almost the same as the original value); however, the database size grows well beyond the original. Before rebuilding, the size of the database was 5 MB, and after rebuilding it is around 20 MB. An index rebuild is done in the same user database where the index is placed, and this usually increases the size of the database. Look at the irony of shrinking the database.
One person shrinks the database to gain space (thinking it will help performance), which leads to an increase in fragmentation (reducing performance). To reduce the fragmentation, one rebuilds the index, which causes the database to grow well beyond its original size (before shrinking). So, by shrinking, one usually does not gain what one was looking for. Rebuilding the index is not the best suggestion either, as that makes the database grow again. I have always remembered the excellent post from Paul Randal on why shrinking the database is bad, and I suggest everyone read it for accuracy and an interesting conversation. Let us run the following script, where we shrink the database and then REORGANIZE. -- Name of the Database and Size SELECT name, (size*8) Size_KB FROM sys.database_files GO -- Check Fragmentations in the database SELECT avg_fragmentation_in_percent, fragment_count FROM sys.dm_db_index_physical_stats (DB_ID(), OBJECT_ID('SecondTable'), NULL, NULL, 'LIMITED') GO -- Shrink the Database DBCC SHRINKDATABASE (ShrinkIsBed); GO -- Name of the Database and Size SELECT name, (size*8) Size_KB FROM sys.database_files GO -- Check Fragmentations in the database SELECT avg_fragmentation_in_percent, fragment_count FROM sys.dm_db_index_physical_stats (DB_ID(), OBJECT_ID('SecondTable'), NULL, NULL, 'LIMITED') GO -- Reorganize Index on SecondTable ALTER INDEX IX_SecondTable_ID ON SecondTable REORGANIZE GO -- Name of the Database and Size SELECT name, (size*8) Size_KB FROM sys.database_files GO -- Check Fragmentations in the database SELECT avg_fragmentation_in_percent, fragment_count FROM sys.dm_db_index_physical_stats (DB_ID(), OBJECT_ID('SecondTable'), NULL, NULL, 'LIMITED') GO You can see that REORGANIZE does not increase the size of the database, nor does it remove the fragmentation. Again, I am in no way suggesting that REORGANIZE is the solution here; this is purely an observation using a demo. Read the blog post of Paul Randal. The following script will clean up the database: -- Clean up USE MASTER GO ALTER DATABASE ShrinkIsBed SET SINGLE_USER WITH ROLLBACK IMMEDIATE GO DROP DATABASE ShrinkIsBed GO There are a few valid cases for shrinking a database as well, but they are not covered in this blog post; we will cover that area some other time in the future. Additionally, one can rebuild indexes in tempdb as well, and we will talk about that in the future, too. Brent has written a good summary blog post as well. Are you shrinking your database? Well, when are you going to stop shrinking it? Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Index, SQL Performance, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology

    Read the article

  • Grandparent – Parent – Child Reports in SQL Developer

    - by thatjeffsmith
    You’ll never see one of these family stickers on my car, but I promise not to judge…much. Parent-child reports are pretty straightforward in Oracle SQL Developer: you have a ‘parent’ report, and then one or more ‘child’ reports which are based on a value in a selected row or value from the parent. If you need a quick tutorial to get up to speed on the subject, go ahead and take 5 minutes. Shortly before I left for vacation 2 weeks ago, I got an interesting question from one of my Twitter followers: @thatjeffsmith any luck with the #Oracle awr reports in #SQLDeveloper? This is easy with multi generation parent>child Done in #dbvisualizer — Ronald Rood (@Ik_zelf) August 26, 2012 Now that I’m back from vacation, I can tell Ronald and everyone else that the answer is ‘Yes!’ And here’s how. Time to Get Out Your XML Editor. Don’t have one? That’s OK, SQL Developer can edit XML files. While the Reporting interface doesn’t surface the ability to create multi-generational reports, the underlying code definitely supports it; we just need to hack away at the XML that powers a report. For this example I’m going to start simple: a query that brings back DEPARTMENTs, then EMPLOYEES, then JOBs. We can build the first two parts of the report using the report editor. A Parent-Child report in Oracle SQL Developer (Departments – Employees) Save the Report to XML. Once you’ve generated the XML file, open it with your favorite XML editor. For this example I’ll be using the built-in XML editor in SQL Developer. SQL Developer Reports in their raw XML glory! Right after the PDF element in the XML document, we can start a new ‘child’ report by inserting a DISPLAY element. I just copied and pasted the existing ‘display’ down so I wouldn’t have to worry about screwing anything up. Note that I also needed to change the ‘master’ name so it wouldn’t confuse SQL Developer when I try to import/open a report that has the same name. I also needed to update the binds tags to reflect the names from the child versus the original parent report. This is pretty easy to figure out on your own, actually – I mean, I’m no real developer and I got it pretty quick.
<?xml version="1.0" encoding="UTF-8" ?> <displays> <display id="92857fce-0139-1000-8006-7f0000015340" type="" style="Table" enable="true"> <name><![CDATA[Grandparent]]></name> <description><![CDATA[]]></description> <tooltip><![CDATA[]]></tooltip> <drillclass><![CDATA[null]]></drillclass> <CustomValues> <TYPE>horizontal</TYPE> </CustomValues> <query> <sql><![CDATA[select * from hr.departments]]></sql> </query> <pdf version="VERSION_1_7" compression="CONTENT"> <docproperty title="" author="" subject="" keywords="" /> <cell toppadding="2" bottompadding="2" leftpadding="2" rightpadding="2" horizontalalign="LEFT" verticalalign="TOP" wrap="true" /> <column> <heading font="Courier" size="10" style="NORMAL" color="-16777216" rowshading="-1" labeling="FIRST_PAGE" /> <footing font="Courier" size="10" style="NORMAL" color="-16777216" rowshading="-1" labeling="NONE" /> <blob blob="NONE" zip="false" /> </column> <table font="Courier" size="10" style="NORMAL" color="-16777216" userowshading="false" oddrowshading="-1" evenrowshading="-1" showborders="true" spacingbefore="12" spacingafter="12" horizontalalign="LEFT" /> <header enable="false" generatedate="false"> <data> null </data> </header> <footer enable="false" generatedate="false"> <data value="null" /> </footer> <security enable="false" useopenpassword="false" openpassword="" encryption="EXCLUDE_METADATA"> <permission enable="false" permissionpassword="" allowcopying="true" allowprinting="true" allowupdating="false" allowaccessdevices="true" /> </security> <pagesetup papersize="LETTER" orientation="1" measurement="in" margintop="1.0" marginbottom="1.0" marginleft="1.0" marginright="1.0" /> </pdf> <display id="null" type="" style="Table" enable="true"> <name><![CDATA[Parent]]></name> <description><![CDATA[]]></description> <tooltip><![CDATA[]]></tooltip> <drillclass><![CDATA[null]]></drillclass> <CustomValues> <TYPE>horizontal</TYPE> </CustomValues> <query> <sql><![CDATA[select * from hr.employees where department_id = EPARTMENT_ID]]></sql> <binds> <bind id="DEPARTMENT_ID"> <prompt><![CDATA[DEPARTMENT_ID]]></prompt> <tooltip><![CDATA[DEPARTMENT_ID]]></tooltip> <value><![CDATA[NULL_VALUE]]></value> </bind> </binds> </query> <pdf version="VERSION_1_7" compression="CONTENT"> <docproperty title="" author="" subject="" keywords="" /> <cell toppadding="2" bottompadding="2" leftpadding="2" rightpadding="2" horizontalalign="LEFT" verticalalign="TOP" wrap="true" /> <column> <heading font="Courier" size="10" style="NORMAL" color="-16777216" rowshading="-1" labeling="FIRST_PAGE" /> <footing font="Courier" size="10" style="NORMAL" color="-16777216" rowshading="-1" labeling="NONE" /> <blob blob="NONE" zip="false" /> </column> <table font="Courier" size="10" style="NORMAL" color="-16777216" userowshading="false" oddrowshading="-1" evenrowshading="-1" showborders="true" spacingbefore="12" spacingafter="12" horizontalalign="LEFT" /> <header enable="false" generatedate="false"> <data> null </data> </header> <footer enable="false" generatedate="false"> <data value="null" /> </footer> <security enable="false" useopenpassword="false" openpassword="" encryption="EXCLUDE_METADATA"> <permission enable="false" permissionpassword="" allowcopying="true" allowprinting="true" allowupdating="false" allowaccessdevices="true" /> </security> <pagesetup papersize="LETTER" orientation="1" measurement="in" margintop="1.0" marginbottom="1.0" marginleft="1.0" marginright="1.0" /> </pdf> <display id="null" type="" style="Table" enable="true"> <name><![CDATA[Child]]></name> 
<description><![CDATA[]]></description> <tooltip><![CDATA[]]></tooltip> <drillclass><![CDATA[null]]></drillclass> <CustomValues> <TYPE>horizontal</TYPE> </CustomValues> <query> <sql><![CDATA[select * from hr.jobs where job_id = :JOB_ID]]></sql> <binds> <bind id="JOB_ID"> <prompt><![CDATA[JOB_ID]]></prompt> <tooltip><![CDATA[JOB_ID]]></tooltip> <value><![CDATA[NULL_VALUE]]></value> </bind> </binds> </query> <pdf version="VERSION_1_7" compression="CONTENT"> <docproperty title="" author="" subject="" keywords="" /> <cell toppadding="2" bottompadding="2" leftpadding="2" rightpadding="2" horizontalalign="LEFT" verticalalign="TOP" wrap="true" /> <column> <heading font="Courier" size="10" style="NORMAL" color="-16777216" rowshading="-1" labeling="FIRST_PAGE" /> <footing font="Courier" size="10" style="NORMAL" color="-16777216" rowshading="-1" labeling="NONE" /> <blob blob="NONE" zip="false" /> </column> <table font="Courier" size="10" style="NORMAL" color="-16777216" userowshading="false" oddrowshading="-1" evenrowshading="-1" showborders="true" spacingbefore="12" spacingafter="12" horizontalalign="LEFT" /> <header enable="false" generatedate="false"> <data> null </data> </header> <footer enable="false" generatedate="false"> <data value="null" /> </footer> <security enable="false" useopenpassword="false" openpassword="" encryption="EXCLUDE_METADATA"> <permission enable="false" permissionpassword="" allowcopying="true" allowprinting="true" allowupdating="false" allowaccessdevices="true" /> </security> <pagesetup papersize="LETTER" orientation="1" measurement="in" margintop="1.0" marginbottom="1.0" marginleft="1.0" marginright="1.0" /> </pdf> </display> </display> </display> </displays> Save the file and ‘Open Report…’ You’ll see your new report name in the tree. You just need to double-click it to open it. Here’s what it looks like running A 3 generation family Now Let’s Build an AWR Text Report Ronald wanted to have the ability to query AWR snapshots and generate the AWR reports. That requires a few inputs, including a START and STOP snapshot ID. That basically tells AWR what time period to use for generating the report. And here’s where it gets tricky. We’ll need to use aliases for the SNAP_ID column. Since we’re using the same column name from 2 different queries, we need to use different bind variables. Fortunately for us, SQL Developer’s clever enough to use the column alias as the BIND. Here’s what I mean: Grandparent Query SELECT snap_id start1, begin_interval_time, end_interval_time FROM dba_hist_snapshot ORDER BY 1 asc Parent Query SELECT snap_id stop1, begin_interval_time, end_interval_time, :START1 carry FROM dba_hist_snapshot WHERE snap_id > :START1 ORDER BY 1 asc And here’s where it gets even trickier – you can’t reference a bind from outside the parent query. My grandchild report can’t reference a value from the grandparent report. So I just carry the selected value down to the parent. In my parent query SELECT you see the ‘:START1′ at the end? That’s making that value available to me when I use it in my grandchild query. To complicate things a bit further, I can’t have a column name with a ‘:’ in it, or SQL Developer will get confused when I try to reference the value of the variable with the ‘:’ – and ‘::Name’ doesn’t work. But that’s OK, just alias it. Grandchild Query Select Output From Table(Dbms_Workload_Repository.Awr_Report_Text(1298953802, 1,:CARRY, :STOP1)); Ok, and the last trick – I hard-coded my report to use my database’s DB_ID and INST_ID into the AWR package call. 
Now a smart person could figure out a way to make that work on any database, but I got lazy and and ran out of time. But this should be far enough for you to take it from here. Here’s what my report looks like now: Caution: don’t run this if you haven’t licensed Enterprise Edition with Diagnostic Pack. The Raw XML for this AWR Report <?xml version="1.0" encoding="UTF-8" ?> <displays> <display id="927ba96c-0139-1000-8001-7f0000015340" type="" style="Table" enable="true"> <name><![CDATA[AWR Start Stop Report Final]]></name> <description><![CDATA[]]></description> <tooltip><![CDATA[]]></tooltip> <drillclass><![CDATA[null]]></drillclass> <CustomValues> <TYPE>horizontal</TYPE> </CustomValues> <query> <sql><![CDATA[SELECT snap_id start1, begin_interval_time, end_interval_time FROM dba_hist_snapshot ORDER BY 1 asc]]></sql> </query> <display id="null" type="" style="Table" enable="true"> <name><![CDATA[Stop SNAP_ID]]></name> <description><![CDATA[]]></description> <tooltip><![CDATA[]]></tooltip> <drillclass><![CDATA[null]]></drillclass> <CustomValues> <TYPE>horizontal</TYPE> </CustomValues> <query> <sql><![CDATA[SELECT snap_id stop1, begin_interval_time, end_interval_time, :START1 carry FROM dba_hist_snapshot WHERE snap_id > :START1 ORDER BY 1 asc]]></sql> </query> <display id="null" type="" style="Table" enable="true"> <name><![CDATA[AWR Report]]></name> <description><![CDATA[]]></description> <tooltip><![CDATA[]]></tooltip> <drillclass><![CDATA[null]]></drillclass> <CustomValues> <TYPE>horizontal</TYPE> </CustomValues> <query> <sql><![CDATA[Select Output From Table(Dbms_Workload_Repository.Awr_Report_Text(1298953802, 1,:CARRY, :STOP1 ))]]></sql> </query> </display> </display> </display> </displays> Should We Build Support for Multiple Levels of Reports into the User Interface? Let us know! A comment here or a suggestion on our SQL Developer Exchange might help your case!

    Read the article

  • Skype crash immediately after launch

    - by K_naille
    When I launch Skype, it crashes immediately. Error: mathieu@mathieu-desktop:~$ skype `menu_proxy_module_load': skype: undefined symbol: menu_proxy_module_load (skype:10442): Gtk-WARNING **: Failed to load type module: (null) `menu_proxy_module_load': skype: undefined symbol: menu_proxy_module_load (skype:10442): Gtk-WARNING **: Failed to load type module: (null) `menu_proxy_module_load': skype: undefined symbol: menu_proxy_module_load (skype:10442): Gtk-WARNING **: Failed to load type module: (null) `menu_proxy_module_load': skype: undefined symbol: menu_proxy_module_load (skype:10442): Gtk-WARNING **: Failed to load type module: (null) Abandon (core dumped) mathieu@mathieu-desktop:~$ Can you help me? Thanks.

    Read the article

  • C#/.NET Little Wonders: Using ‘default’ to Get Default Values

    - by James Michael Hare
    Once again, in this series of posts I look at the parts of the .NET Framework that may seem trivial, but can help improve your code by making it easier to write and maintain. The index of all my past little wonders posts can be found here. Today’s little wonder is another of those small items that can help a lot in certain situations, especially when writing generics.  In particular, it is useful in determining what the default value of a given type would be. The Problem: what’s the default value for a generic type? There comes a time when you’re writing generic code where you may want to set an item of a given generic type.  Seems simple enough, right?  We’ll let’s see! Let’s say we want to query a Dictionary<TKey, TValue> for a given key and get back the value, but if the key doesn’t exist, we’d like a default value instead of throwing an exception. So, for example, we might have a the following dictionary defined: 1: var lookup = new Dictionary<int, string> 2: { 3: { 1, "Apple" }, 4: { 2, "Orange" }, 5: { 3, "Banana" }, 6: { 4, "Pear" }, 7: { 9, "Peach" } 8: }; And using those definitions, perhaps we want to do something like this: 1: // assume a default 2: string value = "Unknown"; 3:  4: // if the item exists in dictionary, get its value 5: if (lookup.ContainsKey(5)) 6: { 7: value = lookup[5]; 8: } But that’s inefficient, because then we’re double-hashing (once for ContainsKey() and once for the indexer).  Well, to avoid the double-hashing, we could use TryGetValue() instead: 1: string value; 2:  3: // if key exists, value will be put in value, if not default it 4: if (!lookup.TryGetValue(5, out value)) 5: { 6: value = "Unknown"; 7: } But the “flow” of using of TryGetValue() can get clunky at times when you just want to assign either the value or a default to a variable.  Essentially it’s 3-ish lines (depending on formatting) for 1 assignment.  So perhaps instead we’d like to write an extension method to support a cleaner interface that will return a default if the item isn’t found: 1: public static class DictionaryExtensions 2: { 3: public static TValue GetValueOrDefault<TKey, TValue>(this Dictionary<TKey, TValue> dict, 4: TKey key, TValue defaultIfNotFound) 5: { 6: TValue value; 7:  8: // value will be the result or the default for TValue 9: if (!dict.TryGetValue(key, out value)) 10: { 11: value = defaultIfNotFound; 12: } 13:  14: return value; 15: } 16: } 17:  So this creates an extension method on Dictionary<TKey, TValue> that will attempt to get a value using the given key, and will return the defaultIfNotFound as a stand-in if the key does not exist. This code compiles, fine, but what if we would like to go one step further and allow them to specify a default if not found, or accept the default for the type?  Obviously, we could overload the method to take the default or not, but that would be duplicated code and a bit heavy for just specifying a default.  It seems reasonable that we could set the not found value to be either the default for the type, or the specified value. So what if we defaulted the type to null? 1: public static TValue GetValueOrDefault<TKey, TValue>(this Dictionary<TKey, TValue> dict, 2: TKey key, TValue defaultIfNotFound = null) // ... No, this won’t work, because only reference types (and Nullable<T> wrapped types due to syntactical sugar) can be assigned to null.  So what about a calling parameterless constructor? 
1: public static TValue GetValueOrDefault<TKey, TValue>(this Dictionary<TKey, TValue> dict, 2: TKey key, TValue defaultIfNotFound = new TValue()) // ... No, this won’t work either for several reasons.  First, we’d expect a reference type to return null, not an “empty” instance.  Secondly, not all reference types have a parameter-less constructor (string for example does not).  And finally, a constructor cannot be determined at compile-time, while default values can. The Solution: default(T) – returns the default value for type T Many of us know the default keyword for its uses in switch statements as the default case.  But it has another use as well: it can return us the default value for a given type.  And since it generates the same defaults that default field initialization uses, it can be determined at compile-time as well. For example: 1: var x = default(int); // x is 0 2:  3: var y = default(bool); // y is false 4:  5: var z = default(string); // z is null 6:  7: var t = default(TimeSpan); // t is a TimeSpan with Ticks == 0 8:  9: var n = default(int?); // n is a Nullable<int> with HasValue == false Notice that for numeric types the default is 0, and for reference types the default is null.  In addition, for struct types, the value is a default-constructed struct – which simply means a struct where every field has their default value (hence 0 Ticks for TimeSpan, etc.). So using this, we could modify our code to this: 1: public static class DictionaryExtensions 2: { 3: public static TValue GetValueOrDefault<TKey, TValue>(this Dictionary<TKey, TValue> dict, 4: TKey key, TValue defaultIfNotFound = default(TValue)) 5: { 6: TValue value; 7:  8: // value will be the result or the default for TValue 9: if (!dict.TryGetValue(key, out value)) 10: { 11: value = defaultIfNotFound; 12: } 13:  14: return value; 15: } 16: } Now, if defaultIfNotFound is unspecified, it will use default(TValue) which will be the default value for whatever value type the dictionary holds.  So let’s consider how we could use this: 1: lookup.GetValueOrDefault(1); // returns “Apple” 2:  3: lookup.GetValueOrDefault(5); // returns null 4:  5: lookup.GetValueOrDefault(5, “Unknown”); // returns “Unknown” 6:  Again, do not confuse a parameter-less constructor with the default value for a type.  Remember that the default value for any type is the compile-time default for any instance of that type (0 for numeric, false for bool, null for reference types, and struct will all default fields for struct).  Consider the difference: 1: // both zero 2: int i1 = default(int); 3: int i2 = new int(); 4:  5: // both “zeroed” structs 6: var dt1 = default(DateTime); 7: var dt2 = new DateTime(); 8:  9: // sb1 is null, sb2 is an “empty” string builder 10: var sb1 = default(StringBuilder()); 11: var sb2 = new StringBuilder(); So in the above code, notice that the value types all resolve the same whether using default or parameter-less construction.  This is because a value type is never null (even Nullable<T> wrapped types are never “null” in a reference sense), they will just by default contain fields with all default values. However, for reference types, the default is null and not a constructed instance.  Also it should be noted that not all classes have parameter-less constructors (string, for instance, doesn’t have one – and doesn’t need one). Summary Whenever you need to get the default value for a type, especially a generic type, consider using the default keyword.  
This handy word will give you the default value for the given type at compile-time, which can then be used for initialization, optional parameters, etc. Technorati Tags: C#,CSharp,.NET,Little Wonders,default
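    A small additional sketch (not from the article itself): default(T) is also what lets a generic type reset a field of type T without caring whether T is a value type or a reference type.

    public class Slot<T>
    {
        public T Value { get; private set; }
        public bool HasValue { get; private set; }

        public void Set(T value) { Value = value; HasValue = true; }

        public void Clear()
        {
            Value = default(T);   // 0, false, null, or a zeroed struct, depending on T
            HasValue = false;
        }
    }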

    Read the article

  • Voice echo in UDP based voice transmission [closed]

    - by Meherzad
    I have coded a Java application for voice transmission between two IPs on a LAN. Here is the code. public static Boolean flag= true; public static Boolean recFlag=true; DatagramSocket UDPSocket=null; AudioFormat format = null; TargetDataLine microphone=null; byte[] buffer=null; DatagramPacket UDPPacket=null; public void startChat(String ipAddress){ try{ buffer = new byte[1000]; UDPSocket=new DatagramSocket(1987); Thread th=new Thread(new Listener()); th.start(); microphone = AudioSystem.getTargetDataLine(format); format= new AudioFormat(8000.0f, 16, 1, true, true); UDPPacket = new DatagramPacket(buffer, buffer.length, InetAddress.getByName(ipAddress), 1988); microphone.open(format); microphone.start(); while (flag) { microphone.read(buffer, 0, buffer.length); UDPSocket.send(UDPPacket); } } catch(Exception e){ System.out.println(" ssss "+e.getMessage()); } } public class Listener extends Thread{ byte[] buff=new byte[1000]; DatagramSocket UDPSocket1=null; DatagramPacket recPacket=null; DataLine.Info info = new DataLine.Info(SourceDataLine.class, format); SourceDataLine line=null; @Override public void run(){ try{ UDPSocket1=new DatagramSocket(1988); format= new AudioFormat(8000.0f, 16, 1, true, true); line = (SourceDataLine) AudioSystem.getLine(info); line.open(format); line.start(); } catch(Exception e){ System.out.println("list "+ e.getMessage()); } recPacket=new DatagramPacket(buff, buff.length); while(recFlag){ try{ UDPSocket1.receive(recPacket); buff = (byte[])recPacket.getData(); line.write(buff, 0, buff.length); } catch(Exception e){ System.out.println("errr "+e.getMessage()); } } line.drain(); line.close(); } } The main problem I am facing is that I am getting only an echo of my own voice. I am unable to hear the voice from the other end; I only hear my own. Please suggest a solution.

    Read the article

  • SQL Server script commands to check if object exists and drop it

    - by deadlydog
    Over the past couple years I’ve been keeping track of common SQL Server script commands that I use so I don’t have to constantly Google them.  Most of them are how to check if a SQL object exists before dropping it.  I thought others might find these useful to have them all in one place, so here you go: 1: --=============================== 2: -- Create a new table and add keys and constraints 3: --=============================== 4: IF NOT EXISTS (SELECT * FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_NAME = 'TableName' AND TABLE_SCHEMA='dbo') 5: BEGIN 6: CREATE TABLE [dbo].[TableName] 7: ( 8: [ColumnName1] INT NOT NULL, -- To have a field auto-increment add IDENTITY(1,1) 9: [ColumnName2] INT NULL, 10: [ColumnName3] VARCHAR(30) NOT NULL DEFAULT('') 11: ) 12: 13: -- Add the table's primary key 14: ALTER TABLE [dbo].[TableName] ADD CONSTRAINT [PK_TableName] PRIMARY KEY NONCLUSTERED 15: ( 16: [ColumnName1], 17: [ColumnName2] 18: ) 19: 20: -- Add a foreign key constraint 21: ALTER TABLE [dbo].[TableName] WITH CHECK ADD CONSTRAINT [FK_Name] FOREIGN KEY 22: ( 23: [ColumnName1], 24: [ColumnName2] 25: ) 26: REFERENCES [dbo].[Table2Name] 27: ( 28: [OtherColumnName1], 29: [OtherColumnName2] 30: ) 31: 32: -- Add indexes on columns that are often used for retrieval 33: CREATE INDEX IN_ColumnNames ON [dbo].[TableName] 34: ( 35: [ColumnName2], 36: [ColumnName3] 37: ) 38: 39: -- Add a check constraint 40: ALTER TABLE [dbo].[TableName] WITH CHECK ADD CONSTRAINT [CH_Name] CHECK (([ColumnName] >= 0.0000)) 41: END 42: 43: --=============================== 44: -- Add a new column to an existing table 45: --=============================== 46: IF NOT EXISTS (SELECT * FROM INFORMATION_SCHEMA.COLUMNS where TABLE_SCHEMA='dbo' 47: AND TABLE_NAME = 'TableName' AND COLUMN_NAME = 'ColumnName') 48: BEGIN 49: ALTER TABLE [dbo].[TableName] ADD [ColumnName] INT NOT NULL DEFAULT(0) 50: 51: -- Add a description extended property to the column to specify what its purpose is. 52: EXEC sys.sp_addextendedproperty @name=N'MS_Description', 53: @value = N'Add column comments here, describing what this column is for.' , 54: @level0type=N'SCHEMA',@level0name=N'dbo', @level1type=N'TABLE', 55: @level1name = N'TableName', @level2type=N'COLUMN', 56: @level2name = N'ColumnName' 57: END 58: 59: --=============================== 60: -- Drop a table 61: --=============================== 62: IF EXISTS (SELECT * FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_NAME = 'TableName' AND TABLE_SCHEMA='dbo') 63: BEGIN 64: DROP TABLE [dbo].[TableName] 65: END 66: 67: --=============================== 68: -- Drop a view 69: --=============================== 70: IF EXISTS (SELECT * FROM INFORMATION_SCHEMA.VIEWS WHERE TABLE_NAME = 'ViewName' AND TABLE_SCHEMA='dbo') 71: BEGIN 72: DROP VIEW [dbo].[ViewName] 73: END 74: 75: --=============================== 76: -- Drop a column 77: --=============================== 78: IF EXISTS (SELECT * FROM INFORMATION_SCHEMA.COLUMNS where TABLE_SCHEMA='dbo' 79: AND TABLE_NAME = 'TableName' AND COLUMN_NAME = 'ColumnName') 80: BEGIN 81: 82: -- If the column has an extended property, drop it first. 
83: IF EXISTS (SELECT * FROM sys.fn_listExtendedProperty(N'MS_Description', N'SCHEMA', N'dbo', N'Table', 84: N'TableName', N'COLUMN', N'ColumnName') 85: BEGIN 86: EXEC sys.sp_dropextendedproperty @name=N'MS_Description', 87: @level0type=N'SCHEMA',@level0name=N'dbo', @level1type=N'TABLE', 88: @level1name = N'TableName', @level2type=N'COLUMN', 89: @level2name = N'ColumnName' 90: END 91: 92: ALTER TABLE [dbo].[TableName] DROP COLUMN [ColumnName] 93: END 94: 95: --=============================== 96: -- Drop Primary key constraint 97: --=============================== 98: IF EXISTS (SELECT * FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS WHERE CONSTRAINT_TYPE='PRIMARY KEY' AND TABLE_SCHEMA='dbo' 99: AND TABLE_NAME = 'TableName' AND CONSTRAINT_NAME = 'PK_Name') 100: BEGIN 101: ALTER TABLE [dbo].[TableName] DROP CONSTRAINT [PK_Name] 102: END 103: 104: --=============================== 105: -- Drop Foreign key constraint 106: --=============================== 107: IF EXISTS (SELECT * FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS WHERE CONSTRAINT_TYPE='FOREIGN KEY' AND TABLE_SCHEMA='dbo' 108: AND TABLE_NAME = 'TableName' AND CONSTRAINT_NAME = 'FK_Name') 109: BEGIN 110: ALTER TABLE [dbo].[TableName] DROP CONSTRAINT [FK_Name] 111: END 112: 113: --=============================== 114: -- Drop Unique key constraint 115: --=============================== 116: IF EXISTS (SELECT * FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS WHERE CONSTRAINT_TYPE='UNIQUE' AND TABLE_SCHEMA='dbo' 117: AND TABLE_NAME = 'TableName' AND CONSTRAINT_NAME = 'UNI_Name') 118: BEGIN 119: ALTER TABLE [dbo].[TableNames] DROP CONSTRAINT [UNI_Name] 120: END 121: 122: --=============================== 123: -- Drop Check constraint 124: --=============================== 125: IF EXISTS (SELECT * FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS WHERE CONSTRAINT_TYPE='CHECK' AND TABLE_SCHEMA='dbo' 126: AND TABLE_NAME = 'TableName' AND CONSTRAINT_NAME = 'CH_Name') 127: BEGIN 128: ALTER TABLE [dbo].[TableName] DROP CONSTRAINT [CH_Name] 129: END 130: 131: --=============================== 132: -- Drop a column's Default value constraint 133: --=============================== 134: DECLARE @ConstraintName VARCHAR(100) 135: SET @ConstraintName = (SELECT TOP 1 s.name FROM sys.sysobjects s JOIN sys.syscolumns c ON s.parent_obj=c.id 136: WHERE s.xtype='d' AND c.cdefault=s.id 137: AND parent_obj = OBJECT_ID('TableName') AND c.name ='ColumnName') 138: 139: IF @ConstraintName IS NOT NULL 140: BEGIN 141: EXEC ('ALTER TABLE [dbo].[TableName] DROP CONSTRAINT ' + @ConstraintName) 142: END 143: 144: --=============================== 145: -- Example of how to drop dynamically named Unique constraint 146: --=============================== 147: DECLARE @ConstraintName VARCHAR(100) 148: SET @ConstraintName = (SELECT TOP 1 CONSTRAINT_NAME FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS 149: WHERE CONSTRAINT_TYPE='UNIQUE' AND TABLE_SCHEMA='dbo' 150: AND TABLE_NAME = 'TableName' AND CONSTRAINT_NAME LIKE 'FirstPartOfConstraintName%') 151: 152: IF EXISTS (SELECT * FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS WHERE CONSTRAINT_TYPE='UNIQUE' AND TABLE_SCHEMA='dbo' 153: AND TABLE_NAME = 'TableName' AND CONSTRAINT_NAME = @ConstraintName) 154: BEGIN 155: EXEC ('ALTER TABLE [dbo].[TableName] DROP CONSTRAINT ' + @ConstraintName) 156: END 157: 158: --=============================== 159: -- Check for and drop a temp table 160: --=============================== 161: IF OBJECT_ID('tempdb..#TableName') IS NOT NULL DROP TABLE #TableName 162: 163: --=============================== 164: -- Drop a stored 
procedure 165: --=============================== 166: IF EXISTS (SELECT * FROM INFORMATION_SCHEMA.ROUTINES WHERE ROUTINE_TYPE='PROCEDURE' AND ROUTINE_SCHEMA='dbo' AND 167: ROUTINE_NAME = 'StoredProcedureName') 168: BEGIN 169: DROP PROCEDURE [dbo].[StoredProcedureName] 170: END 171: 172: --=============================== 173: -- Drop a UDF 174: --=============================== 175: IF EXISTS (SELECT * FROM INFORMATION_SCHEMA.ROUTINES WHERE ROUTINE_TYPE='FUNCTION' AND ROUTINE_SCHEMA='dbo' AND 176: ROUTINE_NAME = 'UDFName') 177: BEGIN 178: DROP FUNCTION [dbo].[UDFName] 179: END 180: 181: --=============================== 182: -- Drop an Index 183: --=============================== 184: IF EXISTS (SELECT * FROM SYS.INDEXES WHERE name = 'IndexName') 185: BEGIN 186: DROP INDEX TableName.IndexName 187: END 188: 189: --=============================== 190: -- Drop a Schema 191: --=============================== 192: IF EXISTS (SELECT * FROM INFORMATION_SCHEMA.SCHEMATA WHERE SCHEMA_NAME = 'SchemaName') 193: BEGIN 194: EXEC('DROP SCHEMA SchemaName') 195: END And here’s the same code, just not in the little code view window so that you don’t have to scroll it.--=============================== -- Create a new table and add keys and constraints --=============================== IF NOT EXISTS (SELECT * FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_NAME = 'TableName' AND TABLE_SCHEMA='dbo') BEGIN CREATE TABLE [dbo].[TableName]  ( [ColumnName1] INT NOT NULL, -- To have a field auto-increment add IDENTITY(1,1) [ColumnName2] INT NULL, [ColumnName3] VARCHAR(30) NOT NULL DEFAULT('') ) -- Add the table's primary key ALTER TABLE [dbo].[TableName] ADD CONSTRAINT [PK_TableName] PRIMARY KEY NONCLUSTERED ( [ColumnName1],  [ColumnName2] ) -- Add a foreign key constraint ALTER TABLE [dbo].[TableName] WITH CHECK ADD CONSTRAINT [FK_Name] FOREIGN KEY ( [ColumnName1],  [ColumnName2] ) REFERENCES [dbo].[Table2Name]  ( [OtherColumnName1],  [OtherColumnName2] ) -- Add indexes on columns that are often used for retrieval CREATE INDEX IN_ColumnNames ON [dbo].[TableName] ( [ColumnName2], [ColumnName3] ) -- Add a check constraint ALTER TABLE [dbo].[TableName] WITH CHECK ADD CONSTRAINT [CH_Name] CHECK (([ColumnName] >= 0.0000)) END --=============================== -- Add a new column to an existing table --=============================== IF NOT EXISTS (SELECT * FROM INFORMATION_SCHEMA.COLUMNS where TABLE_SCHEMA='dbo' AND TABLE_NAME = 'TableName' AND COLUMN_NAME = 'ColumnName') BEGIN ALTER TABLE [dbo].[TableName] ADD [ColumnName] INT NOT NULL DEFAULT(0) -- Add a description extended property to the column to specify what its purpose is. EXEC sys.sp_addextendedproperty @name=N'MS_Description',  @value = N'Add column comments here, describing what this column is for.' 
,  @level0type=N'SCHEMA',@level0name=N'dbo', @level1type=N'TABLE', @level1name = N'TableName', @level2type=N'COLUMN', @level2name = N'ColumnName' END --=============================== -- Drop a table --=============================== IF EXISTS (SELECT * FROM INFORMATION_SCHEMA.TABLES WHERE TABLE_NAME = 'TableName' AND TABLE_SCHEMA='dbo') BEGIN DROP TABLE [dbo].[TableName] END --=============================== -- Drop a view --=============================== IF EXISTS (SELECT * FROM INFORMATION_SCHEMA.VIEWS WHERE TABLE_NAME = 'ViewName' AND TABLE_SCHEMA='dbo') BEGIN DROP VIEW [dbo].[ViewName] END --=============================== -- Drop a column --=============================== IF EXISTS (SELECT * FROM INFORMATION_SCHEMA.COLUMNS where TABLE_SCHEMA='dbo' AND TABLE_NAME = 'TableName' AND COLUMN_NAME = 'ColumnName') BEGIN -- If the column has an extended property, drop it first. IF EXISTS (SELECT * FROM sys.fn_listExtendedProperty(N'MS_Description', N'SCHEMA', N'dbo', N'Table', N'TableName', N'COLUMN', N'ColumnName') BEGIN EXEC sys.sp_dropextendedproperty @name=N'MS_Description',  @level0type=N'SCHEMA',@level0name=N'dbo', @level1type=N'TABLE', @level1name = N'TableName', @level2type=N'COLUMN', @level2name = N'ColumnName' END ALTER TABLE [dbo].[TableName] DROP COLUMN [ColumnName] END --=============================== -- Drop Primary key constraint --=============================== IF EXISTS (SELECT * FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS WHERE CONSTRAINT_TYPE='PRIMARY KEY' AND TABLE_SCHEMA='dbo' AND TABLE_NAME = 'TableName' AND CONSTRAINT_NAME = 'PK_Name') BEGIN ALTER TABLE [dbo].[TableName] DROP CONSTRAINT [PK_Name] END --=============================== -- Drop Foreign key constraint --=============================== IF EXISTS (SELECT * FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS WHERE CONSTRAINT_TYPE='FOREIGN KEY' AND TABLE_SCHEMA='dbo' AND TABLE_NAME = 'TableName' AND CONSTRAINT_NAME = 'FK_Name') BEGIN ALTER TABLE [dbo].[TableName] DROP CONSTRAINT [FK_Name] END --=============================== -- Drop Unique key constraint --=============================== IF EXISTS (SELECT * FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS WHERE CONSTRAINT_TYPE='UNIQUE' AND TABLE_SCHEMA='dbo' AND TABLE_NAME = 'TableName' AND CONSTRAINT_NAME = 'UNI_Name') BEGIN ALTER TABLE [dbo].[TableNames] DROP CONSTRAINT [UNI_Name] END --=============================== -- Drop Check constraint --=============================== IF EXISTS (SELECT * FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS WHERE CONSTRAINT_TYPE='CHECK' AND TABLE_SCHEMA='dbo' AND TABLE_NAME = 'TableName' AND CONSTRAINT_NAME = 'CH_Name') BEGIN ALTER TABLE [dbo].[TableName] DROP CONSTRAINT [CH_Name] END --=============================== -- Drop a column's Default value constraint --=============================== DECLARE @ConstraintName VARCHAR(100) SET @ConstraintName = (SELECT TOP 1 s.name FROM sys.sysobjects s JOIN sys.syscolumns c ON s.parent_obj=c.id WHERE s.xtype='d' AND c.cdefault=s.id  AND parent_obj = OBJECT_ID('TableName') AND c.name ='ColumnName') IF @ConstraintName IS NOT NULL BEGIN EXEC ('ALTER TABLE [dbo].[TableName] DROP CONSTRAINT ' + @ConstraintName) END --=============================== -- Example of how to drop dynamically named Unique constraint --=============================== DECLARE @ConstraintName VARCHAR(100) SET @ConstraintName = (SELECT TOP 1 CONSTRAINT_NAME FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS  WHERE CONSTRAINT_TYPE='UNIQUE' AND TABLE_SCHEMA='dbo' AND TABLE_NAME = 'TableName' AND CONSTRAINT_NAME LIKE 
'FirstPartOfConstraintName%') IF EXISTS (SELECT * FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS WHERE CONSTRAINT_TYPE='UNIQUE' AND TABLE_SCHEMA='dbo' AND TABLE_NAME = 'TableName' AND CONSTRAINT_NAME = @ConstraintName) BEGIN EXEC ('ALTER TABLE [dbo].[TableName] DROP CONSTRAINT ' + @ConstraintName) END --=============================== -- Check for and drop a temp table --=============================== IF OBJECT_ID('tempdb..#TableName') IS NOT NULL DROP TABLE #TableName --=============================== -- Drop a stored procedure --=============================== IF EXISTS (SELECT * FROM INFORMATION_SCHEMA.ROUTINES WHERE ROUTINE_TYPE='PROCEDURE' AND ROUTINE_SCHEMA='dbo' AND ROUTINE_NAME = 'StoredProcedureName') BEGIN DROP PROCEDURE [dbo].[StoredProcedureName] END --=============================== -- Drop a UDF --=============================== IF EXISTS (SELECT * FROM INFORMATION_SCHEMA.ROUTINES WHERE ROUTINE_TYPE='FUNCTION' AND ROUTINE_SCHEMA='dbo' AND  ROUTINE_NAME = 'UDFName') BEGIN DROP FUNCTION [dbo].[UDFName] END --=============================== -- Drop an Index --=============================== IF EXISTS (SELECT * FROM SYS.INDEXES WHERE name = 'IndexName') BEGIN DROP INDEX TableName.IndexName END --=============================== -- Drop a Schema --=============================== IF EXISTS (SELECT * FROM INFORMATION_SCHEMA.SCHEMATA WHERE SCHEMA_NAME = 'SchemaName') BEGIN EXEC('DROP SCHEMA SchemaName') END
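    The same INFORMATION_SCHEMA existence check can also be issued from application code before running a DROP. A minimal C# sketch using System.Data.SqlClient (the connection string, schema and table names are whatever applies in your environment):

    using System.Data.SqlClient;

    public static class SchemaChecks
    {
        public static bool TableExists(string connectionString, string schema, string table)
        {
            const string sql =
                "SELECT COUNT(*) FROM INFORMATION_SCHEMA.TABLES " +
                "WHERE TABLE_SCHEMA = @schema AND TABLE_NAME = @table";

            using (var connection = new SqlConnection(connectionString))
            using (var command = new SqlCommand(sql, connection))
            {
                command.Parameters.AddWithValue("@schema", schema);
                command.Parameters.AddWithValue("@table", table);
                connection.Open();
                return (int)command.ExecuteScalar() > 0;
            }
        }
    }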

    Read the article

  • BizTalk 2009 - SQL Server Job Configuration

    - by StuartBrierley
    Following the installation of Biztalk Server 2009 on my development laptop I used the BizTalk Server Best Practice Analyser which highlighted the fact that two of the SQL Server Agent jobs that BizTalk relies on were not running successfully.  Upon investigation it turned out that these jobs needed to be configured before they would run successfully. To configure these jobs open SQL Server Management Studio, expand SQL Server Agent > Jobs and double click on the appropriate job.  Select Steps and then edit the appropriate entries. Backup BizTalk Server (BizTalkMgmtDb) This job is comprised of three steps BackupFull, MarkAndBackupLog and ClearBackupHistory. BackupFull exec [dbo].[sp_BackupAllFull_Schedule] ‘d’ /* Frequency */,‘BTS’ /* Name */,‘<destination path>’ /* location of backup files */ The frequency here is set/left as daily The name is left as BTS You must provide a full destination path for the backup files to be stored. There are also two optional parameters: A flag that controls if the job forces a full backup if a partial backup fails A parameter to control the time of day to run the full backup; the default is midnight UTC time For example: exec [dbo].[sp_BackupAllFull_Schedule] ‘d’ /* Frequency */,‘BTS’ /* Name */,‘<destination path>’ /* location of backup files */ , 0, 22 MarkAndBackUpLog exec [dbo].[sp_MarkAll] ‘BTS’ /* Log mark name */,’<destination path>’  /*location of backup files */ You must provide a destination path for the log backups. Optionally you can also add an extra parameter that tells the procedure to use local time: exec [dbo].[sp_MarkAll] ‘BTS’ /* Log mark name */,’<destination path>’  /*location of backup files */ ,1 Clear Backup History exec [dbo].[sp_DeleteBackupHistory] @DaysToKeep=7 This will clear out the instances in the MarkLog table older than 7 days.    DTA Purge and Archive (BizTalkDTADb) This job is comprised of a single step. Archive and Purge exec dtasp_BackupAndPurgeTrackingDatabase 0, --@nLiveHours tinyint, 1, --@nLiveDays tinyint = 0, 30, --@nHardDeleteDays tinyint = 0, null, --@nvcFolder nvarchar(1024) = null, null, --@nvcValidatingServer sysname = null, 0 --@fForceBackup int = 0 Any completed instance that is older than the live days plus live hours will be deleted, as will any associated data. Any data older than the HardDeleteDays will be deleted - this means that those long running orchestration instances that would otherwise never be purged will at some point have their data cleared down while allowing the instance to continue, thus preventing the DTA databse from growing indefinitely.  This should always be greater than the soft purge window. The NVC folder is the path for the backup files, if this is null the job will not run failing with the error : DTA Purge and Archive (BizTalkDTADb) Job failed SQL Server Management Studio, job activity monitor, view history The @nvcFolder parameter cannot be null. Archive and Purge step How long you choose to keep instances in the Tracking Database is really up to you. For development I have set this up as: exec dtasp_BackupAndPurgeTrackingDatabase 0, 1, 30, ’<destination path>’, null, 0 On a live server you may want to adjust these figures: exec dtasp_BackupAndPurgeTrackingDatabase 0, 15, 20, ’<destination path>’, null, 0

    Read the article

  • I see no LOBs!

    - by Paul White
    Is it possible to see LOB (large object) logical reads from STATISTICS IO output on a table with no LOB columns? I was asked this question today by someone who had spent a good fraction of their afternoon trying to work out why this was occurring – even going so far as to re-run DBCC CHECKDB to see if any corruption had taken place.  The table in question wasn’t particularly pretty – it had grown somewhat organically over time, with new columns being added every so often as the need arose.  Nevertheless, it remained a simple structure with no LOB columns – no TEXT or IMAGE, no XML, no MAX types – nothing aside from ordinary INT, MONEY, VARCHAR, and DATETIME types.  To add to the air of mystery, not every query that ran against the table would report LOB logical reads – just sometimes – but when it did, the query often took much longer to execute. Ok, enough of the pre-amble.  I can’t reproduce the exact structure here, but the following script creates a table that will serve to demonstrate the effect: IF OBJECT_ID(N'dbo.Test', N'U') IS NOT NULL DROP TABLE dbo.Test GO CREATE TABLE dbo.Test ( row_id NUMERIC IDENTITY NOT NULL,   col01 NVARCHAR(450) NOT NULL, col02 NVARCHAR(450) NOT NULL, col03 NVARCHAR(450) NOT NULL, col04 NVARCHAR(450) NOT NULL, col05 NVARCHAR(450) NOT NULL, col06 NVARCHAR(450) NOT NULL, col07 NVARCHAR(450) NOT NULL, col08 NVARCHAR(450) NOT NULL, col09 NVARCHAR(450) NOT NULL, col10 NVARCHAR(450) NOT NULL, CONSTRAINT [PK dbo.Test row_id] PRIMARY KEY CLUSTERED (row_id) ) ; The next script loads the ten variable-length character columns with one-character strings in the first row, two-character strings in the second row, and so on down to the 450th row: WITH Numbers AS ( -- Generates numbers 1 - 450 inclusive SELECT TOP (450) n = ROW_NUMBER() OVER (ORDER BY (SELECT 0)) FROM master.sys.columns C1, master.sys.columns C2, master.sys.columns C3 ORDER BY n ASC ) INSERT dbo.Test WITH (TABLOCKX) SELECT REPLICATE(N'A', N.n), REPLICATE(N'B', N.n), REPLICATE(N'C', N.n), REPLICATE(N'D', N.n), REPLICATE(N'E', N.n), REPLICATE(N'F', N.n), REPLICATE(N'G', N.n), REPLICATE(N'H', N.n), REPLICATE(N'I', N.n), REPLICATE(N'J', N.n) FROM Numbers AS N ORDER BY N.n ASC ; Once those two scripts have run, the table contains 450 rows and 10 columns of data like this: Most of the time, when we query data from this table, we don’t see any LOB logical reads, for example: -- Find the maximum length of the data in -- column 5 for a range of rows SELECT result = MAX(DATALENGTH(T.col05)) FROM dbo.Test AS T WHERE row_id BETWEEN 50 AND 100 ; But with a different query… -- Read all the data in column 1 SELECT result = MAX(DATALENGTH(T.col01)) FROM dbo.Test AS T ; …suddenly we have 49 LOB logical reads, as well as the ‘normal’ logical reads we would expect. The Explanation If we had tried to create this table in SQL Server 2000, we would have received a warning message to say that future INSERT or UPDATE operations on the table might fail if the resulting row exceeded the in-row storage limit of 8060 bytes.  If we needed to store more data than would fit in an 8060 byte row (including internal overhead) we had to use a LOB column – TEXT, NTEXT, or IMAGE.  These special data types store the large data values in a separate structure, with just a small pointer left in the original row. Row Overflow SQL Server 2005 introduced a feature called row overflow, which allows one or more variable-length columns in a row to move to off-row storage if the data in a particular row would otherwise exceed 8060 bytes.  
You no longer receive a warning when creating (or altering) a table that might need more than 8060 bytes of in-row storage; if SQL Server finds that it can no longer fit a variable-length column in a particular row, it will silently move one or more of these columns off the row into a separate allocation unit. Only variable-length columns can be moved in this way (for example the (N)VARCHAR, VARBINARY, and SQL_VARIANT types).  Fixed-length columns (like INTEGER and DATETIME for example) never move into ‘row overflow’ storage.  The decision to move a column off-row is done on a row-by-row basis – so data in a particular column might be stored in-row for some table records, and off-row for others. In general, if SQL Server finds that it needs to move a column into row-overflow storage, it moves the largest variable-length column record for that row.  Note that in the case of an UPDATE statement that results in the 8060 byte limit being exceeded, it might not be the column that grew that is moved! Sneaky LOBs Anyway, that’s all very interesting but I don’t want to get too carried away with the intricacies of row-overflow storage internals.  The point is that it is now possible to define a table with non-LOB columns that will silently exceed the old row-size limit and result in ordinary variable-length columns being moved to off-row storage.  Adding new columns to a table, expanding an existing column definition, or simply storing more data in a column than you used to – all these things can result in one or more variable-length columns being moved off the row. Note that row-overflow storage is logically quite different from old-style LOB and new-style MAX data type storage – individual variable-length columns are still limited to 8000 bytes each – you can just have more of them now.  Having said that, the physical mechanisms involved are very similar to full LOB storage – a column moved to row-overflow leaves a 24-byte pointer record in the row, and the ‘separate storage’ I have been talking about is structured very similarly to both old-style LOBs and new-style MAX types.  The disadvantages are also the same: when SQL Server needs a row-overflow column value it needs to follow the in-row pointer a navigate another chain of pages, just like retrieving a traditional LOB. And Finally… In the example script presented above, the rows with row_id values from 402 to 450 inclusive all exceed the total in-row storage limit of 8060 bytes.  A SELECT that references a column in one of those rows that has moved to off-row storage will incur one or more lob logical reads as the storage engine locates the data.  The results on your system might vary slightly depending on your settings, of course; but in my tests only column 1 in rows 402-450 moved off-row.  You might like to play around with the script – updating columns, changing data type lengths, and so on – to see the effect on lob logical reads and which columns get moved when.  You might even see row-overflow columns moving back in-row if they are updated to be smaller (hint: reduce the size of a column entry by at least 1000 bytes if you hope to see this). Be aware that SQL Server will not warn you when it moves ‘ordinary’ variable-length columns into overflow storage, and it can have dramatic effects on performance.  It makes more sense than ever to choose column data types sensibly.  
If you make every column a VARCHAR(8000) or NVARCHAR(4000), and someone stores data that results in a row needing more than 8060 bytes, SQL Server might turn some of your column data into pseudo-LOBs – all without saying a word. Finally, some people make a distinction between ordinary LOBs (those that can hold up to 2GB of data) and the LOB-like structures created by row-overflow (where columns are still limited to 8000 bytes) by referring to row-overflow LOBs as SLOBs.  I find that quite appealing, but the ‘S’ stands for ‘small’, which makes expanding the whole acronym a little daft-sounding…small large objects anyone? © Paul White 2011 email: [email protected] twitter: @SQL_Kiwi

    Read the article

  • Why don't languages include implication as a logical operator?

    - by Maciej Piechotka
    It might be a strange question, but why is there no implication as a logical operator in many languages (Java, C, C++, Python, Haskell - although since the last one has user-defined operators, it is trivial to add)? I find logical implication much clearer to write (particularly in asserts or assert-like expressions) than negation with or: encrypt(buf, key, mode, iv = null) { assert (mode != ECB --> iv != null); assert (mode == ECB || iv != null); assert (implies(mode != ECB, iv != null)); // User-defined function }
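    Since most of these languages at least allow user-defined helpers, the readability can be recovered without a dedicated operator by wrapping !p || q once. A minimal C# sketch (the Mode/iv names in the commented usage are placeholders echoing the pseudo-code above, not a real API):

    using System.Diagnostics;

    static class LogicalExtensions
    {
        // p --> q is, by definition, !p || q
        public static bool Implies(this bool p, bool q)
        {
            return !p || q;
        }
    }

    // Usage, mirroring the assert in the question (mode and iv are placeholders):
    // Debug.Assert((mode != Mode.ECB).Implies(iv != null));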

    Read the article

  • How to configure a Custom Datacontract Serializer or XMLSerializer

    - by user364445
    Im haveing some xml that have this structure <Person Id="*****" Name="*****"> <AccessControlEntries> <AccessControlEntry Id="*****" Name="****"/> </AccessControlEntries> <AccessControls /> <IdentityGroups> <IdentityGroup Id="****" Name="*****" /> </IdentityGroups></Person> and i also have this entities [DataContract(IsReference = true)] public abstract class EntityBase { protected bool serializing; [DataMember(Order = 1)] [XmlAttribute()] public string Id { get; set; } [DataMember(Order = 2)] [XmlAttribute()] public string Name { get; set; } [OnDeserializing()] public void OnDeserializing(StreamingContext context) { this.Initialize(); } [OnSerializing()] public void OnSerializing(StreamingContext context) { this.serializing = true; } [OnSerialized()] public void OnSerialized(StreamingContext context) { this.serializing = false; } public abstract void Initialize(); public string ToXml() { var settings = new System.Xml.XmlWriterSettings(); settings.Indent = true; settings.OmitXmlDeclaration = true; var sb = new System.Text.StringBuilder(); using (var writer = System.Xml.XmlWriter.Create(sb, settings)) { var serializer = new XmlSerializer(this.GetType()); serializer.Serialize(writer, this); } return sb.ToString(); } } [DataContract()] public abstract class Identity : EntityBase { private EntitySet<AccessControlEntry> accessControlEntries; private EntitySet<IdentityGroup> identityGroups; public Identity() { Initialize(); } [DataMember(Order = 3, EmitDefaultValue = false)] [Association(Name = "AccessControlEntries")] public EntitySet<AccessControlEntry> AccessControlEntries { get { if ((this.serializing && (this.accessControlEntries==null || this.accessControlEntries.HasLoadedOrAssignedValues == false))) { return null; } return accessControlEntries; } set { accessControlEntries.Assign(value); } } [DataMember(Order = 4, EmitDefaultValue = false)] [Association(Name = "IdentityGroups")] public EntitySet<IdentityGroup> IdentityGroups { get { if ((this.serializing && (this.identityGroups == null || this.identityGroups.HasLoadedOrAssignedValues == false))) { return null; } return identityGroups; } set { identityGroups.Assign(value); } } private void attach_accessControlEntry(AccessControlEntry entity) { entity.Identities.Add(this); } private void dettach_accessControlEntry(AccessControlEntry entity) { entity.Identities.Remove(this); } private void attach_IdentityGroup(IdentityGroup entity) { entity.MemberIdentites.Add(this); } private void dettach_IdentityGroup(IdentityGroup entity) { entity.MemberIdentites.Add(this); } public override void Initialize() { this.accessControlEntries = new EntitySet<AccessControlEntry>( new Action<AccessControlEntry>(this.attach_accessControlEntry), new Action<AccessControlEntry>(this.dettach_accessControlEntry)); this.identityGroups = new EntitySet<IdentityGroup>( new Action<IdentityGroup>(this.attach_IdentityGroup), new Action<IdentityGroup>(this.dettach_IdentityGroup)); } } [XmlType(TypeName = "AccessControlEntry")] public class AccessControlEntry : EntityBase, INotifyPropertyChanged { private EntitySet<Service> services; private EntitySet<Identity> identities; private EntitySet<Permission> permissions; public AccessControlEntry() { services = new EntitySet<Service>(new Action<Service>(attach_Service), new Action<Service>(dettach_Service)); identities = new EntitySet<Identity>(new Action<Identity>(attach_Identity), new Action<Identity>(dettach_Identity)); permissions = new EntitySet<Permission>(new Action<Permission>(attach_Permission), new 
Action<Permission>(dettach_Permission)); } [DataMember(Order = 3, EmitDefaultValue = false)] public EntitySet<Permission> Permissions { get { if ((this.serializing && (this.permissions.HasLoadedOrAssignedValues == false))) { return null; } return permissions; } set { permissions.Assign(value); } } [DataMember(Order = 4, EmitDefaultValue = false)] public EntitySet<Identity> Identities { get { if ((this.serializing && (this.identities.HasLoadedOrAssignedValues == false))) { return null; } return identities; } set { identities.Assign(identities); } } [DataMember(Order = 5, EmitDefaultValue = false)] public EntitySet<Service> Services { get { if ((this.serializing && (this.services.HasLoadedOrAssignedValues == false))) { return null; } return services; } set { services.Assign(value); } } private void attach_Permission(Permission entity) { entity.AccessControlEntires.Add(this); } private void dettach_Permission(Permission entity) { entity.AccessControlEntires.Remove(this); } private void attach_Identity(Identity entity) { entity.AccessControlEntries.Add(this); } private void dettach_Identity(Identity entity) { entity.AccessControlEntries.Remove(this); } private void attach_Service(Service entity) { entity.AccessControlEntries.Add(this); } private void dettach_Service(Service entity) { entity.AccessControlEntries.Remove(this); } #region INotifyPropertyChanged Members public event PropertyChangedEventHandler PropertyChanged; protected void OnPropertyChanged(string name) { PropertyChangedEventHandler handler = PropertyChanged; if (handler != null) handler(this, new PropertyChangedEventArgs(name)); } #endregion public override void Initialize() { throw new NotImplementedException(); } } [DataContract()] [XmlType(TypeName = "Person")] public class Person : Identity { private EntityRef<Login> login; [DataMember(Order = 3)] [XmlAttribute()] public string Nombre { get; set; } [DataMember(Order = 4)] [XmlAttribute()] public string Apellidos { get; set; } [DataMember(Order = 5)] public Login Login { get { return login.Entity; } set { var previousValue = this.login.Entity; if (((previousValue != value) || (this.login.HasLoadedOrAssignedValue == false))) { if ((previousValue != null)) { this.login.Entity = null; previousValue.Person = null; } this.login.Entity = value; if ((value != null)) value.Person = this; } } } public override void Initialize() { base.Initialize(); } } [DataContract()] [XmlType(TypeName = "Login")] public class Login : EntityBase { private EntityRef<Person> person; [DataMember(Order = 3)] public string UserID { get; set; } [DataMember(Order = 4)] public string Contrasena { get; set; } [DataMember(Order = 5)] public Domain Dominio { get; set; } public Person Person { get { return person.Entity; } set { var previousValue = this.person.Entity; if (((previousValue != value) || (this.person.HasLoadedOrAssignedValue == false))) { if ((previousValue != null)) { this.person.Entity = null; previousValue.Login = null; } this.person.Entity = value; if ((value != null)) value.Login = this; } } } public override void Initialize() { throw new NotImplementedException(); } } [DataContract()] [XmlType(TypeName = "IdentityGroup")] public class IdentityGroup : Identity { private EntitySet<Identity> memberIdentities; public IdentityGroup() { Initialize(); } public override void Initialize() { this.memberIdentities = new EntitySet<Identity>(new Action<Identity>(this.attach_Identity), new Action<Identity>(this.dettach_Identity)); } [DataMember(Order = 3, EmitDefaultValue = false)] [Association(Name = 
"MemberIdentities")] public EntitySet<Identity> MemberIdentites { get { if ((this.serializing && (this.memberIdentities.HasLoadedOrAssignedValues == false))) { return null; } return memberIdentities; } set { memberIdentities.Assign(value); } } private void attach_Identity(Identity entity) { entity.IdentityGroups.Add(this); } private void dettach_Identity(Identity entity) { entity.IdentityGroups.Remove(this); } } [DataContract()] [XmlType(TypeName = "Group")] public class Group : Identity { public override void Initialize() { throw new NotImplementedException(); } } but the ToXml() response something like this <Person xmlns:xsi="************" xmlns:xsd="******" ID="******" Name="*****"/><AccessControlEntries/></Person> but what i want is something like this <Person Id="****" Name="***" Nombre="****"> <AccessControlEntries/> <IdentityGroups/> </Person>

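    For what it's worth, XmlSerializer only emits Id/Name as attributes and the collections as wrapped child elements when the members are mapped explicitly, and the xmlns:xsi/xmlns:xsd noise can be suppressed by registering an empty namespace. A minimal, stand-alone C# sketch of the target shape - the DTO classes below are illustrative placeholders, not the poster's entity model:

    using System.Collections.Generic;
    using System.Text;
    using System.Xml;
    using System.Xml.Serialization;

    [XmlType("Person")]
    public class PersonDto
    {
        [XmlAttribute] public string Id { get; set; }
        [XmlAttribute] public string Name { get; set; }

        [XmlArray("AccessControlEntries"), XmlArrayItem("AccessControlEntry")]
        public List<EntryDto> AccessControlEntries { get; set; } = new List<EntryDto>();

        [XmlArray("IdentityGroups"), XmlArrayItem("IdentityGroup")]
        public List<EntryDto> IdentityGroups { get; set; } = new List<EntryDto>();
    }

    public class EntryDto
    {
        [XmlAttribute] public string Id { get; set; }
        [XmlAttribute] public string Name { get; set; }
    }

    public static class PersonXml
    {
        public static string ToXml(PersonDto person)
        {
            var ns = new XmlSerializerNamespaces();
            ns.Add("", "");   // suppress the xmlns:xsi / xmlns:xsd declarations
            var settings = new XmlWriterSettings { Indent = true, OmitXmlDeclaration = true };
            var sb = new StringBuilder();
            using (var writer = XmlWriter.Create(sb, settings))
            {
                new XmlSerializer(typeof(PersonDto)).Serialize(writer, person, ns);
            }
            return sb.ToString();
        }
    }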
    Read the article

  • Non-recursive way to position a genogram in 2D points for the x axis. Descendants are below

    - by Nassign
    I currently was tasked to make a genogram for a family consisting of siblings, parents with aunts and uncles with grandparents and greatgrandparents for only blood relatives. My current algorithm is using recursion. but I am wondering how to do it in non recursive way to make it more efficient. it is programmed in c# using graphics to draw on a bitmap. Current algorithm for calculating x position, the y position is by getting the generation number. public void StartCalculatePosition() { // Search the start node (The only node with targetFlg set to true) Person start = null; foreach (Person p in PersonDic.Values) { if (start == null) start = p; if (p.Targetflg) { start = p; break; } } CalcPositionRecurse(start); // Normalize the position (shift all values to positive value) // Get the minimum value (must be negative) // Then offset the position of all marriage and person with that to make it start from zero float minPosition = float.MaxValue; foreach (Person p in PersonDic.Values) { if (minPosition > p.Position) { minPosition = p.Position; } } if (minPosition < 0) { foreach (Person p in PersonDic.Values) { p.Position -= minPosition; } foreach (Marriage m in MarriageList) { m.ParentsPosition -= minPosition; m.ChildrenPosition -= minPosition; } } } /// <summary> /// Calculate position of genogram using recursion /// </summary> /// <param name="psn"></param> private void CalcPositionRecurse(Person psn) { // End the recursion if (psn.BirthMarriage == null || psn.BirthMarriage.Parents.Count == 0) { psn.Position = 0.0f; if (psn.BirthMarriage != null) { psn.BirthMarriage.ParentsPosition = 0.0f; psn.BirthMarriage.ChildrenPosition = 0.0f; } CalculateSiblingPosition(psn); return; } // Left recurse if (psn.Father != null) { CalcPositionRecurse(psn.Father); } // Right recurse if (psn.Mother != null) { CalcPositionRecurse(psn.Mother); } // Merge Position if (psn.Father != null && psn.Mother != null) { AdjustConflict(psn.Father, psn.Mother); // Position person in center of parent psn.Position = (psn.Father.Position + psn.Mother.Position) / 2; psn.BirthMarriage.ParentsPosition = psn.Position; psn.BirthMarriage.ChildrenPosition = psn.Position; } else { // Single mom or single dad if (psn.Father != null) { psn.Position = psn.Father.Position; psn.BirthMarriage.ParentsPosition = psn.Position; psn.BirthMarriage.ChildrenPosition = psn.Position; } else if (psn.Mother != null) { psn.Position = psn.Mother.Position; psn.BirthMarriage.ParentsPosition = psn.Position; psn.BirthMarriage.ChildrenPosition = psn.Position; } else { // Should not happen, checking in start of function } } // Arrange the siblings base on my position (left younger, right older) CalculateSiblingPosition(psn); } private float GetRightBoundaryAncestor(Person psn) { float rPos = psn.Position; // Get the rightmost position among siblings foreach (Person sibling in psn.Siblings) { if (sibling.Position > rPos) { rPos = sibling.Position; } } if (psn.Father != null) { float rFatherPos = GetRightBoundaryAncestor(psn.Father); if (rFatherPos > rPos) { rPos = rFatherPos; } } if (psn.Mother != null) { float rMotherPos = GetRightBoundaryAncestor(psn.Mother); if (rMotherPos > rPos) { rPos = rMotherPos; } } return rPos; } private float GetLeftBoundaryAncestor(Person psn) { float rPos = psn.Position; // Get the rightmost position among siblings foreach (Person sibling in psn.Siblings) { if (sibling.Position < rPos) { rPos = sibling.Position; } } if (psn.Father != null) { float rFatherPos = GetLeftBoundaryAncestor(psn.Father); if (rFatherPos < rPos) { rPos = 
rFatherPos; } } if (psn.Mother != null) { float rMotherPos = GetLeftBoundaryAncestor(psn.Mother); if (rMotherPos < rPos) { rPos = rMotherPos; } } return rPos; } /// <summary> /// Check if two parent group has conflict and compensate on the conflict /// </summary> /// <param name="leftGroup"></param> /// <param name="rightGroup"></param> public void AdjustConflict(Person leftGroup, Person rightGroup) { float leftMax = GetRightBoundaryAncestor(leftGroup); leftMax += 0.5f; float rightMin = GetLeftBoundaryAncestor(rightGroup); rightMin -= 0.5f; float diff = leftMax - rightMin; if (diff > 0.0f) { float moveHalf = Math.Abs(diff) / 2; RecurseMoveAncestor(leftGroup, 0 - moveHalf); RecurseMoveAncestor(rightGroup, moveHalf); } } /// <summary> /// Recursively move a person and all his/her ancestor /// </summary> /// <param name="psn"></param> /// <param name="moveUnit"></param> public void RecurseMoveAncestor(Person psn, float moveUnit) { psn.Position += moveUnit; foreach (Person siblings in psn.Siblings) { if (siblings.Id != psn.Id) { siblings.Position += moveUnit; } } if (psn.BirthMarriage != null) { psn.BirthMarriage.ChildrenPosition += moveUnit; psn.BirthMarriage.ParentsPosition += moveUnit; } if (psn.Father != null) { RecurseMoveAncestor(psn.Father, moveUnit); } if (psn.Mother != null) { RecurseMoveAncestor(psn.Mother, moveUnit); } } /// <summary> /// Calculate the position of the siblings /// </summary> /// <param name="psn"></param> /// <param name="anchor"></param> public void CalculateSiblingPosition(Person psn) { if (psn.Siblings.Count == 0) { return; } List<Person> sibling = psn.Siblings; int argidx; for (argidx = 0; argidx < sibling.Count; argidx++) { if (sibling[argidx].Id == psn.Id) { break; } } // Compute position for each brother that is younger that person int idx; for (idx = argidx - 1; idx >= 0; idx--) { sibling[idx].Position = sibling[idx + 1].Position - 1; } for (idx = argidx + 1; idx < sibling.Count; idx++) { sibling[idx].Position = sibling[idx - 1].Position + 1; } }
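    A common way to take the recursion out of this is the explicit-stack (two-stack, post-order) rewrite: collect the ancestor tree iteratively, then process each person only after both parents have been handled. A rough C# sketch against the question's Person type (requires System.Collections.Generic for Stack<T>); the positioning math itself would stay exactly as in CalcPositionRecurse:

    private void CalcPositionIterative(Person start)
    {
        var toVisit = new Stack<Person>();
        var postOrder = new Stack<Person>();
        toVisit.Push(start);

        // First pass: visit every reachable ancestor once.
        while (toVisit.Count > 0)
        {
            var current = toVisit.Pop();
            postOrder.Push(current);
            if (current.Father != null) toVisit.Push(current.Father);
            if (current.Mother != null) toVisit.Push(current.Mother);
        }

        // Second pass: postOrder pops each person's parents before the person itself,
        // so the body of CalcPositionRecurse can run here unchanged, minus the
        // recursive calls.
        while (postOrder.Count > 0)
        {
            var person = postOrder.Pop();
            // ... same per-person positioning logic as in CalcPositionRecurse ...
        }
    }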

    Read the article

  • OpenCL: Strange buffer or image behaviour with NVidia but not AMD

    - by Alex R.
    I have a big problem (on Linux): I create a buffer with defined data, then an OpenCL kernel takes this data and puts it into an image2d_t. When working on an AMD C50 (Fusion CPU/GPU) the program works as desired, but on my GeForce 9500 GT the given kernel computes the correct result very rarely. Sometimes the result is correct, but very often it is incorrect. Sometimes it depends on very strange changes like removing unused variable declarations or adding a newline. I realized that disabling the optimization will increase the probability to fail. I have the most actual display driver in both systems. Here is my reduced code: #include <CL/cl.h> #include <string> #include <iostream> #include <sstream> #include <cmath> void checkOpenCLErr(cl_int err, std::string name){ const char* errorString[] = { "CL_SUCCESS", "CL_DEVICE_NOT_FOUND", "CL_DEVICE_NOT_AVAILABLE", "CL_COMPILER_NOT_AVAILABLE", "CL_MEM_OBJECT_ALLOCATION_FAILURE", "CL_OUT_OF_RESOURCES", "CL_OUT_OF_HOST_MEMORY", "CL_PROFILING_INFO_NOT_AVAILABLE", "CL_MEM_COPY_OVERLAP", "CL_IMAGE_FORMAT_MISMATCH", "CL_IMAGE_FORMAT_NOT_SUPPORTED", "CL_BUILD_PROGRAM_FAILURE", "CL_MAP_FAILURE", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "", "CL_INVALID_VALUE", "CL_INVALID_DEVICE_TYPE", "CL_INVALID_PLATFORM", "CL_INVALID_DEVICE", "CL_INVALID_CONTEXT", "CL_INVALID_QUEUE_PROPERTIES", "CL_INVALID_COMMAND_QUEUE", "CL_INVALID_HOST_PTR", "CL_INVALID_MEM_OBJECT", "CL_INVALID_IMAGE_FORMAT_DESCRIPTOR", "CL_INVALID_IMAGE_SIZE", "CL_INVALID_SAMPLER", "CL_INVALID_BINARY", "CL_INVALID_BUILD_OPTIONS", "CL_INVALID_PROGRAM", "CL_INVALID_PROGRAM_EXECUTABLE", "CL_INVALID_KERNEL_NAME", "CL_INVALID_KERNEL_DEFINITION", "CL_INVALID_KERNEL", "CL_INVALID_ARG_INDEX", "CL_INVALID_ARG_VALUE", "CL_INVALID_ARG_SIZE", "CL_INVALID_KERNEL_ARGS", "CL_INVALID_WORK_DIMENSION", "CL_INVALID_WORK_GROUP_SIZE", "CL_INVALID_WORK_ITEM_SIZE", "CL_INVALID_GLOBAL_OFFSET", "CL_INVALID_EVENT_WAIT_LIST", "CL_INVALID_EVENT", "CL_INVALID_OPERATION", "CL_INVALID_GL_OBJECT", "CL_INVALID_BUFFER_SIZE", "CL_INVALID_MIP_LEVEL", "CL_INVALID_GLOBAL_WORK_SIZE", }; if (err != CL_SUCCESS) { std::stringstream str; str << errorString[-err] << " (" << err << ")"; throw std::string(name)+(str.str()); } } int main(){ try{ cl_context m_context; cl_platform_id* m_platforms; unsigned int m_numPlatforms; cl_command_queue m_queue; cl_device_id m_device; cl_int error = 0; // Used to handle error codes clGetPlatformIDs(0,NULL,&m_numPlatforms); m_platforms = new cl_platform_id[m_numPlatforms]; error = clGetPlatformIDs(m_numPlatforms,m_platforms,&m_numPlatforms); checkOpenCLErr(error, "getPlatformIDs"); // Device error = clGetDeviceIDs(m_platforms[0], CL_DEVICE_TYPE_GPU, 1, &m_device, NULL); checkOpenCLErr(error, "getDeviceIDs"); // Context cl_context_properties properties[] = { CL_CONTEXT_PLATFORM, (cl_context_properties)(m_platforms[0]), 0}; m_context = clCreateContextFromType(properties, CL_DEVICE_TYPE_GPU, NULL, NULL, NULL); // m_private->m_context = clCreateContext(properties, 1, &m_private->m_device, NULL, NULL, &error); checkOpenCLErr(error, "Create context"); // Command-queue m_queue = clCreateCommandQueue(m_context, m_device, 0, &error); checkOpenCLErr(error, "Create command queue"); //Build program and kernel const char* source = "#pragma OPENCL EXTENSION cl_khr_byte_addressable_store : enable\n" "\n" "__kernel void bufToImage(__global unsigned char* in, __write_only image2d_t out, const unsigned int offset_x, const unsigned int image_width , const unsigned int maxval ){\n" "\tint i = 
get_global_id(0);\n" "\tint j = get_global_id(1);\n" "\tint width = get_global_size(0);\n" "\tint height = get_global_size(1);\n" "\n" "\tint pos = j*image_width*3+(offset_x+i)*3;\n" "\tif( maxval < 256 ){\n" "\t\tfloat4 c = (float4)(in[pos],in[pos+1],in[pos+2],1.0f);\n" "\t\tc.x /= maxval;\n" "\t\tc.y /= maxval;\n" "\t\tc.z /= maxval;\n" "\t\twrite_imagef(out, (int2)(i,j), c);\n" "\t}else{\n" "\t\tfloat4 c = (float4)(255.0f*in[2*pos]+in[2*pos+1],255.0f*in[2*pos+2]+in[2*pos+3],255.0f*in[2*pos+4]+in[2*pos+5],1.0f);\n" "\t\tc.x /= maxval;\n" "\t\tc.y /= maxval;\n" "\t\tc.z /= maxval;\n" "\t\twrite_imagef(out, (int2)(i,j), c);\n" "\t}\n" "}\n" "\n" "__constant sampler_t imageSampler = CLK_NORMALIZED_COORDS_FALSE | CLK_ADDRESS_CLAMP_TO_EDGE | CLK_FILTER_NEAREST;\n" "\n" "__kernel void imageToBuf(__read_only image2d_t in, __global unsigned char* out, const unsigned int offset_x, const unsigned int image_width ){\n" "\tint i = get_global_id(0);\n" "\tint j = get_global_id(1);\n" "\tint pos = j*image_width*3+(offset_x+i)*3;\n" "\tfloat4 c = read_imagef(in, imageSampler, (int2)(i,j));\n" "\tif( c.x <= 1.0f && c.y <= 1.0f && c.z <= 1.0f ){\n" "\t\tout[pos] = c.x*255.0f;\n" "\t\tout[pos+1] = c.y*255.0f;\n" "\t\tout[pos+2] = c.z*255.0f;\n" "\t}else{\n" "\t\tout[pos] = 200.0f;\n" "\t\tout[pos+1] = 0.0f;\n" "\t\tout[pos+2] = 255.0f;\n" "\t}\n" "}\n"; cl_int err; cl_program prog = clCreateProgramWithSource(m_context,1,&source,NULL,&err); if( -err != CL_SUCCESS ) throw std::string("clCreateProgramWithSources"); err = clBuildProgram(prog,0,NULL,"-cl-opt-disable",NULL,NULL); if( -err != CL_SUCCESS ) throw std::string("clBuildProgram(fromSources)"); cl_kernel kernel = clCreateKernel(prog,"bufToImage",&err); checkOpenCLErr(err,"CreateKernel"); cl_uint imageWidth = 8; cl_uint imageHeight = 9; //Initialize datas cl_uint maxVal = 255; cl_uint offsetX = 0; int size = imageWidth*imageHeight*3; int resSize = imageWidth*imageHeight*4; cl_uchar* data = new cl_uchar[size]; cl_float* expectedData = new cl_float[resSize]; for( int i = 0,j=0; i < size; i++,j++ ){ data[i] = (cl_uchar)i; expectedData[j] = (cl_float)i/255.0f; if ( i%3 == 2 ){ j++; expectedData[j] = 1.0f; } } cl_mem inBuffer = clCreateBuffer(m_context,CL_MEM_READ_ONLY|CL_MEM_COPY_HOST_PTR,size*sizeof(cl_uchar),data,&err); checkOpenCLErr(err, "clCreateBuffer()"); clFinish(m_queue); cl_image_format imgFormat; imgFormat.image_channel_order = CL_RGBA; imgFormat.image_channel_data_type = CL_FLOAT; cl_mem outImg = clCreateImage2D( m_context, CL_MEM_READ_WRITE, &imgFormat, imageWidth, imageHeight, 0, NULL, &err ); checkOpenCLErr(err,"get2DImage()"); clFinish(m_queue); size_t kernelRegion[]={imageWidth,imageHeight}; size_t kernelWorkgroup[]={1,1}; //Fill kernel with data clSetKernelArg(kernel,0,sizeof(cl_mem),&inBuffer); clSetKernelArg(kernel,1,sizeof(cl_mem),&outImg); clSetKernelArg(kernel,2,sizeof(cl_uint),&offsetX); clSetKernelArg(kernel,3,sizeof(cl_uint),&imageWidth); clSetKernelArg(kernel,4,sizeof(cl_uint),&maxVal); //Run kernel err = clEnqueueNDRangeKernel(m_queue,kernel,2,NULL,kernelRegion,kernelWorkgroup,0,NULL,NULL); checkOpenCLErr(err,"RunKernel"); clFinish(m_queue); //Check resulting data for validty cl_float* computedData = new cl_float[resSize];; size_t region[]={imageWidth,imageHeight,1}; const size_t offset[] = {0,0,0}; err = clEnqueueReadImage(m_queue,outImg,CL_TRUE,offset,region,0,0,computedData,0,NULL,NULL); checkOpenCLErr(err, "readDataFromImage()"); clFinish(m_queue); for( int i = 0; i < resSize; i++ ){ if( 
fabs(expectedData[i]-computedData[i])>0.1 ){ std::cout << "Expected: \n"; for( int j = 0; j < resSize; j++ ){ std::cout << expectedData[j] << " "; } std::cout << "\nComputed: \n"; std::cout << "\n"; for( int j = 0; j < resSize; j++ ){ std::cout << computedData[j] << " "; } std::cout << "\n"; throw std::string("Error, computed and expected data are not the same!\n"); } } }catch(std::string& e){ std::cout << "\nCaught an exception: " << e << "\n"; return 1; } std::cout << "Works fine\n"; return 0; } I have also uploaded the source code to make it easier to test: http://www.file-upload.net/download-3513797/strangeOpenCLError.cpp.html Can you please tell me if I have done anything wrong? Is there a mistake in the code, or is this a bug in my driver? Best regards, Alex
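
    One diagnostic step the reduced code skips is printing the build log after clBuildProgram: on a kernel that misbehaves only on one vendor's compiler, the NVIDIA log often reports warnings the AMD compiler does not. A small helper along these lines uses only standard OpenCL calls and the headers already included above; prog and device correspond to prog and m_device in the code:

        // Print the compiler output for a program, e.g. right after clBuildProgram.
        void printBuildLog(cl_program prog, cl_device_id device){
            size_t logSize = 0;
            clGetProgramBuildInfo(prog, device, CL_PROGRAM_BUILD_LOG, 0, NULL, &logSize);
            std::string log(logSize, '\0');
            clGetProgramBuildInfo(prog, device, CL_PROGRAM_BUILD_LOG, logSize, &log[0], NULL);
            std::cout << "Build log:\n" << log << std::endl;
        }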

  • Null reading in stream images? Unable to start activity ComponentInfo

    - by lasmith
    I have reviewed a lot of similar questions regarding not being able to launch an activity but they don't seem to quite match my problem. I am working on a simple black jack game but its force quitting. I suspect there is a problem with loading up the card png images I have. Stepping through the debugger it crashes right while in the resetGame() function. I'm sure I am doing something dumb. My Logcat: 10-15 20:21:43.309: E/AndroidRuntime(2863): FATAL EXCEPTION: main 10-15 20:21:43.309: E/AndroidRuntime(2863): java.lang.RuntimeException: Unable to start activity ComponentInfo{com.smith.blackjack/com.smith.blackjack.Main}: java.lang.NullPointerException 10-15 20:21:43.309: E/AndroidRuntime(2863): at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2059) 10-15 20:21:43.309: E/AndroidRuntime(2863): at android.app.ActivityThread.handleLaunchActivity(ActivityThread.java:2084) 10-15 20:21:43.309: E/AndroidRuntime(2863): at android.app.ActivityThread.access$600(ActivityThread.java:130) 10-15 20:21:43.309: E/AndroidRuntime(2863): at android.app.ActivityThread$H.handleMessage(ActivityThread.java:1195) 10-15 20:21:43.309: E/AndroidRuntime(2863): at android.os.Handler.dispatchMessage(Handler.java:99) 10-15 20:21:43.309: E/AndroidRuntime(2863): at android.os.Looper.loop(Looper.java:137) 10-15 20:21:43.309: E/AndroidRuntime(2863): at android.app.ActivityThread.main(ActivityThread.java:4745) 10-15 20:21:43.309: E/AndroidRuntime(2863): at java.lang.reflect.Method.invokeNative(Native Method) 10-15 20:21:43.309: E/AndroidRuntime(2863): at java.lang.reflect.Method.invoke(Method.java:511) 10-15 20:21:43.309: E/AndroidRuntime(2863): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:786) 10-15 20:21:43.309: E/AndroidRuntime(2863): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:553) 10-15 20:21:43.309: E/AndroidRuntime(2863): at dalvik.system.NativeStart.main(Native Method) 10-15 20:21:43.309: E/AndroidRuntime(2863): Caused by: java.lang.NullPointerException 10-15 20:21:43.309: E/AndroidRuntime(2863): at com.smith.blackjack.DeckOfCards.<init>(DeckOfCards.java:17) 10-15 20:21:43.309: E/AndroidRuntime(2863): at com.smith.blackjack.Main.resetGame(Main.java:98) 10-15 20:21:43.309: E/AndroidRuntime(2863): at com.smith.blackjack.Main.onCreate(Main.java:67) 10-15 20:21:43.309: E/AndroidRuntime(2863): at android.app.Activity.performCreate(Activity.java:5008) 10-15 20:21:43.309: E/AndroidRuntime(2863): at android.app.Instrumentation.callActivityOnCreate(Instrumentation.java:1079) 10-15 20:21:43.309: E/AndroidRuntime(2863): at android.app.ActivityThread.performLaunchActivity(ActivityThread.java:2023) 10-15 20:21:43.309: E/AndroidRuntime(2863): ... 
11 more My androidmanifest: <manifest xmlns:android="http://schemas.android.com/apk/res/android" package="com.smith.blackjack" android:versionCode="1" android:versionName="1.0" > <uses-sdk android:minSdkVersion="11" android:targetSdkVersion="15" /> <application android:icon="@drawable/ic_launcher" android:label="@string/app_name" android:theme="@style/AppTheme" > <activity android:name=".Main" android:label="@string/title_activity_main" > <intent-filter> <action android:name="android.intent.action.MAIN" /> <category android:name="android.intent.category.LAUNCHER" /> </intent-filter> </activity> </application> Here is my Main.java package com.smith.blackjack; import android.os.Bundle; import android.app.Activity; import android.content.res.AssetManager; import android.graphics.drawable.Drawable; import java.io.IOException; import java.io.InputStream; import android.util.Log; import android.view.View; import android.view.View.OnClickListener; import android.widget.Button; import android.widget.ImageView; public class Main extends Activity { private ImageView dealerCard0; private ImageView dealerCard1; private ImageView dealerCard2; private ImageView dealerCard3; private ImageView playerCard0; private ImageView playerCard1; private ImageView playerCard2; private ImageView playerCard3; private ImageView imgResult; private Button btnDeal; private Button btnDraw; private Button btnHold; private DeckOfCards deckOfCards; private int[] dealerValues; private int dealerSum; private int dealerCardNumber; private int[] playerValues; private int playerSum; private int playerCardNumber; private InputStream dealerHiddenCard; private Card dealerCard; @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.main); dealerCard0 = (ImageView) findViewById(R.id.dealerCard0); dealerCard1 = (ImageView) findViewById(R.id.dealerCard1); dealerCard2 = (ImageView) findViewById(R.id.dealerCard2); dealerCard3 = (ImageView) findViewById(R.id.dealerCard3); playerCard0 = (ImageView) findViewById(R.id.playerCard0); playerCard1 = (ImageView) findViewById(R.id.playerCard1); playerCard2 = (ImageView) findViewById(R.id.playerCard2); playerCard3 = (ImageView) findViewById(R.id.playerCard3); imgResult = (ImageView) findViewById(R.id.imgResult); btnDeal = (Button) findViewById(R.id.deal); btnDraw = (Button) findViewById(R.id.draw); btnHold = (Button) findViewById(R.id.hold); btnDeal.setOnClickListener(btnDealListener); btnDraw.setOnClickListener(btnDrawListener); btnHold.setOnClickListener(btnHoldListener); resetGame(); } private void resetGame(){ AssetManager assets = getAssets(); dealerValues = new int[4]; playerValues = new int[4]; dealerSum = 0; playerSum = 0; dealerCardNumber = 0; playerCardNumber = 0; for (int i = 0; i < 4; i++) { dealerValues[i] = 0; playerValues[i] = 0; } try { InputStream stream = assets.open("cardback.png"); // stream = assets.open("cardback.png"); Drawable cardImage = Drawable.createFromStream(stream, null); dealerCard0.setImageDrawable(cardImage); dealerCard1.setImageDrawable(cardImage); dealerCard2.setImageDrawable(cardImage); dealerCard3.setImageDrawable(cardImage); playerCard0.setImageDrawable(cardImage); playerCard1.setImageDrawable(cardImage); playerCard2.setImageDrawable(cardImage); playerCard3.setImageDrawable(cardImage); imgResult.setImageDrawable(cardImage); deckOfCards = new DeckOfCards(); deckOfCards.shuffle(); assets.close(); } catch (IOException e){ Log.e("Reset Game", "Error Loading", e); } } public OnClickListener 
btnDealListener = new OnClickListener() { // @Override public void onClick(View v) { try { AssetManager assets = getAssets(); InputStream stream; // first player card Card newCard; newCard = deckOfCards.dealCard(); playerValues[playerCardNumber] = newCard.faceValue; playerCardNumber++; stream = assets.open(newCard.File); Drawable cardImage = Drawable.createFromStream(stream, newCard.File); playerCard0.setImageDrawable(cardImage); assets.close(); // second player card newCard = deckOfCards.dealCard(); playerValues[playerCardNumber] = newCard.faceValue; playerCardNumber++; stream = assets.open(newCard.File); cardImage = Drawable.createFromStream(stream, newCard.File); playerCard1.setImageDrawable(cardImage); assets.close(); // first dealer card hidden newCard = deckOfCards.dealCard(); dealerCard = newCard; dealerValues[dealerCardNumber] = newCard.faceValue; dealerCardNumber++; dealerHiddenCard = assets.open(newCard.File); stream = assets.open("cardback.png"); cardImage = Drawable.createFromStream(stream, "cardback"); dealerCard0.setImageDrawable(cardImage); assets.close(); // second dealer card open newCard = deckOfCards.dealCard(); dealerValues[dealerCardNumber] = newCard.faceValue; dealerCardNumber++; stream = assets.open(newCard.File); cardImage = Drawable.createFromStream(stream, newCard.File); dealerCard1.setImageDrawable(cardImage); assets.close(); } catch (IOException e){ Log.e("Deal", "Error Loading", e); } }; }; public OnClickListener btnDrawListener = new OnClickListener() { // @Override public void onClick(View v) { try { AssetManager assets = getAssets(); InputStream stream; // get next player card Card newCard; newCard = deckOfCards.dealCard(); playerValues[playerCardNumber] = newCard.faceValue; playerCardNumber++; stream = assets.open(newCard.File); Drawable cardImage = Drawable.createFromStream(stream, newCard.File); switch (playerCardNumber){ case 3: playerCard2.setImageDrawable(cardImage); case 4: playerCard3.setImageDrawable(cardImage); } assets.close(); } catch (IOException e){ Log.e("Draw", "Error Loading", e); } }; }; public OnClickListener btnHoldListener = new OnClickListener() { // @Override public void onClick(View v) { Drawable cardImage; // evaluate player hand playerSum = evaluate(playerValues); if (playerSum > 21){ // player losses } // flip over the dealer hidden card cardImage = Drawable.createFromStream(dealerHiddenCard, dealerCard.File); Card newCard; InputStream stream; AssetManager assets = getAssets(); for (int i=2; i<4; i++){ dealerSum = evaluate(dealerValues); if (dealerSum < 16 ) { newCard = deckOfCards.dealCard(); dealerValues[dealerCardNumber] = newCard.faceValue; dealerCardNumber++; try { stream = assets.open(newCard.File); cardImage = Drawable.createFromStream(stream, newCard.File); switch (dealerCardNumber){ case 3: dealerCard2.setImageDrawable(cardImage); case 4: dealerCard3.setImageDrawable(cardImage); } assets.close(); } catch (IOException e){ Log.e("Draw", "Error Loading", e); } if (dealerSum < playerSum) { // player wins } if (dealerSum > playerSum){ // dealer wins } if (dealerSum == playerSum){ // it is a draw } } } }; }; public int evaluate (int[]values) { int sumCards = 0; for (int i = 0; i < 4; i++){ sumCards += values[i]; } if (sumCards > 21) { for (int i = 0; i < 4; i++){ if (values[i] == 11) { values[i] = 1; sumCards -= 10; continue; } } } return sumCards; } } My DeckOfCards class: package com.smith.blackjack; import java.util.Random; public class DeckOfCards { private Card [] deck; private int currentCard; private static final int 
NUMBER_OF_CARDS = 52; private static final Random randomNumbers = new Random(); public DeckOfCards () { deck = new Card[NUMBER_OF_CARDS]; currentCard = 0 ; for(int count = 0; count < deck.length; count++) { deck[count].faceValue = count + 1; } } public void shuffle () { currentCard = 0; for (int first = 0; first < deck.length; first ++){ int second = randomNumbers.nextInt(NUMBER_OF_CARDS); int temp = deck[first].faceValue; deck[first].faceValue=deck[second].faceValue; deck[second].faceValue = temp; } } public Card dealCard(){ Card temp = new Card(); temp.faceValue = 0; temp.File = ""; if(currentCard < deck.length) { temp.faceValue = deck[currentCard].faceValue / 4; int suit = deck[currentCard].faceValue % 4; String suitString = ""; switch (suit){ case 0: suitString = "c"; case 1: suitString = "d"; case 2: suitString = "h"; case 3: suitString = "s"; } Integer face = temp.faceValue / 4 ; String faceString = face.toString(); temp.File = faceString + suitString + ".png"; switch (temp.faceValue){ case 11: temp.faceValue = 10; case 12: temp.faceValue = 10; case 13: temp.faceValue = 10; } return temp; } else return temp; } }
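
    The stack trace points at DeckOfCards.<init> line 17, and that matches the constructor above: new Card[NUMBER_OF_CARDS] only allocates an array of null references, so deck[count].faceValue dereferences null on the first pass through the loop. A sketch of the constructor with each slot instantiated first (Card evidently has a no-argument constructor, since dealCard() already calls new Card()):

        public DeckOfCards() {
            deck = new Card[NUMBER_OF_CARDS];
            currentCard = 0;
            for (int count = 0; count < deck.length; count++) {
                // The array starts out full of nulls, so create the Card
                // object before assigning to its faceValue field.
                deck[count] = new Card();
                deck[count].faceValue = count + 1;
            }
        }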

  • How to save/retrieve words to/from an SQLite database?

    - by user998032
    Sorry if I repeat my question but I have still had no clues of what to do and how to deal with the question. My app is a dictionary. I assume that users will need to add words that they want to memorise to a Favourite list. Thus, I created a Favorite button that works on two phases: short-click to save the currently-view word into the Favourite list; and long-click to view the Favourite list so that users can click on any words to look them up again. I go for using a SQlite database to store the favourite words but I wonder how I can do this task. Specifically, my questions are: Should I use the current dictionary SQLite database or create a new SQLite database to favorite words? In each case, what codes do I have to write to cope with the mentioned task? Could anyone there kindly help? Here is the dictionary code: package mydict.app; import java.util.ArrayList; import android.database.Cursor; import android.database.sqlite.SQLiteDatabase; import android.database.sqlite.SQLiteException; import android.util.Log; public class DictionaryEngine { static final private String SQL_TAG = "[MyAppName - DictionaryEngine]"; private SQLiteDatabase mDB = null; private String mDBName; private String mDBPath; //private String mDBExtension; public ArrayList<String> lstCurrentWord = null; public ArrayList<String> lstCurrentContent = null; //public ArrayAdapter<String> adapter = null; public DictionaryEngine() { lstCurrentContent = new ArrayList<String>(); lstCurrentWord = new ArrayList<String>(); } public DictionaryEngine(String basePath, String dbName, String dbExtension) { //mDBExtension = getResources().getString(R.string.dbExtension); //mDBExtension = dbExtension; lstCurrentContent = new ArrayList<String>(); lstCurrentWord = new ArrayList<String>(); this.setDatabaseFile(basePath, dbName, dbExtension); } public boolean setDatabaseFile(String basePath, String dbName, String dbExtension) { if (mDB != null) { if (mDB.isOpen() == true) // Database is already opened { if (basePath.equals(mDBPath) && dbName.equals(mDBName)) // the opened database has the same name and path -> do nothing { Log.i(SQL_TAG, "Database is already opened!"); return true; } else { mDB.close(); } } } String fullDbPath=""; try { fullDbPath = basePath + dbName + "/" + dbName + dbExtension; mDB = SQLiteDatabase.openDatabase(fullDbPath, null, SQLiteDatabase.OPEN_READWRITE|SQLiteDatabase.NO_LOCALIZED_COLLATORS); } catch (SQLiteException ex) { ex.printStackTrace(); Log.i(SQL_TAG, "There is no valid dictionary database " + dbName +" at path " + basePath); return false; } if (mDB == null) { return false; } this.mDBName = dbName; this.mDBPath = basePath; Log.i(SQL_TAG,"Database " + dbName + " is opened!"); return true; } public void getWordList(String word) { String query; // encode input String wordEncode = Utility.encodeContent(word); if (word.equals("") || word == null) { query = "SELECT id,word FROM " + mDBName + " LIMIT 0,15" ; } else { query = "SELECT id,word FROM " + mDBName + " WHERE word >= '"+wordEncode+"' LIMIT 0,15"; } //Log.i(SQL_TAG, "query = " + query); Cursor result = mDB.rawQuery(query,null); int indexWordColumn = result.getColumnIndex("Word"); int indexContentColumn = result.getColumnIndex("Content"); if (result != null) { int countRow=result.getCount(); Log.i(SQL_TAG, "countRow = " + countRow); lstCurrentWord.clear(); lstCurrentContent.clear(); if (countRow >= 1) { result.moveToFirst(); String strWord = Utility.decodeContent(result.getString(indexWordColumn)); String strContent = 
Utility.decodeContent(result.getString(indexContentColumn)); lstCurrentWord.add(0,strWord); lstCurrentContent.add(0,strContent); int i = 0; while (result.moveToNext()) { strWord = Utility.decodeContent(result.getString(indexWordColumn)); strContent = Utility.decodeContent(result.getString(indexContentColumn)); lstCurrentWord.add(i,strWord); lstCurrentContent.add(i,strContent); i++; } } result.close(); } } public Cursor getCursorWordList(String word) { String query; // encode input String wordEncode = Utility.encodeContent(word); if (word.equals("") || word == null) { query = "SELECT id,word FROM " + mDBName + " LIMIT 0,15" ; } else { query = "SELECT id,content,word FROM " + mDBName + " WHERE word >= '"+wordEncode+"' LIMIT 0,15"; } //Log.i(SQL_TAG, "query = " + query); Cursor result = mDB.rawQuery(query,null); return result; } public Cursor getCursorContentFromId(int wordId) { String query; // encode input if (wordId <= 0) { return null; } else { query = "SELECT id,content,word FROM " + mDBName + " WHERE Id = " + wordId ; } //Log.i(SQL_TAG, "query = " + query); Cursor result = mDB.rawQuery(query,null); return result; } public Cursor getCursorContentFromWord(String word) { String query; // encode input if (word == null || word.equals("")) { return null; } else { query = "SELECT id,content,word FROM " + mDBName + " WHERE word = '" + word + "' LIMIT 0,1"; } //Log.i(SQL_TAG, "query = " + query); Cursor result = mDB.rawQuery(query,null); return result; } public void closeDatabase() { mDB.close(); } public boolean isOpen() { return mDB.isOpen(); } public boolean isReadOnly() { return mDB.isReadOnly(); } } And here is the code below the Favourite button to save to and load the Favourite list: btnAddFavourite = (ImageButton) findViewById(R.id.btnAddFavourite); btnAddFavourite.setOnClickListener(new View.OnClickListener() { @Override public void onClick(View v) { // Add code here to save the favourite, e.g. in the db. Toast toast = Toast.makeText(ContentView.this, R.string.messageWordAddedToFarvourite, Toast.LENGTH_SHORT); toast.show(); } }); btnAddFavourite.setOnLongClickListener(new View.OnLongClickListener() { @Override public boolean onLongClick(View v) { // Open the favourite Activity, which in turn will fetch the saved favourites, to show them. Intent intent = new Intent(getApplicationContext(), FavViewFavourite.class); intent.setFlags(Intent.FLAG_ACTIVITY_NEW_TASK); getApplicationContext().startActivity(intent); return false; } }); }
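
    For the Favourite list itself, a separate table in the same database file is usually enough: the dictionary table stays read-only and the favourites table only stores the saved words. A sketch of helpers that could live inside DictionaryEngine (the table and column names are made up for illustration, and android.content.ContentValues needs to be imported):

        // Create the table once, e.g. right after the database is opened.
        public void createFavouriteTable() {
            mDB.execSQL("CREATE TABLE IF NOT EXISTS favourite ("
                    + "id INTEGER PRIMARY KEY AUTOINCREMENT, "
                    + "word TEXT UNIQUE NOT NULL)");
        }

        // Short click on the button: store the currently viewed word.
        public void addFavourite(String word) {
            ContentValues values = new ContentValues();
            values.put("word", word);
            // CONFLICT_IGNORE keeps a second click on the same word from throwing.
            mDB.insertWithOnConflict("favourite", null, values, SQLiteDatabase.CONFLICT_IGNORE);
        }

        // Long click: feed this cursor to the Favourite list activity.
        public Cursor getFavourites() {
            return mDB.rawQuery("SELECT id, word FROM favourite ORDER BY word", null);
        }

    The short-click handler would then call something like addFavourite(currentWord) before showing the Toast, and the Favourite activity can iterate the cursor returned by getFavourites().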

  • WPF Unity: Activation error occurred while trying to get instance of type

    - by Traci
    I am getting the following error when trying to Initialise the Module using Unity and Prism. The DLL is found by return new DirectoryModuleCatalog() { ModulePath = @".\Modules" }; The dll is found and the Name is Found #region Constructors public AdminModule( IUnityContainer container, IScreenFactoryRegistry screenFactoryRegistry, IEventAggregator eventAggregator, IBusyService busyService ) : base(container, screenFactoryRegistry) { this.EventAggregator = eventAggregator; this.BusyService = busyService; } #endregion #region Properties protected IEventAggregator EventAggregator { get; set; } protected IBusyService BusyService { get; set; } #endregion public override void Initialize() { base.Initialize(); } #region Register Screen Factories protected override void RegisterScreenFactories() { this.ScreenFactoryRegistry.Register(ScreenKeyType.ApplicationAdmin, typeof(AdminScreenFactory)); } #endregion #region Register Views and Various Services protected override void RegisterViewsAndServices() { //View Models this.Container.RegisterType<IAdminViewModel, AdminViewModel>(); } #endregion the code that produces the error is: namespace Microsoft.Practices.Composite.Modularity protected virtual IModule CreateModule(string typeName) { Type moduleType = Type.GetType(typeName); if (moduleType == null) { throw new ModuleInitializeException(string.Format(CultureInfo.CurrentCulture, Properties.Resources.FailedToGetType, typeName)); } return (IModule)this.serviceLocator.GetInstance(moduleType); <-- Error Here } Can Anyone Help Me Error Log Below: General Information Additional Info: ExceptionManager.MachineName: xxxxx ExceptionManager.TimeStamp: 22/02/2010 10:16:55 AM ExceptionManager.FullName: Microsoft.ApplicationBlocks.ExceptionManagement, Version=1.0.3591.32238, Culture=neutral, PublicKeyToken=null ExceptionManager.AppDomainName: Infinity.vshost.exe ExceptionManager.ThreadIdentity: ExceptionManager.WindowsIdentity: xxxxx 1) Exception Information Exception Type: Microsoft.Practices.Composite.Modularity.ModuleInitializeException ModuleName: AdminModule Message: An exception occurred while initializing module 'AdminModule'. - The exception message was: Activation error occured while trying to get instance of type AdminModule, key "" Check the InnerException property of the exception for more information. If the exception occurred while creating an object in a DI container, you can exception.GetRootException() to help locate the root cause of the problem. 
Data: System.Collections.ListDictionaryInternal TargetSite: Void HandleModuleInitializationError(Microsoft.Practices.Composite.Modularity.ModuleInfo, System.String, System.Exception) HelpLink: NULL Source: Microsoft.Practices.Composite StackTrace Information at Microsoft.Practices.Composite.Modularity.ModuleInitializer.HandleModuleInitializationError(ModuleInfo moduleInfo, String assemblyName, Exception exception) at Microsoft.Practices.Composite.Modularity.ModuleInitializer.Initialize(ModuleInfo moduleInfo) at Microsoft.Practices.Composite.Modularity.ModuleManager.InitializeModule(ModuleInfo moduleInfo) at Microsoft.Practices.Composite.Modularity.ModuleManager.LoadModulesThatAreReadyForLoad() at Microsoft.Practices.Composite.Modularity.ModuleManager.OnModuleTypeLoaded(ModuleInfo typeLoadedModuleInfo, Exception error) at Microsoft.Practices.Composite.Modularity.FileModuleTypeLoader.BeginLoadModuleType(ModuleInfo moduleInfo, ModuleTypeLoadedCallback callback) at Microsoft.Practices.Composite.Modularity.ModuleManager.BeginRetrievingModule(ModuleInfo moduleInfo) at Microsoft.Practices.Composite.Modularity.ModuleManager.LoadModuleTypes(IEnumerable`1 moduleInfos) at Microsoft.Practices.Composite.Modularity.ModuleManager.LoadModulesWhenAvailable() at Microsoft.Practices.Composite.Modularity.ModuleManager.Run() at Microsoft.Practices.Composite.UnityExtensions.UnityBootstrapper.InitializeModules() at Infinity.Bootstrapper.InitializeModules() in D:\Projects\dotNet\Infinity\source\Inifinty\Infinity\Application Modules\BootStrapper.cs:line 75 at Microsoft.Practices.Composite.UnityExtensions.UnityBootstrapper.Run(Boolean runWithDefaultConfiguration) at Microsoft.Practices.Composite.UnityExtensions.UnityBootstrapper.Run() at Infinity.App.Application_Startup(Object sender, StartupEventArgs e) in D:\Projects\dotNet\Infinity\source\Inifinty\Infinity\App.xaml.cs:line 37 at System.Windows.Application.OnStartup(StartupEventArgs e) at System.Windows.Application.<.ctorb__0(Object unused) at System.Windows.Threading.ExceptionWrapper.InternalRealCall(Delegate callback, Object args, Boolean isSingleParameter) at System.Windows.Threading.ExceptionWrapper.TryCatchWhen(Object source, Delegate callback, Object args, Boolean isSingleParameter, Delegate catchHandler) 2) Exception Information Exception Type: Microsoft.Practices.ServiceLocation.ActivationException Message: Activation error occured while trying to get instance of type AdminModule, key "" Data: System.Collections.ListDictionaryInternal TargetSite: System.Object GetInstance(System.Type, System.String) HelpLink: NULL Source: Microsoft.Practices.ServiceLocation StackTrace Information at Microsoft.Practices.ServiceLocation.ServiceLocatorImplBase.GetInstance(Type serviceType, String key) at Microsoft.Practices.ServiceLocation.ServiceLocatorImplBase.GetInstance(Type serviceType) at Microsoft.Practices.Composite.Modularity.ModuleInitializer.CreateModule(String typeName) at Microsoft.Practices.Composite.Modularity.ModuleInitializer.Initialize(ModuleInfo moduleInfo) 3) Exception Information Exception Type: Microsoft.Practices.Unity.ResolutionFailedException TypeRequested: AdminModule NameRequested: NULL Message: Resolution of the dependency failed, type = "Infinity.Modules.Admin.AdminModule", name = "". 
Exception message is: The current build operation (build key Build Key[Infinity.Modules.Admin.AdminModule, null]) failed: The parameter screenFactoryRegistry could not be resolved when attempting to call constructor Infinity.Modules.Admin.AdminModule(Microsoft.Practices.Unity.IUnityContainer container, PhoenixIT.IScreenFactoryRegistry screenFactoryRegistry, Microsoft.Practices.Composite.Events.IEventAggregator eventAggregator, PhoenixIT.IBusyService busyService). (Strategy type BuildPlanStrategy, index 3) Data: System.Collections.ListDictionaryInternal TargetSite: System.Object DoBuildUp(System.Type, System.Object, System.String) HelpLink: NULL Source: Microsoft.Practices.Unity StackTrace Information at Microsoft.Practices.Unity.UnityContainer.DoBuildUp(Type t, Object existing, String name) at Microsoft.Practices.Unity.UnityContainer.DoBuildUp(Type t, String name) at Microsoft.Practices.Unity.UnityContainer.Resolve(Type t, String name) at Microsoft.Practices.Composite.UnityExtensions.UnityServiceLocatorAdapter.DoGetInstance(Type serviceType, String key) at Microsoft.Practices.ServiceLocation.ServiceLocatorImplBase.GetInstance(Type serviceType, String key) 4) Exception Information Exception Type: Microsoft.Practices.ObjectBuilder2.BuildFailedException ExecutingStrategyTypeName: BuildPlanStrategy ExecutingStrategyIndex: 3 BuildKey: Build Key[Infinity.Modules.Admin.AdminModule, null] Message: The current build operation (build key Build Key[Infinity.Modules.Admin.AdminModule, null]) failed: The parameter screenFactoryRegistry could not be resolved when attempting to call constructor Infinity.Modules.Admin.AdminModule(Microsoft.Practices.Unity.IUnityContainer container, PhoenixIT.IScreenFactoryRegistry screenFactoryRegistry, Microsoft.Practices.Composite.Events.IEventAggregator eventAggregator, PhoenixIT.IBusyService busyService). (Strategy type BuildPlanStrategy, index 3) Data: System.Collections.ListDictionaryInternal TargetSite: System.Object ExecuteBuildUp(Microsoft.Practices.ObjectBuilder2.IBuilderContext) HelpLink: NULL Source: Microsoft.Practices.ObjectBuilder2 StackTrace Information at Microsoft.Practices.ObjectBuilder2.StrategyChain.ExecuteBuildUp(IBuilderContext context) at Microsoft.Practices.ObjectBuilder2.Builder.BuildUp(IReadWriteLocator locator, ILifetimeContainer lifetime, IPolicyList policies, IStrategyChain strategies, Object buildKey, Object existing) at Microsoft.Practices.Unity.UnityContainer.DoBuildUp(Type t, Object existing, String name) 5) Exception Information Exception Type: System.InvalidOperationException Message: The parameter screenFactoryRegistry could not be resolved when attempting to call constructor Infinity.Modules.Admin.AdminModule(Microsoft.Practices.Unity.IUnityContainer container, PhoenixIT.IScreenFactoryRegistry screenFactoryRegistry, Microsoft.Practices.Composite.Events.IEventAggregator eventAggregator, PhoenixIT.IBusyService busyService). 
Data: System.Collections.ListDictionaryInternal TargetSite: Void ThrowForResolutionFailed(System.Exception, System.String, System.String, Microsoft.Practices.ObjectBuilder2.IBuilderContext) HelpLink: NULL Source: Microsoft.Practices.ObjectBuilder2 StackTrace Information at Microsoft.Practices.ObjectBuilder2.DynamicMethodConstructorStrategy.ThrowForResolutionFailed(Exception inner, String parameterName, String constructorSignature, IBuilderContext context) at BuildUp_Infinity.Modules.Admin.AdminModule(IBuilderContext ) at Microsoft.Practices.ObjectBuilder2.DynamicMethodBuildPlan.BuildUp(IBuilderContext context) at Microsoft.Practices.ObjectBuilder2.BuildPlanStrategy.PreBuildUp(IBuilderContext context) at Microsoft.Practices.ObjectBuilder2.StrategyChain.ExecuteBuildUp(IBuilderContext context) 6) Exception Information Exception Type: Microsoft.Practices.ObjectBuilder2.BuildFailedException ExecutingStrategyTypeName: BuildPlanStrategy ExecutingStrategyIndex: 3 BuildKey: Build Key[PhoenixIT.IScreenFactoryRegistry, null] Message: The current build operation (build key Build Key[PhoenixIT.IScreenFactoryRegistry, null]) failed: The current type, PhoenixIT.IScreenFactoryRegistry, is an interface and cannot be constructed. Are you missing a type mapping? (Strategy type BuildPlanStrategy, index 3) Data: System.Collections.ListDictionaryInternal TargetSite: System.Object ExecuteBuildUp(Microsoft.Practices.ObjectBuilder2.IBuilderContext) HelpLink: NULL Source: Microsoft.Practices.ObjectBuilder2 StackTrace Information at Microsoft.Practices.ObjectBuilder2.StrategyChain.ExecuteBuildUp(IBuilderContext context) at Microsoft.Practices.Unity.ObjectBuilder.NamedTypeDependencyResolverPolicy.Resolve(IBuilderContext context) at BuildUp_Infinity.Modules.Admin.AdminModule(IBuilderContext ) 7) Exception Information Exception Type: System.InvalidOperationException Message: The current type, PhoenixIT.IScreenFactoryRegistry, is an interface and cannot be constructed. Are you missing a type mapping? Data: System.Collections.ListDictionaryInternal TargetSite: Void ThrowForAttemptingToConstructInterface(Microsoft.Practices.ObjectBuilder2.IBuilderContext) HelpLink: NULL Source: Microsoft.Practices.ObjectBuilder2 StackTrace Information at Microsoft.Practices.ObjectBuilder2.DynamicMethodConstructorStrategy.ThrowForAttemptingToConstructInterface(IBuilderContext context) at BuildUp_PhoenixIT.IScreenFactoryRegistry(IBuilderContext ) at Microsoft.Practices.ObjectBuilder2.DynamicMethodBuildPlan.BuildUp(IBuilderContext context) at Microsoft.Practices.ObjectBuilder2.BuildPlanStrategy.PreBuildUp(IBuilderContext context) at Microsoft.Practices.ObjectBuilder2.StrategyChain.ExecuteBuildUp(IBuilderContext context) For more information, see Help and Support Center at http://go.microsoft.com/fwlink/events.asp.
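
    The innermost exception is the real clue: "The current type, PhoenixIT.IScreenFactoryRegistry, is an interface and cannot be constructed. Are you missing a type mapping?" The container has no registration for IScreenFactoryRegistry, so it cannot build the second constructor parameter of AdminModule. A sketch of the kind of mapping that would normally go into the bootstrapper before the modules are initialized (ScreenFactoryRegistry and BusyService are placeholders for whatever concrete classes implement these interfaces):

        // In the UnityBootstrapper-derived class, e.g. Infinity.Bootstrapper.
        protected override void ConfigureContainer()
        {
            base.ConfigureContainer();

            // Map the interfaces the AdminModule constructor asks for to concrete types.
            Container.RegisterType<IScreenFactoryRegistry, ScreenFactoryRegistry>(
                new ContainerControlledLifetimeManager());
            Container.RegisterType<IBusyService, BusyService>(
                new ContainerControlledLifetimeManager());
        }

    IUnityContainer and IEventAggregator are registered by Prism itself, which is why only screenFactoryRegistry shows up in the resolution failure.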

  • Entity Framework 4.0 and DDD patterns

    - by Voice
    Hi everybody I use EntityFramework as ORM and I have simple POCO Domain Model with two base classes that represent Value Object and Entity Object Patterns (Evans). These two patterns is all about equality of two objects, so I overrode Equals and GetHashCode methods. Here are these two classes: public abstract class EntityObject<T>{ protected T _ID = default(T); public T ID { get { return _ID; } protected set { _ID = value; } } public sealed override bool Equals(object obj) { EntityObject<T> compareTo = obj as EntityObject<T>; return (compareTo != null) && ((HasSameNonDefaultIdAs(compareTo) || (IsTransient && compareTo.IsTransient)) && HasSameBusinessSignatureAs(compareTo)); } public virtual void MakeTransient() { _ID = default(T); } public bool IsTransient { get { return _ID == null || _ID.Equals(default(T)); } } public override int GetHashCode() { if (default(T).Equals(_ID)) return 0; return _ID.GetHashCode(); } private bool HasSameBusinessSignatureAs(EntityObject<T> compareTo) { return ToString().Equals(compareTo.ToString()); } private bool HasSameNonDefaultIdAs(EntityObject<T> compareTo) { return (_ID != null && !_ID.Equals(default(T))) && (compareTo._ID != null && !compareTo._ID.Equals(default(T))) && _ID.Equals(compareTo._ID); } public override string ToString() { StringBuilder str = new StringBuilder(); str.Append(" Class: ").Append(GetType().FullName); if (!IsTransient) str.Append(" ID: " + _ID); return str.ToString(); } } public abstract class ValueObject<T, U> : IEquatable<T> where T : ValueObject<T, U> { private static List<PropertyInfo> Properties { get; set; } private static Func<ValueObject<T, U>, PropertyInfo, object[], object> _GetPropValue; static ValueObject() { Properties = new List<PropertyInfo>(); var propParam = Expression.Parameter(typeof(PropertyInfo), "propParam"); var target = Expression.Parameter(typeof(ValueObject<T, U>), "target"); var indexPar = Expression.Parameter(typeof(object[]), "indexPar"); var call = Expression.Call(propParam, typeof(PropertyInfo).GetMethod("GetValue", new[] { typeof(object), typeof(object[]) }), new[] { target, indexPar }); var lambda = Expression.Lambda<Func<ValueObject<T, U>, PropertyInfo, object[], object>>(call, target, propParam, indexPar); _GetPropValue = lambda.Compile(); } public U ID { get; protected set; } public override Boolean Equals(Object obj) { if (ReferenceEquals(null, obj)) return false; if (obj.GetType() != GetType()) return false; return Equals(obj as T); } public Boolean Equals(T other) { if (ReferenceEquals(null, other)) return false; if (ReferenceEquals(this, other)) return true; foreach (var property in Properties) { var oneValue = _GetPropValue(this, property, null); var otherValue = _GetPropValue(other, property, null); if (null == oneValue && null == otherValue) return false; if (false == oneValue.Equals(otherValue)) return false; } return true; } public override Int32 GetHashCode() { var hashCode = 36; foreach (var property in Properties) { var propertyValue = _GetPropValue(this, property, null); if (null == propertyValue) continue; hashCode = hashCode ^ propertyValue.GetHashCode(); } return hashCode; } public override String ToString() { var stringBuilder = new StringBuilder(); foreach (var property in Properties) { var propertyValue = _GetPropValue(this, property, null); if (null == propertyValue) continue; stringBuilder.Append(propertyValue.ToString()); } return stringBuilder.ToString(); } protected static void RegisterProperty(Expression<Func<T, Object>> expression) { MemberExpression 
memberExpression; if (ExpressionType.Convert == expression.Body.NodeType) { var body = (UnaryExpression)expression.Body; memberExpression = body.Operand as MemberExpression; } else memberExpression = expression.Body as MemberExpression; if (null == memberExpression) throw new InvalidOperationException("InvalidMemberExpression"); Properties.Add(memberExpression.Member as PropertyInfo); } } Everything was fine until I tried to delete some related objects (an aggregate root with two dependent objects marked for cascade deletion): I got the exception "The relationship could not be changed because one or more of the foreign-key properties is non-nullable". I googled this and found http://blog.abodit.com/2010/05/the-relationship-could-not-be-changed-because-one-or-more-of-the-foreign-key-properties-is-non-nullable/ Changing GetHashCode to return base.GetHashCode() made the error disappear, but that breaks my design: if I cannot override GetHashCode I cannot override Equals, and then I cannot implement the Value Object and Entity Object patterns for my POCO objects. I would appreciate any solutions or workarounds.
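
    One workaround that keeps Equals intact is to make the hash code stable for the lifetime of the object: compute it once (from the ID when the entity is persistent, otherwise from the base implementation) and cache it, so the value never changes after the framework assigns a database-generated key. A minimal sketch against the EntityObject<T> class above; the cached-field name is illustrative:

        // Lifetime-stable hash code for EntityObject<T>; Equals, IsTransient and _ID stay as in the original.
        private int? _cachedHashCode;

        public override int GetHashCode()
        {
            if (_cachedHashCode.HasValue)
                return _cachedHashCode.Value;

            // Transient entities fall back to the identity-based hash so two different
            // unsaved objects do not collide; persisted ones hash on their ID.
            _cachedHashCode = IsTransient
                ? base.GetHashCode()
                : _ID.GetHashCode();

            return _cachedHashCode.Value;
        }

    Because the value is fixed at first use, entities already stored in hash-based collections keep their bucket even after SaveChanges assigns the real ID.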

  • Using the NHibernate Criteria API to select a specific set of data together with a count

    - by mfloryan
    I have the following domain set up for persistence with NHibernate: I am using the PaperConfiguration as the root aggregate. I want to select all PaperConfiguration objects for a given Tier and AcademicYearConfiguration. This works really well as per the following example: ICriteria criteria = session.CreateCriteria<PaperConfiguration>() .Add(Restrictions.Eq("AcademicYearConfiguration", configuration)) .CreateCriteria("Paper") .CreateCriteria("Unit") .CreateCriteria("Tier") .Add(Restrictions.Eq("Id", tier.Id)) return criteria.List<PaperConfiguration>(); (Perhaps there is a better way of doing this though). Yet also need to know how many ReferenceMaterials there are for each PaperConfiguration and I would like to get it in the same call. Avoid HQL - I already have an HQL solution for it. I know this is what projections are for and this question suggests an idea but I can't get it to work. I have a PaperConfigurationView that has, instead of IList<ReferenceMaterial> ReferenceMaterials the ReferenceMaterialCount and was thinking along the lines of ICriteria criteria = session.CreateCriteria<PaperConfiguration>() .Add(Restrictions.Eq("AcademicYearConfiguration", configuration)) .CreateCriteria("Paper") .CreateCriteria("Unit") .CreateCriteria("Tier") .Add(Restrictions.Eq("Id", tier.Id)) .SetProjection( Projections.ProjectionList() .Add(Projections.Property("IsSelected"), "IsSelected") .Add(Projections.Property("Paper"), "Paper") // and so on for all relevant properties .Add(Projections.Count("ReferenceMaterials"), "ReferenceMaterialCount") .SetResultTransformer(Transformers.AliasToBean<PaperConfigurationView>()); return criteria.List< PaperConfigurationView >(); unfortunately this does not work. What am I doing wrong? The following simplified query: ICriteria criteria = session.CreateCriteria<PaperConfiguration>() .CreateCriteria("ReferenceMaterials") .SetProjection( Projections.ProjectionList() .Add(Projections.Property("Id"), "Id") .Add(Projections.Count("ReferenceMaterials"), "ReferenceMaterialCount") ).SetResultTransformer(Transformers.AliasToBean<PaperConfigurationView>()); return criteria.List< PaperConfigurationView >(); creates this rather unexpected SQL: SELECT this_.Id as y0_, count(this_.Id) as y1_ FROM Domain.PaperConfiguration this_ inner join Domain.ReferenceMaterial referencem1_ on this_.Id=referencem1_.PaperConfigurationId The above query fails with ADO.NET error as it obviously is not a correct SQL since it is missing a group by or the count being count(referencem1_.Id) rather than (this_.Id). 
NHibernate mappings: <class name="PaperConfiguration" table="PaperConfiguration"> <id name="Id" type="Int32"> <column name="Id" sql-type="int" not-null="true" unique="true" index="PK_PaperConfiguration"/> <generator class="native" /> </id> <!-- IPersistent --> <version name="VersionLock" /> <!-- IAuditable --> <property name="WhenCreated" type="DateTime" /> <property name="CreatedBy" type="String" length="50" /> <property name="WhenChanged" type="DateTime" /> <property name="ChangedBy" type="String" length="50" /> <property name="IsEmeEnabled" type="boolean" not-null="true" /> <property name="IsSelected" type="boolean" not-null="true" /> <many-to-one name="Paper" column="PaperId" class="Paper" not-null="true" access="field.camelcase"/> <many-to-one name="AcademicYearConfiguration" column="AcademicYearConfigurationId" class="AcademicYearConfiguration" not-null="true" access="field.camelcase"/> <bag name="ReferenceMaterials" generic="true" cascade="delete" lazy="true" inverse="true"> <key column="PaperConfigurationId" not-null="true" /> <one-to-many class="ReferenceMaterial" /> </bag> </class> <class name="ReferenceMaterial" table="ReferenceMaterial"> <id name="Id" type="Int32"> <column name="Id" sql-type="int" not-null="true" unique="true" index="PK_ReferenceMaterial"/> <generator class="native" /> </id> <!-- IPersistent --> <version name="VersionLock" /> <!-- IAuditable --> <property name="WhenCreated" type="DateTime" /> <property name="CreatedBy" type="String" length="50" /> <property name="WhenChanged" type="DateTime" /> <property name="ChangedBy" type="String" length="50" /> <property name="Name" type="String" not-null="true" /> <property name="ContentFile" type="String" not-null="false" /> <property name="Position" type="int" not-null="false" /> <property name="CommentaryName" type="String" not-null="false" /> <property name="CommentarySubjectTask" type="String" not-null="false" /> <property name="CommentaryPointScore" type="String" not-null="false" /> <property name="CommentaryContentFile" type="String" not-null="false" /> <many-to-one name="PaperConfiguration" column="PaperConfigurationId" class="PaperConfiguration" not-null="true"/> </class>
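
    The missing GROUP BY is the give-away: Projections.Count does not group on its own, so every non-aggregated column has to be listed as a GroupProperty. A sketch of the simplified query along those lines (JoinType comes from NHibernate.SqlCommand; the alias "rm" and the projection onto PaperConfigurationView are just for illustration, and exact overloads can differ between NHibernate versions):

        // Count ReferenceMaterials per PaperConfiguration with an explicit GROUP BY.
        ICriteria criteria = session.CreateCriteria<PaperConfiguration>()
            .CreateAlias("ReferenceMaterials", "rm", JoinType.LeftOuterJoin)
            .SetProjection(Projections.ProjectionList()
                .Add(Projections.GroupProperty("Id"), "Id")                     // -> GROUP BY this_.Id
                .Add(Projections.Count("rm.Id"), "ReferenceMaterialCount"))     // -> count(rm.Id)
            .SetResultTransformer(Transformers.AliasToBean<PaperConfigurationView>());

        return criteria.List<PaperConfigurationView>();

    The left outer join keeps configurations with no reference materials in the result, and count(rm.Id) then comes back as zero for those rows.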

  • RESTful resource not found. 404 or 204? Jersey returns 204 on null being returned from handler.

    - by jr
    If you are looking for /Resource/Id and that resource does not exist, I had always thought that 404 was the appropriate response. However, when returning null from a Jersey handler, I get back a "204 No Content". I can probably work with either one, but I am curious about others' thoughts on this. To answer my own follow-up question: to get Jersey to return a 404 you must throw an exception: if (a == null) throw new WebApplicationException(404);
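
    Throwing WebApplicationException(404) works; an alternative that avoids using an exception for normal control flow is to return a Response and build the 404 explicitly. A sketch in JAX-RS 1.x style (the resource path, lookup() and the String payload are placeholders for the real entity and data access):

        import javax.ws.rs.GET;
        import javax.ws.rs.Path;
        import javax.ws.rs.PathParam;
        import javax.ws.rs.core.Response;

        @Path("/Resource")
        public class ResourceEndpoint {

            @GET
            @Path("{id}")
            public Response get(@PathParam("id") String id) {
                String a = lookup(id); // placeholder lookup for the real entity
                if (a == null) {
                    // Explicit 404 instead of Jersey's default 204 for a null return value.
                    return Response.status(Response.Status.NOT_FOUND).build();
                }
                return Response.ok(a).build();
            }

            private String lookup(String id) { return null; } // placeholder
        }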
