Search Results

Search found 5140 results on 206 pages for 'graphical models'.

Page 61/206 | < Previous Page | 57 58 59 60 61 62 63 64 65 66 67 68  | Next Page >

  • Does MVC replace traditional manually created UI, BLL, DAL?

    - by used2could
    I'm used to creating the UI, BLL, and DAL by hand (sometimes I've used LINQ to SQL or SubSonic for the DAL). I've done several small projects using MVC since its release. On these projects I've still continued to write a BLL and DAL by hand and then incorporate those into the MVC's models/controllers. Looking to optimize my time on projects, this seems like overkill and a potential waste of time. My question is: would it be acceptable to roll a DAL such as SubSonic and use it directly in the Models/Controllers of my MVC web app? The models and controllers would then act as the BLL. I just see this as a major timesaver, not having to worry about another tier. (Agree ? "Please state why" : "Make argument")

    Read the article

  • QuestionOrAnswer model?

    - by Mark
    My site has Listings. Users can ask Questions about listings, and the author of the listing can respond with an Answer. However, the Answer might need clarification, so I've made them recursive (you can "answer" an answer). So how do I set up the database? The way I have it now looks like this (in Django-style models):

        class QuestionOrAnswer(Model):
            user = ForeignKey(User, related_name='questions')
            listing = ForeignKey(Listing, related_name='questions')
            parent = models.ForeignKey('self', null=True, blank=True, related_name='children')
            message = TextField()

    But what bugs me is that listing is now an attribute of the answers as well (it doesn't need to be). What happens if the database gets mangled and an answer belongs to a different listing than its parent question? That just doesn't make any sense. We can separate it with polymorphism:

        QuestionOrAnswer
            user
            message
            created
            updated

        Question(QuestionOrAnswer)
            shipment

        Answer(QuestionOrAnswer)
            parent = ForeignKey(QuestionOrAnswer)

    And that ought to work, but now every question and answer is split into two tables. Is it worth this overhead for clearly defined models?
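
    For comparison, here is a minimal sketch of the multi-table-inheritance version of that split in Django. The base class name Post and the string reference to a Listing model in the same app are illustrative assumptions, not taken from the original post; each subclass gets its own table joined to the shared base by an implicit one-to-one key, so listing only ever exists on Question:

        from django.db import models
        from django.contrib.auth.models import User

        class Post(models.Model):
            # Shared base table: one row per question or answer.
            user = models.ForeignKey(User, related_name='posts')
            message = models.TextField()
            created = models.DateTimeField(auto_now_add=True)
            updated = models.DateTimeField(auto_now=True)

        class Question(Post):
            # Only questions carry the listing reference, so an answer
            # can never point at a different listing than its question.
            listing = models.ForeignKey('Listing', related_name='questions')

        class Answer(Post):
            # An answer replies to any Post (a question or another answer).
            parent = models.ForeignKey(Post, related_name='children')

    The trade-off is exactly the one the question raises: every query for a Question or Answer now costs a join back to the base table.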

    Read the article

  • What is wrong with this insert method in ASP.NET MVC?

    - by Pandiya Chendur
    My controller calls a repository class method on insert:

        [AcceptVerbs(HttpVerbs.Post)]
        public ActionResult Create([Bind(Exclude = "Id")]FormCollection collection)
        {
            try
            {
                MaterialsObj materialsObj = new MaterialsObj();
                materialsObj.Mat_Name = collection["Mat_Name"];
                materialsObj.Mes_Id = Convert.ToInt64(collection["MeasurementType"]);
                materialsObj.Mes_Name = collection["Mat_Type"];
                materialsObj.CreatedDate = System.DateTime.Now;
                materialsObj.CreatedBy = Convert.ToInt64(1);
                materialsObj.IsDeleted = Convert.ToInt64(1);
                consRepository.createMaterials(materialsObj);
                return RedirectToAction("Index");
            }
            catch
            {
                return View();
            }
        }

    and my repository class has this:

        public MaterialsObj createMaterials(MaterialsObj materialsObj)
        {
            db.Materials.InsertOnSubmit(materialsObj);
            return materialsObj;
        }

    But when I compile this I get: The best overloaded method match for 'System.Data.Linq.Table<CrMVC.Models.Material>.InsertOnSubmit(CrMVC.Models.Material)' has some invalid arguments... cannot convert from 'CrMVC.BusinessObjects.MaterialsObj' to 'CrMVC.Models.Material'. Am I missing something?

    Read the article

  • How RPG characters are made

    - by user365314
    How are RPGs with the ability to change armor and clothes made? I mean the 3D side mostly. If I make a normal character with flat clothes, it would be easy: just change textures. But the question is about armor, which has totally different models. So is only the armor model recreated, or the character model together with the armor? How is it imported into the game engine: only the armor, or the character model with the new armor? If a player changes armor in game, will the game swap the whole model or only the armor part? And if only the armor part, how are the movement animations done? Are armor models animated on the characters in a 3D program, or what? :D

    Read the article

  • Spring NullPointerException: can't access @Autowired annotated service (or DAO) in a non-annotated class

    - by user286806
    I have this problem that I cannot fix. From my @Controller, I can easily access my autowired @Service class and play with it, no problem. But when I do that from a separate class without annotations, it gives me a NullPointerException.

    My controller (works):

        @Controller
        public class UserController {
            @Autowired
            UserService userService;
            ...

    My separate Java class (not working):

        public final class UsersManagementUtil {
            @Autowired
            UserService userService;
            // or
            @Autowired
            UserDao userDao;

    userService or userDao is always null! I was just trying to see whether either of them works. My component-scan setting has the root-level package set for scanning, so that should be OK. My servlet context:

        <?xml version="1.0" encoding="UTF-8"?>
        <beans xmlns="http://www.springframework.org/schema/beans"
               xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
               xmlns:context="http://www.springframework.org/schema/context"
               xmlns:tx="http://www.springframework.org/schema/tx"
               xsi:schemaLocation="
                   http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
                   http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context-3.0.xsd
                   http://www.springframework.org/schema/tx http://www.springframework.org/schema/tx/spring-tx-3.0.xsd">

            <!-- the application context definition for the springapp DispatcherServlet -->
            <!-- Enable annotation driven controllers, validation etc... -->
            <context:property-placeholder location="classpath:jdbc.properties" />
            <context:component-scan base-package="x" />
            <tx:annotation-driven transaction-manager="hibernateTransactionManager" />

            <!-- package shortened -->
            <bean id="messageSource" class="o.s.c.s.ReloadableResourceBundleMessageSource">
                <property name="basename" value="/WEB-INF/messages" />
            </bean>

            <bean id="dataSource" class="org.springframework.jdbc.datasource.DriverManagerDataSource">
                <property name="driverClassName" value="${database.driver}" />
                <property name="url" value="${database.url}" />
                <property name="username" value="${database.user}" />
                <property name="password" value="${database.password}" />
            </bean>

            <!-- package shortened -->
            <bean id="viewResolver" class="o.s.w.s.v.InternalResourceViewResolver">
                <property name="prefix"><value>/</value></property>
                <property name="suffix"><value>.jsp</value></property>
                <property name="order"><value>0</value></property>
            </bean>

            <!-- package shortened -->
            <bean id="sessionFactory" class="o.s.o.h3.a.AnnotationSessionFactoryBean">
                <property name="dataSource" ref="dataSource" />
                <property name="annotatedClasses">
                    <list>
                        <value>orion.core.models.Question</value>
                        <value>orion.core.models.User</value>
                        <value>orion.core.models.Space</value>
                        <value>orion.core.models.UserSkill</value>
                        <value>orion.core.models.Question</value>
                        <value>orion.core.models.Rating</value>
                    </list>
                </property>
                <property name="hibernateProperties">
                    <props>
                        <prop key="hibernate.dialect">${hibernate.dialect}</prop>
                        <prop key="hibernate.show_sql">${hibernate.show_sql}</prop>
                        <prop key="hibernate.hbm2ddl.auto">${hibernate.hbm2ddl.auto}</prop>
                    </props>
                </property>
            </bean>

            <bean id="hibernateTransactionManager" class="org.springframework.orm.hibernate3.HibernateTransactionManager">
                <property name="sessionFactory" ref="sessionFactory" />
            </bean>

    Any clue?

    Read the article

  • Polymorphic associations in CakePHP2

    - by Joseph
    I have 3 models: Page, Course and Content. Page and Course contain metadata and Content contains HTML content. Page and Course both hasMany Content; Content belongsTo Page and Course. To avoid having page_id and course_id fields in Content (because I want this to scale to more than just 2 models) I am looking at using polymorphic associations. I started by using the Polymorphic Behavior in the Bakery, but it generates far too many SQL queries for my liking, and it also throws an "Illegal Offset" error which I don't know how to fix (it was written in 2008 and nobody seems to have referred to it recently, so perhaps the error is due to it not having been designed for Cake 2?). Anyway, I've found that I can almost do everything I need by hardcoding the associations in the models, as such:

    Page model:

        CREATE TABLE `pages` (
          `id` int(11) NOT NULL AUTO_INCREMENT,
          `title` varchar(255) COLLATE utf8_unicode_ci NOT NULL,
          `slug` varchar(255) COLLATE utf8_unicode_ci NOT NULL,
          `created` datetime NOT NULL,
          `updated` datetime NOT NULL,
          PRIMARY KEY (`id`)
        )

        <?php
        class Page extends AppModel {
            var $name = 'Page';
            var $hasMany = array(
                'Content' => array(
                    'className' => 'Content',
                    'foreignKey' => 'foreign_id',
                    'conditions' => array('Content.class' => 'Page'),
                )
            );
        }
        ?>

    Course model:

        CREATE TABLE `courses` (
          `id` int(11) NOT NULL AUTO_INCREMENT,
          `title` varchar(255) COLLATE utf8_unicode_ci NOT NULL,
          `slug` varchar(255) COLLATE utf8_unicode_ci NOT NULL,
          `created` datetime NOT NULL,
          `updated` datetime NOT NULL,
          PRIMARY KEY (`id`)
        )

        <?php
        class Course extends AppModel {
            var $name = 'Course';
            var $hasMany = array(
                'Content' => array(
                    'className' => 'Content',
                    'foreignKey' => 'foreign_id',
                    'conditions' => array('Content.class' => 'Course'),
                )
            );
        }
        ?>

    Content model:

        CREATE TABLE IF NOT EXISTS `contents` (
          `id` int(11) unsigned NOT NULL AUTO_INCREMENT,
          `class` varchar(30) COLLATE utf8_unicode_ci NOT NULL,
          `foreign_id` int(11) unsigned NOT NULL,
          `title` varchar(100) COLLATE utf8_unicode_ci NOT NULL,
          `content` text COLLATE utf8_unicode_ci NOT NULL,
          `created` datetime DEFAULT NULL,
          `modified` datetime DEFAULT NULL,
          PRIMARY KEY (`id`)
        )

        <?php
        class Content extends AppModel {
            var $name = 'Content';
            var $belongsTo = array(
                'Page' => array(
                    'foreignKey' => 'foreign_id',
                    'conditions' => array('Content.class' => 'Page')
                ),
                'Course' => array(
                    'foreignKey' => 'foreign_id',
                    'conditions' => array('Content.class' => 'Course')
                )
            );
        }
        ?>

    The good thing is that $this->Content->find('first') only generates a single SQL query instead of 3 (as was the case with the Polymorphic Behavior), but the problem is that the returned dataset includes both of the belongsTo models, whereas it should only return the one that exists. Here's how the returned data looks:

        array(
            'Content' => array(
                'id' => '1',
                'class' => 'Course',
                'foreign_id' => '1',
                'title' => 'something about this course',
                'content' => 'The content here',
                'created' => null,
                'modified' => null
            ),
            'Page' => array(
                'id' => null,
                'title' => null,
                'slug' => null,
                'created' => null,
                'updated' => null
            ),
            'Course' => array(
                'id' => '1',
                'title' => 'Course name',
                'slug' => 'name-of-the-course',
                'created' => '2012-10-11 00:00:00',
                'updated' => '2012-10-11 00:00:00'
            )
        )

    I only want it to return one of either Page or Course, depending on which one is specified in Content.class.

    UPDATE: Combining the Page and Course models would seem like the obvious solution to this problem, but the schemas shown above are only for the purpose of this question. The actual schemas are very different in terms of their fields, and each has a different number of associations with other models too.

    UPDATE 2: Here is the query that results from running $this->Content->find('first'):

        SELECT `Content`.`id`, `Content`.`class`, `Content`.`foreign_id`, `Content`.`title`, `Content`.`slug`, `Content`.`content`, `Content`.`created`, `Content`.`modified`, `Page`.`id`, `Page`.`title`, `Page`.`slug`, `Page`.`created`, `Page`.`updated`, `Course`.`id`, `Course`.`title`, `Course`.`slug`, `Course`.`created`, `Course`.`updated`
        FROM `cakedb`.`contents` AS `Content`
        LEFT JOIN `cakedb`.`pages` AS `Page` ON (`Content`.`foreign_id` = `Page`.`id` AND `Content`.`class` = 'Page')
        LEFT JOIN `cakedb`.`courses` AS `Course` ON (`Content`.`foreign_id` = `Course`.`id` AND `Content`.`class` = 'Course')
        WHERE 1 = 1
        LIMIT 1

    Read the article

  • Django: Extending User Model - Inline User fields in UserProfile

    - by Jack Sparrow
    Is there a way to display User fields under a form that adds/edits a UserProfile model? I am extending the default Django User model like this:

        class UserProfile(models.Model):
            user = models.OneToOneField(User, unique=True)
            about = models.TextField(blank=True)

    I know that it is possible to make a:

        class UserProfileInlineAdmin(admin.TabularInline):

    and then inline this in the User ModelAdmin, but I want to achieve the opposite effect, something like inverse inlining: displaying the fields of the model pointed to by the one-to-one relationship (User) on the page of the model defining the relationship (UserProfile). I don't care whether it is in the admin or in a custom view/template; I just need to know how to achieve this. I've been struggling with ModelForms and formsets, and I know the answer is somewhere there, but my limited experience with Django hasn't let me come up with the solution yet. A little example would be really helpful!
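
    Outside the admin, one common way to get this "reverse inline" effect is simply to validate and save a User form and a UserProfile form together in one view. This is only a sketch; the myapp.models import path, the view name and the template name are assumptions:

        from django import forms
        from django.contrib.auth.models import User
        from django.shortcuts import render

        from myapp.models import UserProfile  # assumed import path

        class UserForm(forms.ModelForm):
            class Meta:
                model = User
                fields = ('first_name', 'last_name', 'email')

        class UserProfileForm(forms.ModelForm):
            class Meta:
                model = UserProfile
                exclude = ('user',)

        def edit_profile(request):
            profile = UserProfile.objects.get(user=request.user)  # assumes the profile already exists
            if request.method == 'POST':
                user_form = UserForm(request.POST, instance=request.user)
                profile_form = UserProfileForm(request.POST, instance=profile)
                if user_form.is_valid() and profile_form.is_valid():
                    user_form.save()
                    profile_form.save()
            else:
                user_form = UserForm(instance=request.user)
                profile_form = UserProfileForm(instance=profile)
            # Render both forms inside a single <form> tag in the template.
            return render(request, 'edit_profile.html',
                          {'user_form': user_form, 'profile_form': profile_form})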

    Read the article

  • Overriding the default error message for a ModelForm

    - by Jude Osborn
    Is there any way to override the error_message text for all the fields of a ModelForm, without having to include all the field info in the ModelForm? For example, let's say I have a (very simple) model like this:

        class People(models.Model):
            name = models.CharField(max_length=128, null=True, blank=True, help_text="Please type your name.")
            age = models.IntegerField(help_text="Please type your age.")

    I don't like the cut-and-dried default messages, such as "Enter a whole number.", so I'd like to change them to something a bit nicer, like "Please type a number." Ideally I'd be able to add an "error_message" property in the model, but the model does not support that property. So does that mean I have to basically duplicate all the model info in my ModelForm, or is there a way around that?
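
    One way around duplicating the field definitions, sketched here with the field names from the example model and an assumed import path, is to reach into the auto-generated fields in the form's __init__ and tweak only their error_messages:

        from django import forms

        from myapp.models import People  # assumed import path

        class PeopleForm(forms.ModelForm):
            class Meta:
                model = People

            def __init__(self, *args, **kwargs):
                super(PeopleForm, self).__init__(*args, **kwargs)
                # The ModelForm has already built the fields from the model,
                # so only the messages need overriding here.
                self.fields['age'].error_messages['invalid'] = 'Please type a number.'
                self.fields['age'].error_messages['required'] = 'Please type your age.'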

    Read the article

  • Reading file data during form's clean method

    - by Dominic Rodger
    So, I'm working on implementing the answer to my previous question. Here's my model:

        class Talk(models.Model):
            title = models.CharField(max_length=200)
            mp3 = models.FileField(upload_to=u'talks/', max_length=200)

    Here's my form:

        class TalkForm(forms.ModelForm):
            def clean(self):
                super(TalkForm, self).clean()
                cleaned_data = self.cleaned_data
                if u'mp3' in self.files:
                    from mutagen.mp3 import MP3
                    if hasattr(self.files['mp3'], 'temporary_file_path'):
                        audio = MP3(self.files['mp3'].temporary_file_path())
                    else:
                        # What goes here?
                        audio = None  # setting to None for now
                ...
                return cleaned_data

            class Meta:
                model = Talk

    Mutagen needs file-like objects. The first case (where the uploaded file is larger than the size of file handled in memory) works fine, but I don't know how to handle the InMemoryUploadedFile that I get otherwise. I've tried:

        # TypeError (coercing to Unicode: need string or buffer, InMemoryUploadedFile found)
        audio = MP3(self.files['mp3'])

        # TypeError (coercing to Unicode: need string or buffer, cStringIO.StringO found)
        audio = MP3(self.files['mp3'].file)

        # Hangs seemingly indefinitely
        audio = MP3(self.files['mp3'].file.read())

    Is there something wrong with mutagen, or am I doing it wrong?
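
    A minimal sketch of the in-memory branch, spooling the upload to a temporary file so mutagen always gets a real path; the temp-file approach and the Talk import path are assumptions, not something from the original post:

        import os
        import tempfile

        from django import forms
        from mutagen.mp3 import MP3

        from myapp.models import Talk  # assumed import path

        class TalkForm(forms.ModelForm):
            class Meta:
                model = Talk

            def clean(self):
                cleaned_data = super(TalkForm, self).clean()
                upload = self.files.get('mp3')
                if upload is not None:
                    if hasattr(upload, 'temporary_file_path'):
                        # Large uploads are already on disk.
                        audio = MP3(upload.temporary_file_path())
                    else:
                        # Small uploads live in memory; spool them to a temp
                        # file so mutagen gets a real filename to open.
                        fd, path = tempfile.mkstemp(suffix='.mp3')
                        try:
                            with os.fdopen(fd, 'wb') as tmp:
                                for chunk in upload.chunks():
                                    tmp.write(chunk)
                            audio = MP3(path)
                        finally:
                            os.remove(path)
                    # use audio.info.length etc. for validation here
                return cleaned_data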

    Read the article

  • How can I update only certain fields in a Django model form?

    - by J. Frankenstein
    I have a model form that I use to update a model:

        class Turtle(models.Model):
            name = models.CharField(max_length=50, blank=False)
            description = models.TextField(blank=True)

        class TurtleForm(forms.ModelForm):
            class Meta:
                model = Turtle

    Sometimes I don't need to update the entire model, but only want to update one of the fields, so the POSTed form only has information for the description. When I do that, the model never saves, because it thinks that the name is being blanked out, while my intent is that the name not change and just be taken from the model.

        turtle_form = TurtleForm(request.POST, instance=object)
        if turtle_form.is_valid():
            turtle_form.save()

    Is there any way to make this happen? Thanks!
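
    A sketch of the usual fix: limit the form to the fields actually being edited via Meta.fields, so the unchanged name is simply taken from the loaded instance. The form and view names below are illustrative, not from the original post:

        from django import forms
        from django.shortcuts import redirect

        from myapp.models import Turtle  # assumed import path

        class TurtleDescriptionForm(forms.ModelForm):
            class Meta:
                model = Turtle
                fields = ('description',)  # only this field is validated and saved

        def update_description(request, pk):
            turtle = Turtle.objects.get(pk=pk)
            form = TurtleDescriptionForm(request.POST, instance=turtle)
            if form.is_valid():
                form.save()  # turtle.name is left untouched
            return redirect('/')  # placeholder redirect target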

    Read the article

  • Does MVC replace traditional manually created BLL?

    - by used2could
    I'm used to creating the UI, BLL, DAL by hand (sometimes I've used LINQ-to-SQL or SubSonic for the DAL). I've done several small projects using MVC since its release. On these projects I've still continued to write a BLL and DAL by hand and then incorporate those into the MVC's models/controllers. I'm looking to optimize my time on projects, and this seems like overkill and a potential waste of time. Question: Would it be acceptable to roll a DAL such as SubSonic and directly use it in the Models/Controllers of my MVC web app? Now the models & controllers would act as the BLL. I just see this as a major timesaver, not having to worry about another tier. UPDATE: I just wanted to add that my concern isn't really with the DAL (I frequently use SubSonic and NH) but rather with the BLL. Sorry for the confusion.

    Read the article

  • How to create a nested form in backbone relational?

    - by jebek
    I would like to be able to create nested models at the same time in Backbone. I know how to use Backbone-relational to create the parent model; then, once it is saved, I can create child models through Backbone-relational. However, I want to be able to create both the parent and child models at the same time, which might not be possible because I can only create the child model once the parent model has already been created. For example, let's say I was creating a forum like the one from the awesome Backbone-relational tutorial - http://antoviaque.org/docs/tutorials/backbone-relational-tutorial/. I would want to create a thread and a message at the same time (through the click of a single button) rather than create a thread and then a message. Is this possible? Is there a better way of doing this that I'm not thinking of?

    Read the article

  • Django forms: prepopulate form with request.user and url parameter

    - by Malyo
    I'm building a simple course management app. I want users to sign up for a Course. Here's the sign-up model:

        class CourseMembers(models.Model):
            student = models.ForeignKey(Student)
            course = models.ForeignKey(Course)

            def __unicode__(self):
                return unicode(self.student)

    The Student model is an extended User model, and I'd like to fill the form with request.user. In the Course model the most important field is course_id, which I'm passing into the view through a URL parameter (for example http://127.0.0.1:8000/courses/course/1/). What I want to achieve is to generate an 'invisible' form (so the user can't change the inserted data) with just an input, but containing the request.user and course_id parameters.
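
    One sketch of that pattern: exclude the protected fields from the ModelForm entirely and set them in the view from request.user and the URL parameter, so nothing user-editable is rendered. The import path, the Student-to-User lookup and the redirect target are assumptions about the surrounding code:

        from django import forms
        from django.shortcuts import redirect

        from myapp.models import CourseMembers, Course, Student  # assumed import path

        class CourseSignupForm(forms.ModelForm):
            class Meta:
                model = CourseMembers
                exclude = ('student', 'course')  # nothing editable is rendered

        def signup(request, course_id):
            course = Course.objects.get(pk=course_id)  # course_id comes from the URL, e.g. /courses/course/1/
            if request.method == 'POST':
                form = CourseSignupForm(request.POST)
                if form.is_valid():
                    membership = form.save(commit=False)
                    # Fill the protected fields server-side, never from POST data.
                    membership.student = Student.objects.get(user=request.user)  # assumes Student has a FK to User
                    membership.course = course
                    membership.save()
            return redirect('/courses/course/%s/' % course_id)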

    Read the article

  • How do I remove a repository from yum?

    - by sunil
    When I search for a package in yum (CentOS 6), it tries to search in a repo named 'c6-media' and it gives a bunch of errors, as follows:

        file:///media/CentOS/repodata/repomd.xml: [Errno 14] Could not open/read file:///media/CentOS/repodata/repomd.xml
        Trying other mirror.
        file:///media/cdrecorder/repodata/repomd.xml: [Errno 14] Could not open/read file:///media/cdrecorder/repodata/repomd.xml
        Trying other mirror.
        file:///media/cdrom/repodata/repomd.xml: [Errno 14] Could not open/read file:///media/cdrom/repodata/repomd.xml
        Trying other mirror.
        Error: Cannot retrieve repository metadata (repomd.xml) for repository: c6-media. Please verify its path and try again

    Obviously the error seems to say that yum is trying to search the CD/DVD which installed the OS, which I no longer have. All I want to do is delete this repository from yum. I went to the package manager graphical tool and removed it from the sources, but it seems yum and the graphical tool do not use the same config. This is just my guess.

    Read the article

  • Windows screen shots via command-line SSH session

    - by Geoff Fritz
    I've browsed the handful of "screen capture" queries here, but I was unable to find anything which addressed my specific need. I'm looking for a command-line tool that I can run via a remote SSH connection (by way of the cygwin sshd daemon). There are several to choose from, but the few I've tried (ImageMagick, nircmd, and MiniCap) all result in a blank screen. I assume that this is due to the remotely logged-in user not having a proper graphical console session running. The goal here is to automate screen capture and retrieval of the main system console (what one would see if they were looking at the physical monitor) through the use of an ssh script from a Unix host:

        ssh user@windowshost "screencap --output /tmp/console.jpg"
        scp user@windowshost:/tmp/console.jpg /some/destdir

    Note that these must be done on demand, so polling a remote directory that has snapshots dumped periodically will not work. Bonus points for programs that are open source and have a portable install (so I don't need to RDP/VNC into the machine to run a graphical installer).

    Read the article

  • Ultra-lightweight web browser?

    - by zildjohn01
    Are there any good super-lightweight graphical web browsers out there? I'd like to be able to browse the web on an old PC, but the mainstream crop of browsers is just too heavy, and I don't want to resort to something like Lynx. There must be something decent out there that'll fit in 16 or 32MB of RAM comfortably. 100% standards compliance isn't necessary, but I'd like something that supports the most widely used parts of CSS and JavaScript. The goal is to get 98% of sites usable in a nice, graphical format.

    Read the article

  • Is it bad practice to apply class-based design to JavaScript programs?

    - by helixed
    JavaScript is a prototype-based language, and yet it has the ability to mimic some of the features of class-based object-oriented languages. For example, JavaScript does not have a concept of public and private members, but through the magic of closures it's still possible to provide the same functionality. Similarly, method overloading, interfaces, namespaces and abstract classes can all be added in one way or another. Lately, as I've been programming in JavaScript, I've felt like I'm trying to turn it into a class-based language instead of using it in the way it's meant to be used. It seems like I'm trying to force the language to conform to what I'm used to. The following is some JavaScript code I've written recently. Its purpose is to abstract away some of the effort involved in drawing to the HTML5 canvas element.

        /* Defines the Drawing namespace. */
        var Drawing = {};

        /* Abstract base which represents an element to be drawn on the screen.
           @param context The graphical context in which this Node is drawn.
           @param position The position of the center of this Node. */
        Drawing.Node = function(context, position) {
            return {
                /* The method which performs the actual drawing code for this Node.
                   This method must be overridden in any subclasses of Node. */
                draw: function() {
                    throw Exception.MethodNotOverridden;
                },

                /* Returns the graphical context for this Node.
                   @return The graphical context for this Node. */
                getContext: function() {
                    return context;
                },

                /* Returns the position of this Node.
                   @return The position of this Node. */
                getPosition: function() {
                    return position;
                },

                /* Sets the position of this Node.
                   @param thePosition The position of this Node. */
                setPosition: function(thePosition) {
                    position = thePosition;
                }
            };
        }

        /* Define the Shape namespace. */
        var Shape = {};

        /* A circle shape implementation of Drawing.Node.
           @param context The graphical context in which this Circle is drawn.
           @param position The center of this Circle.
           @param radius The radius of this circle.
           @param color The color of this circle. */
        Shape.Circle = function(context, position, radius, color) {
            // check the parameters
            if (radius < 0)
                throw Exception.InvalidArgument;

            var node = Drawing.Node(context, position);

            // overload the node drawing method
            node.draw = function() {
                var context = this.getContext();
                var position = this.getPosition();
                context.fillStyle = color;
                context.beginPath();
                context.arc(position.x, position.y, radius, 0, Math.PI*2, true);
                context.closePath();
                context.fill();
            }

            /* Returns the radius of this Circle.
               @return The radius of this Circle. */
            node.getRadius = function() {
                return radius;
            };

            /* Sets the radius of this Circle.
               @param theRadius The new radius of this circle. */
            node.setRadius = function(theRadius) {
                radius = theRadius;
            };

            /* Returns the color of this Circle.
               @return The color of this Circle. */
            node.getColor = function() {
                return color;
            };

            /* Sets the color of this Circle.
               @param theColor The new color of this Circle. */
            node.setColor = function(theColor) {
                color = theColor;
            };

            // return the node
            return node;
        };

    The code works exactly as it should for a user of Shape.Circle, but it feels like it's held together with duct tape. Can somebody provide some insight on this?

    Read the article

  • New SSIS tool on Codeplex – SSIS Log Analyzer

    I stumbled across a new SSIS tool on CodePlex today, the SSIS Log Analyzer, which was only released a few days ago. Whilst it is a beta release and currently only supports 2005 (2008 is promised), it looks quite interesting. It seems to be a fancy log viewer, but with some clever features and a nice-looking front end. I've only read the documentation so far, but it has graphs and a debug view that shows your package with the colour animations similar to when debugging in BIDS, and everyone knows the way the pretty colours and numbers change is the best bit! I'll quote some of the features for you here and then let you make your own mind up: is it useful in the real world?

    - Option to analyze the logs manually by applying row and column filters over the log data, or by using queries to specify more complex criteria.
    - Automated Performance Analysis, which provides a quick graphical look at which tasks spent the most time during package execution.
    - Rerun (debug) the entire sequence of events which happened during package execution, showing the flow of control in graphical form and changes in runtime values for each task, such as execution duration.
    - Support for Auto Analyzers to automatically find issues and provide suggestions for problems which can be figured out with the help of the SSIS logs and/or package.
    - Option to analyze just the log file, or the log and package together.
    - Provides a lightweight environment to have a quick look at the package. Opening it in BIDS takes some time, as being an authoring environment it does all sorts of validations, resulting in some delay.

    See http://ssisloganalyzer.codeplex.com/ for more details.

    Read the article

  • AdSense Mobile Interface – I’m Loving It!

    - by Gopinath
    I love checking AdSense earnings every day on my mobile. Until now my mobile browser, Opera, rendered the heavy desktop version of the AdSense interface, and it was tough to navigate around and see the earnings. To solve this problem for me as well as millions of other AdSense users, Google yesterday released a mobile version of the AdSense user interface that works on almost all mobile platforms: iOS, Android, Windows Phone 7, Symbian and many others. If you have opted for the new beta user interface of AdSense, you will be presented with the mobile version when you visit https://www.google.com/adsense on your mobile. Here is a screen grab of how it looks on an iPhone and an Android device; it looks similar on my Nokia mobile too. The AdSense interface for mobile is very nice: on the home page I can quickly have a look at today's earnings, the recent payment amount, last month's finalized amount and the total unpaid balance. The quick reports option available at the bottom of the screen lets me access a graphical view of useful earnings reports like Last 7 days, Last 30 days, This Month and Last Month. You can also create your own reports and save them to this list for quick viewing. To view the graphical reports, you don't need Flash on your mobile. For more details check out the official post on the Google AdSense blog. This article, titled "AdSense Mobile Interface – I'm Loving It!", was originally published at Tech Dreams. Grab our RSS feed or fan us on Facebook to get updates from us.

    Read the article

  • So…is it a Seek or a Scan?

    - by Paul White
    You’re probably most familiar with the terms ‘Seek’ and ‘Scan’ from the graphical plans produced by SQL Server Management Studio (SSMS).  The image to the left shows the most common ones, with the three types of scan at the top, followed by four types of seek.  You might look to the SSMS tool-tip descriptions to explain the differences between them: Not hugely helpful are they?  Both mention scans and ranges (nothing about seeks) and the Index Seek description implies that it will not scan the index entirely (which isn’t necessarily true). Recall also yesterday’s post where we saw two Clustered Index Seek operations doing very different things.  The first Seek performed 63 single-row seeking operations; and the second performed a ‘Range Scan’ (more on those later in this post).  I hope you agree that those were two very different operations, and perhaps you are wondering why there aren’t different graphical plan icons for Range Scans and Seeks?  I have often wondered about that, and the first person to mention it after yesterday’s post was Erin Stellato (twitter | blog): Before we go on to make sense of all this, let’s look at another example of how SQL Server confusingly mixes the terms ‘Scan’ and ‘Seek’ in different contexts.  The diagram below shows a very simple heap table with two columns, one of which is the non-clustered Primary Key, and the other has a non-unique non-clustered index defined on it.  The right hand side of the diagram shows a simple query, it’s associated query plan, and a couple of extracts from the SSMS tool-tip and Properties windows. Notice the ‘scan direction’ entry in the Properties window snippet.  Is this a seek or a scan?  The different references to Scans and Seeks are even more pronounced in the XML plan output that the graphical plan is based on.  This fragment is what lies behind the single Index Seek icon shown above: You’ll find the same confusing references to Seeks and Scans throughout the product and its documentation. Making Sense of Seeks Let’s forget all about scans for a moment, and think purely about seeks.  Loosely speaking, a seek is the process of navigating an index B-tree to find a particular index record, most often at the leaf level.  A seek starts at the root and navigates down through the levels of the index to find the point of interest: Singleton Lookups The simplest sort of seek predicate performs this traversal to find (at most) a single record.  This is the case when we search for a single value using a unique index and an equality predicate.  It should be readily apparent that this type of search will either find one record, or none at all.  This operation is known as a singleton lookup.  Given the example table from before, the following query is an example of a singleton lookup seek: Sadly, there’s nothing in the graphical plan or XML output to show that this is a singleton lookup – you have to infer it from the fact that this is a single-value equality seek on a unique index.  The other common examples of a singleton lookup are bookmark lookups – both the RID and Key Lookup forms are singleton lookups (an RID lookup finds a single record in a heap from the unique row locator, and a Key Lookup does much the same thing on a clustered table).  If you happen to run your query with STATISTICS IO ON, you will notice that ‘Scan Count’ is always zero for a singleton lookup. Range Scans The other type of seek predicate is a ‘seek plus range scan’, which I will refer to simply as a range scan.  
The seek operation makes an initial descent into the index structure to find the first leaf row that qualifies, and then performs a range scan (either backwards or forwards in the index) until it reaches the end of the scan range. The ability of a range scan to proceed in either direction comes about because index pages at the same level are connected by a doubly-linked list – each page has a pointer to the previous page (in logical key order) as well as a pointer to the following page.  The doubly-linked list is represented by the green and red dotted arrows in the index diagram presented earlier.  One subtle (but important) point is that the notion of a ‘forward’ or ‘backward’ scan applies to the logical key order defined when the index was built.  In the present case, the non-clustered primary key index was created as follows: CREATE TABLE dbo.Example ( key_col INTEGER NOT NULL, data INTEGER NOT NULL, CONSTRAINT [PK dbo.Example key_col] PRIMARY KEY NONCLUSTERED (key_col ASC) ) ; Notice that the primary key index specifies an ascending sort order for the single key column.  This means that a forward scan of the index will retrieve keys in ascending order, while a backward scan would retrieve keys in descending key order.  If the index had been created instead on key_col DESC, a forward scan would retrieve keys in descending order, and a backward scan would return keys in ascending order. A range scan seek predicate may have a Start condition, an End condition, or both.  Where one is missing, the scan starts (or ends) at one extreme end of the index, depending on the scan direction.  Some examples might help clarify that: the following diagram shows four queries, each of which performs a single seek against a column holding every integer from 1 to 100 inclusive.  The results from each query are shown in the blue columns, and relevant attributes from the Properties window appear on the right: Query 1 specifies that all key_col values less than 5 should be returned in ascending order.  The query plan achieves this by seeking to the start of the index leaf (there is no explicit starting value) and scanning forward until the End condition (key_col < 5) is no longer satisfied (SQL Server knows it can stop looking as soon as it finds a key_col value that isn’t less than 5 because all later index entries are guaranteed to sort higher). Query 2 asks for key_col values greater than 95, in descending order.  SQL Server returns these results by seeking to the end of the index, and scanning backwards (in descending key order) until it comes across a row that isn’t greater than 95.  Sharp-eyed readers may notice that the end-of-scan condition is shown as a Start range value.  This is a bug in the XML show plan which bubbles up to the Properties window – when a backward scan is performed, the roles of the Start and End values are reversed, but the plan does not reflect that.  Oh well. Query 3 looks for key_col values that are greater than or equal to 10, and less than 15, in ascending order.  This time, SQL Server seeks to the first index record that matches the Start condition (key_col >= 10) and then scans forward through the leaf pages until the End condition (key_col < 15) is no longer met. Query 4 performs much the same sort of operation as Query 3, but requests the output in descending order.  
Again, we have to mentally reverse the Start and End conditions because of the bug, but otherwise the process is the same as always: SQL Server finds the highest-sorting record that meets the condition ‘key_col < 25’ and scans backward until ‘key_col >= 20’ is no longer true. One final point to note: seek operations always have the Ordered: True attribute.  This means that the operator always produces rows in a sorted order, either ascending or descending depending on how the index was defined, and whether the scan part of the operation is forward or backward.  You cannot rely on this sort order in your queries of course (you must always specify an ORDER BY clause if order is important) but SQL Server can make use of the sort order internally.  In the four queries above, the query optimizer was able to avoid an explicit Sort operator to honour the ORDER BY clause, for example. Multiple Seek Predicates As we saw yesterday, a single index seek plan operator can contain one or more seek predicates.  These seek predicates can either be all singleton seeks or all range scans – SQL Server does not mix them.  For example, you might expect the following query to contain two seek predicates, a singleton seek to find the single record in the unique index where key_col = 10, and a range scan to find the key_col values between 15 and 20: SELECT key_col FROM dbo.Example WHERE key_col = 10 OR key_col BETWEEN 15 AND 20 ORDER BY key_col ASC ; In fact, SQL Server transforms the singleton seek (key_col = 10) to the equivalent range scan, Start:[key_col >= 10], End:[key_col <= 10].  This allows both range scans to be evaluated by a single seek operator.  To be clear, this query results in two range scans: one from 10 to 10, and one from 15 to 20. Final Thoughts That’s it for today – tomorrow we’ll look at monitoring singleton lookups and range scans, and I’ll show you a seek on a heap table. Yes, a seek.  On a heap.  Not an index! If you would like to run the queries in this post for yourself, there’s a script below.  Thanks for reading! 
    IF OBJECT_ID(N'dbo.Example', N'U') IS NOT NULL
    BEGIN
        DROP TABLE dbo.Example;
    END
    ;
    -- Test table is a heap
    -- Non-clustered primary key on 'key_col'
    CREATE TABLE dbo.Example
    (
        key_col INTEGER NOT NULL,
        data    INTEGER NOT NULL,
        CONSTRAINT [PK dbo.Example key_col]
            PRIMARY KEY NONCLUSTERED (key_col)
    )
    ;
    -- Non-unique non-clustered index on the 'data' column
    CREATE NONCLUSTERED INDEX [IX dbo.Example data]
    ON dbo.Example (data)
    ;
    -- Add 100 rows
    INSERT dbo.Example WITH (TABLOCKX)
    (
        key_col,
        data
    )
    SELECT
        key_col = V.number,
        data = V.number
    FROM master.dbo.spt_values AS V
    WHERE
        V.[type] = N'P'
        AND V.number BETWEEN 1 AND 100
    ;
    -- ================
    -- Singleton lookup
    -- ================
    ;
    -- Single value equality seek in a unique index
    -- Scan count = 0 when STATISTICS IO is ON
    -- Check the XML SHOWPLAN
    SELECT E.key_col
    FROM dbo.Example AS E
    WHERE E.key_col = 32
    ;
    -- ===========
    -- Range Scans
    -- ===========
    ;
    -- Query 1
    SELECT E.key_col
    FROM dbo.Example AS E
    WHERE E.key_col <= 5
    ORDER BY E.key_col ASC
    ;
    -- Query 2
    SELECT E.key_col
    FROM dbo.Example AS E
    WHERE E.key_col > 95
    ORDER BY E.key_col DESC
    ;
    -- Query 3
    SELECT E.key_col
    FROM dbo.Example AS E
    WHERE E.key_col >= 10
        AND E.key_col < 15
    ORDER BY E.key_col ASC
    ;
    -- Query 4
    SELECT E.key_col
    FROM dbo.Example AS E
    WHERE E.key_col >= 20
        AND E.key_col < 25
    ORDER BY E.key_col DESC
    ;
    -- Final query (singleton + range = 2 range scans)
    SELECT E.key_col
    FROM dbo.Example AS E
    WHERE E.key_col = 10
        OR E.key_col BETWEEN 15 AND 20
    ORDER BY E.key_col ASC
    ;
    -- === TIDY UP ===
    DROP TABLE dbo.Example;

    © 2011 Paul White
    email: [email protected]
    twitter: @SQL_Kiwi

    Read the article

  • 2.5D game development

    - by ne5tebiu
    2.5D ("two-and-a-half-dimensional"), 3/4 perspective and pseudo-3D are terms used to describe either: graphical projections and techniques which cause a series of images or scenes to fake or appear to be three-dimensional (3D) when in fact they are not, or gameplay in an otherwise three-dimensional video game that is restricted to a two-dimensional plane. (Information taken from Wikipedia.org) I have a question based on 2.5D game development. As stated before, 2.5D uses graphical projections and techniques to make fake 3d or a gameplay restricted to a two-dimensional plane. A good example is a TQ Digital made game: Zero Online (screenshot) the whole map is made of 2d images and only NPCs and players are 3d. The maps were drawn manually by hand without any 3d software rendering. As I'm playing the game I feel like I'm going from a lower part of the map (ground) to a higher one (some metal platform) and it feels like I'm moving in 3 dimensions. But when I look closely, I see that the player size didn't change and the shadow too but I'm still feeling like I'm somehow higher then before (I had rendered a simple map myself that I made in 3dmax but it didn't quite give the result I wanted). How to accomplish such an effect?

    Read the article

  • Unity 3D (with Nvidia driver) becomes very slow and laggy

    - by Graham
    How can I prevent my Unity 3D desktop from becoming slow after a while, given that I have Nvidia Quadro NVS 290 graphics in TwinView mode? The desktop starts out fast on login, but becomes slow / lagging / hesitant / high latency after a while, the symptom being spikes in CPU usage by /usr/bin/X whenever I cause any graphical activity with the mouse or keyboard (e.g. typing, changing tabs, dragging windows). The desktop remains slow even with all windows (except htop in Terminal) and extraneous processes killed.

    Detail: Changing tabs in Terminal takes about a second, and X spikes to 76% CPU. As I type into Firefox, X spikes to 95% CPU. Dragging the Terminal window, X goes to 70% CPU. Basically, every graphical action sends the CPU usage of X through the roof.

    Device: Nvidia Quadro NVS 290. Driver package: binary driver nvidia-current-updates (280.13-0ubuntu5). Dual monitors: pair of DELL UltraSharp 1908FP in TwinView (X screen 2560x1024). OS: fresh install of Ubuntu 11.10 amd64 Desktop with all updates. Hardware: Dell Precision T5400 Workstation. Pastebin of Xorg.0.log. Pastebin of xorg.conf. Pastebin of nvidia-xconfig -t output (easier to read than xorg.conf). Output of /usr/lib/nux/unity_support_test -p:

    To obtain the following htop screenshot I typed "asdf" several times in this text box, alt-tabbed to Terminal and took a screenshot of the high X CPU usage. This also happens when Firefox is not running.

    The Quadro NVS 290 has "No" thermal sensor according to sensors-detect:

        Next adapter: NVIDIA i2c adapter 0 at 2:00.0 (i2c-0)
        Do you want to scan it? (YES/no/selectively): Client found at address 0x50
        Probing for `Analog Devices ADM1033'... No
        Probing for `Analog Devices ADM1034'... No
        Probing for `SPD EEPROM'... No
        Probing for `EDID EEPROM'... Yes
        (confidence 8, not a hardware monitoring chip)

    I tried the nouveau driver by disabling nvidia-current-updates under Additional Drivers, but Ubuntu and xrandr -q fail to detect the second monitor. This may be issue 737349. The funniest thing is that the Nouveau wiki says XRandR 1.2 dual-monitor mode is supported, so it should work with a second monitor.

    Read the article

  • GRUB options are not visible when booting on Samsung ATIV Book 9 Lite running Ubuntu 14.04

    - by mjwittering
    I've managed to install Ubuntu 14.04 on my new Samsung ATIV Book 9 Lite ultrabook. After updating some configurations in the UEFI, installation was very easy. The only issue I believe I'm still experiencing is when booting: when the laptop should be displaying the GRUB boot options, all I see is a black screen with a purple border of about 10px around the edge. I'd like to know how I can update my system so that I see the GRUB boot manager. I've run these commands:

        sudo cat /etc/default/grub

        # If you change this file, run 'update-grub' afterwards to update
        # /boot/grub/grub.cfg.
        # For full documentation of the options in this file, see:
        # info -f grub -n 'Simple configuration'
        GRUB_DEFAULT=0
        GRUB_HIDDEN_TIMEOUT=0
        GRUB_HIDDEN_TIMEOUT_QUIET=true
        GRUB_TIMEOUT=10
        GRUB_DISTRIBUTOR=`lsb_release -i -s 2> /dev/null || echo Debian`
        GRUB_CMDLINE_LINUX_DEFAULT="quiet splash"
        GRUB_CMDLINE_LINUX=""
        # Uncomment to enable BadRAM filtering, modify to suit your needs
        # This works with Linux (no patch required) and with any kernel that obtains
        # the memory map information from GRUB (GNU Mach, kernel of FreeBSD ...)
        #GRUB_BADRAM="0x01234567,0xfefefefe,0x89abcdef,0xefefefef"
        # Uncomment to disable graphical terminal (grub-pc only)
        #GRUB_TERMINAL=console
        # The resolution used on graphical terminal
        # note that you can use only modes which your graphic card supports via VBE
        # you can see them in real GRUB with the command `vbeinfo'
        #GRUB_GFXMODE=640x480
        # Uncomment if you don't want GRUB to pass "root=UUID=xxx" parameter to Linux
        #GRUB_DISABLE_LINUX_UUID=true
        # Uncomment to disable generation of recovery mode menu entries
        #GRUB_DISABLE_RECOVERY="true"
        # Uncomment to get a beep at grub start
        #GRUB_INIT_TUNE="480 440 1"

    I also tried sudo efibootmgr, but that command was not possible.

    Read the article

< Previous Page | 57 58 59 60 61 62 63 64 65 66 67 68  | Next Page >