Search Results

Search found 8367 results on 335 pages for 'temporal difference'.


  • Code review: Is it subjective or objective (quantifiable)?

    - by Ram
    I am putting together some guidelines for code reviews. We do not have a formal process yet and are trying to formalize one, and our team is geographically distributed. We are using TFS for source control (we used it for tasks/bug tracking/project management as well, but migrated that to JIRA) with VS2008 for development. What are the things you look for when doing a code review? These are the things I came up with:
    - Enforce FxCop rules (we are a Microsoft shop)
    - Check for performance (any tools?), security (thinking about using OWASP Code Crawler) and thread safety
    - Adhere to naming conventions
    - The code should cover edge cases and boundary conditions
    - Handle exceptions correctly (do not swallow exceptions)
    - Check whether the functionality is duplicated elsewhere
    - A method body should be small (20-30 lines), and methods should do one thing and one thing only (no side effects / avoid temporal coupling)
    - Do not pass or return nulls in methods
    - Avoid dead code
    - Document public and protected methods/properties/variables
    What other things do you generally look for? I am trying to see if we can quantify the review process (so it would produce identical output when reviewed by different persons). Example: saying "the method body should be no longer than 20-30 lines of code" as opposed to saying "the method body should be small". Or is code review very subjective (and would it differ from one reviewer to another)? The objective is to have a marking system (say -1 point for each FxCop rule violation, -2 points for not following naming conventions, 2 points for refactoring, etc.) so that developers would be more careful when they check in their code. This way, we can identify developers who are consistently writing good/bad code. The goal is to have the reviewer spend about 30 minutes max to do a review (I know this is subjective, considering that the changeset/revision might include multiple files or huge changes to the existing architecture, but you get the general idea: the reviewer should not spend days reviewing someone's code). What other objective/quantifiable system do you follow to identify good/bad code written by developers? Book reference: Clean Code: A Handbook of Agile Software Craftsmanship by Robert Martin

    Read the article

  • Resources related to data-mining and gaming on social networks

    - by darren
    Hi all, I'm interested in the problem of pattern mining among players of social networking games, for example detecting cheaters of a game, given a company's user database. So far I have been following the usual recipe for a data mining project:
    - construct a data warehouse that aggregates significant information
    - select a classifier, and train it with a subsection of records from the warehouse
    - validate the classifier with another test set
    - lather, rinse, repeat
    Surprisingly, I've found very little in this area regarding literature, best practices, etc. I am hoping to crowdsource the information gathering problem here. Specifically, what I'm looking for:
    - What classifiers have worked well for this type of pattern mining (it seems highly temporal: users playing games, users receiving rewards, users transferring prizes, etc.)?
    - Are there any highly agreed upon attributes specific to social networking / gaming data?
    - What is a practical amount of information that should be considered? One problem I've run into is data overload, where queries and data cleansing may take days to complete.
    - Related to the point above, what hardware resources are required to produce results? I've found it difficult to estimate the amount of computing power I will require for production use. It has become apparent that a white box in the corner does not have enough horsepower for such a project. Are companies generally resorting to cloud solutions? Are they buying clusters?
    Basically, any resources (theoretical, academic, or practical) about implementing a social networking / gaming pattern-mining program would be very much appreciated. Thanks.

    Read the article

  • Lazy loading of Blob properties of one class

    - by Khosro
    Hi, My class contains "summary" and "title" properties those are Blob and other properties. Code:(I write some part of class) public class News extends BaseEntity{ @Lob @Basic(fetch = FetchType.LAZY) public String getSummary() { return summary; } @Lob @Basic(fetch = FetchType.LAZY) public String getTitle() { return title; } @Temporal(TemporalType.TIMESTAMP) public Date getPublishDate() { return publishDate; } } I instrument this class to lazy load of Blob properties using this class "org.hibernate.tool.instrument.javassist.InstrumentTask". When i write this code to retrieve only summary of new , newsDAO.findByid(1L).getSummary(); Hibernate generates these queries: Hibernate: select news0_.id as id1_, news0_.entityVersion as entityVe2_1_, news0_.publishDate as publish15_1_, news0_.url as url1_ from News news0_ Hibernate: select news_.summary as summary1_, news_.title as title1_ from News news_ where news_.id=? I have two qurestions: 1.I only want to retrieve "summary" property not "title" property,but Hibernate queries show that it also retrieve "title" property,Why this happens(i want to lazy load of "property")? It seems when i load one of Blob property ,Hibernate loads all of them.Why?(This is my main question) 2.Why Hibernate generates two queries for retrieving only "summary" property of news? Khosro.
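
    A possible workaround, sketched below (hypothetical DAO, not from the original question): with bytecode instrumentation, Hibernate fetches all lazy properties of an entity as a single group, so touching either @Lob tends to load both; a JPQL projection that selects only the wanted column sidesteps the fetch group entirely.

    import javax.persistence.EntityManager;

    // A sketch, assuming an open EntityManager: only the "summary" column is read,
    // so the "title" @Lob is never touched and no News entity is instantiated.
    public class NewsSummaryDao {

        private final EntityManager em;

        public NewsSummaryDao(EntityManager em) {
            this.em = em;
        }

        public String findSummaryById(Long newsId) {
            return (String) em
                .createQuery("select n.summary from News n where n.id = :id")
                .setParameter("id", newsId)
                .getSingleResult();
        }
    }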

    Read the article

  • How to manage a One-To-One and a One-To-Many of same type as unidirectional mapping?

    - by user1652438
    I'm trying to implement a model for private messages between two or more users. That means I've got two Entities: User PrivateMessage The User model shouldn't be edited, so I'm trying to set up an unidirectional relationship: @Entity (name = "User") @Table (name = "user") public class User implements Serializable { @Id String username; String password; // something like this ... } The PrivateMessage Model addresses multiple receivers and has exactly one sender. So I need something like this: @Entity (name = "PrivateMessage") @Table (name = "privateMessage") @XmlRootElement @XmlType (propOrder = {"id", "sender", "receivers", "title", "text", "date", "read"}) public class PrivateMessage implements Serializable { private static final long serialVersionUID = -9126766942868177246L; @Id @GeneratedValue private Long id; @NotNull private String title; @NotNull private String text; @NotNull @Temporal(TemporalType.TIMESTAMP) private Date date; @NotNull private boolean read; @NotNull @ElementCollection(fetch = FetchType.EAGER, targetClass = User.class) private Set<User> receivers; @NotNull @OneToOne private User sender; // and so on } The according 'privateMessage' table won't be generated and just the relationship between the PM and the many receivers is satisfied. I'm confused about this. Everytime I try to set a 'mappedBy' attribute, my IDE marks it as an error. It seems to be a problem that the User-entity isn't aware of the private message which maps it. What am I doing wrong here? I've solved some situation similar to this one, but none of those solutions will work here. Thanks in advance!
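
    One possible unidirectional mapping, sketched below (column and table names are illustrative, not from the original post): entity-valued collections are normally mapped with @ManyToMany or @OneToMany plus a @JoinTable rather than @ElementCollection, which targets embeddables and basic types, and mappedBy belongs only on an inverse side, so a purely unidirectional mapping never uses it.

    import java.io.Serializable;
    import java.util.HashSet;
    import java.util.Set;
    import javax.persistence.Entity;
    import javax.persistence.FetchType;
    import javax.persistence.GeneratedValue;
    import javax.persistence.Id;
    import javax.persistence.JoinColumn;
    import javax.persistence.JoinTable;
    import javax.persistence.ManyToMany;
    import javax.persistence.ManyToOne;

    @Entity
    public class PrivateMessage implements Serializable {

        @Id
        @GeneratedValue
        private Long id;

        // many messages -> many receiving users; PrivateMessage owns the whole link
        @ManyToMany(fetch = FetchType.EAGER)
        @JoinTable(name = "privateMessage_receiver",
                   joinColumns = @JoinColumn(name = "message_id"),
                   inverseJoinColumns = @JoinColumn(name = "receiver_username"))
        private Set<User> receivers = new HashSet<User>();

        // many messages -> one sending user; a @OneToOne here would force a unique
        // sender column, i.e. each user could only ever send a single message
        @ManyToOne(optional = false)
        @JoinColumn(name = "sender_username")
        private User sender;

        // title, text, date, read flag and accessors omitted for brevity
    }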

    Read the article

  • CSG operations on implicit surfaces with marching cubes

    - by Mads Elvheim
    I render isosurfaces with marching cubes, (or perhaps marching squares as this is 2D) and I want to do set operations like set difference, intersection and union. I thought this was easy to implement, by simply choosing between two vertex scalars from two different implicit surfaces, but it is not. For my initial testing, I tried with two spheres, and the set operation difference. i.e A - B. One sphere is moving and the other one is stationary. Here's the approach I tried when picking vertex scalars and when classifying corner vertices as inside or outside. The code is written in C++. OpenGL is used for rendering, but that's not important. Normal rendering without any CSG operations does give the expected result. void march(const vec2& cmin, //min x and y for the grid cell const vec2& cmax, //max x and y for the grid cell std::vector<vec2>& tri, float iso, float (*cmp1)(const vec2&), //distance from stationary sphere float (*cmp2)(const vec2&) //distance from moving sphere ) { unsigned int squareindex = 0; float scalar[4]; vec2 verts[8]; /* initial setup of the grid cell */ verts[0] = vec2(cmax.x, cmax.y); verts[2] = vec2(cmin.x, cmax.y); verts[4] = vec2(cmin.x, cmin.y); verts[6] = vec2(cmax.x, cmin.y); float s1,s2; /********************************** ********For-loop of interest****** *******Set difference between **** *******two implicit surfaces****** **********************************/ for(int i=0,j=0; i<4; ++i, j+=2){ s1 = cmp1(verts[j]); s2 = cmp2(verts[j]); if((s1 < iso)){ //if inside sphere1 if((s2 < iso)){ //if inside sphere2 scalar[i] = s2; //then set the scalar to the moving sphere } else { scalar[i] = s1; //only inside sphere1 squareindex |= (1<<i); //mark as inside } } else { scalar[i] = s1; //inside neither sphere } } if(squareindex == 0) return; /* Usual interpolation between edge points to compute the new intersection points */ verts[1] = mix(iso, verts[0], verts[2], scalar[0], scalar[1]); verts[3] = mix(iso, verts[2], verts[4], scalar[1], scalar[2]); verts[5] = mix(iso, verts[4], verts[6], scalar[2], scalar[3]); verts[7] = mix(iso, verts[6], verts[0], scalar[3], scalar[0]); for(int i=0; i<10; ++i){ //10 = maxmimum 3 triangles, + one end token int index = triTable[squareindex][i]; //look up our indices for triangulation if(index == -1) break; tri.push_back(verts[index]); } } This gives me weird jaggies: It looks like the CSG operation is done without interpolation. It just "discards" the whole triangle. Do I need to interpolate in some other way, or combine the vertex scalar values? I'd love some help with this. A full testcase can be downloaded HERE
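
    For reference, the standard per-corner combination of two implicit fields is sketched below (assuming the usual signed-distance convention, surface at d = 0 and d < 0 inside; d_A and d_B stand for the values returned by cmp1 and cmp2):

    \begin{align*}
    d_{A \cup B}(p)      &= \min\bigl(d_A(p),\, d_B(p)\bigr)  && \text{union} \\
    d_{A \cap B}(p)      &= \max\bigl(d_A(p),\, d_B(p)\bigr)  && \text{intersection} \\
    d_{A \setminus B}(p) &= \max\bigl(d_A(p),\, -d_B(p)\bigr) && \text{difference } A - B
    \end{align*}
    % With a non-zero iso level (inside when d < iso), the complement term
    % -d_B(p) generalizes to 2\,\mathit{iso} - d_B(p).

    Feeding this single combined scalar into the unchanged classification and edge-interpolation step gives smooth crossings along the cut; switching between the two raw sphere scalars per corner, as the loop above does, hands each cell an inconsistent field to interpolate, which is one common source of exactly this kind of jaggies.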

    Read the article

  • Why is an inverse loop faster than a normal loop (test included)

    - by Saif Bechan
    I have been running some small tests in PHP on loops. I do not know if my method is good. I have found that a inverse loop is faster than a normal loop. I have also found that a while-loop is faster than a for-loop. Setup <?php $counter = 10000000; $w=0;$x=0;$y=0;$z=0; $wstart=0;$xstart=0;$ystart=0;$zstart=0; $wend=0;$xend=0;$yend=0;$zend=0; $wstart = microtime(true); for($w=0; $w<$counter; $w++){ echo ''; } $wend = microtime(true); echo "normal for: " . ($wend - $wstart) . "<br />"; $xstart = microtime(true); for($x=$counter; $x>0; $x--){ echo ''; } $xend = microtime(true); echo "inverse for: " . ($xend - $xstart) . "<br />"; echo "<hr> normal - inverse: " . (($wend - $wstart) - ($xend - $xstart)) . "<hr>"; $ystart = microtime(true); $y=0; while($y<$counter){ echo ''; $y++; } $yend = microtime(true); echo "normal while: " . ($yend - $ystart) . "<br />"; $zstart = microtime(true); $z=$counter; while($z>0){ echo ''; $z--; } $zend = microtime(true); echo "inverse while: " . ($zend - $zstart) . "<br />"; echo "<hr> normal - inverse: " . (($yend - $ystart) - ($zend - $zstart)) . "<hr>"; echo "<hr> inverse for - inverse while: " . (($xend - $xstart) - ($zend - $zstart)) . "<hr>"; ?> Average Results The difference in for-loop normal for: 1.0908501148224 inverse for: 1.0212800502777 normal - inverse: 0.069570064544678 The difference in while-loop normal while: 1.0395669937134 inverse while: 0.99321985244751 normal - inverse: 0.046347141265869 The difference in for-loop and while-loop inverse for - inverse while: 0.0280601978302 Questions My question is can someone explain these differences in results? And is my method of benchmarking been correct?

    Read the article

  • Unnecessary Java context switches

    - by Paul Morrison
    I have a network of Java Threads (Flow-Based Programming) communicating via fixed-capacity channels - running under WindowsXP. What we expected, based on our experience with "green" threads (non-preemptive), would be that threads would switch context less often (thus reducing CPU time) if the channels were made bigger. However, we found that increasing channel size does not make any difference to the run time. What seems to be happening is that Java decides to switch threads even though channels aren't full or empty (i.e. even though a thread doesn't have to suspend), which costs CPU time for no apparent advantage. Also changing Thread priorities doesn't make any observable difference. My question is whether there is some way of persuading Java not to make unnecessary context switches, but hold off switching until it is really necessary to switch threads - is there some way of changing Java's dispatching logic? Or is it reacting to something I didn't pay attention to?! Or are there other asynchronism mechanisms, e.g. Thread factories, Runnable(s), maybe even daemons (!). The answer appears to be non-obvious, as so far none of my correspondents has come up with an answer (including most recently two CS profs). Or maybe I'm missing something that's so obvious that people can't imagine my not knowing it... I've added the send and receive code here - not very elegant, but it seems to work...;-) In case you are wondering, I thought the goLock logic in 'send' might be causing the problem, but removing it temporarily didn't make any difference. I have added the code for send and receive... public synchronized Packet receive() { if (isDrained()) { return null; } while (isEmpty()) { try { wait(); } catch (InterruptedException e) { close(); return null; } if (isDrained()) { return null; } } if (isDrained()) { return null; } if (isFull()) { notifyAll(); // notify other components waiting to send } Packet packet = array[receivePtr]; array[receivePtr] = null; receivePtr = (receivePtr + 1) % array.length; //notifyAll(); // only needed if it was full usedSlots--; packet.setOwner(receiver); if (null == packet.getContent()) { traceFuncs("Received null packet"); } else { traceFuncs("Received: " + packet.toString()); } return packet; } synchronized boolean send(final Packet packet, final OutputPort op) { sender = op.sender; if (isClosed()) { return false; } while (isFull()) { try { wait(); } catch (InterruptedException e) { indicateOneSenderClosed(); return false; } sender = op.sender; } if (isClosed()) { return false; } try { receiver.goLock.lockInterruptibly(); } catch (InterruptedException ex) { return false; } try { packet.clearOwner(); array[sendPtr] = packet; sendPtr = (sendPtr + 1) % array.length; usedSlots++; // move this to here if (receiver.getStatus() == StatusValues.DORMANT || receiver.getStatus() == StatusValues.NOT_STARTED) { receiver.activate(); // start or wake up if necessary } else { notifyAll(); // notify receiver // other components waiting to send to this connection may also get // notified, // but this is handled by while statement } sender = null; Component.network.active = true; } finally { receiver.goLock.unlock(); } return true; }
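
    As a point of comparison (a minimal sketch, not the poster's FBP code): java.util.concurrent's ArrayBlockingQueue already implements a fixed-capacity channel whose senders park only while the queue is full and whose receivers park only while it is empty, so swapping it in is a cheap way to test whether the hand-rolled wait()/notifyAll() logic, which wakes every waiter on each notifyAll(), is responsible for the extra switches.

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    // A sketch of a fixed-capacity channel built on ArrayBlockingQueue;
    // "Packet" is just the type parameter standing in for the poster's packet type.
    public class Channel<Packet> {

        private final BlockingQueue<Packet> queue;

        public Channel(int capacity) {
            this.queue = new ArrayBlockingQueue<Packet>(capacity);
        }

        public void send(Packet packet) throws InterruptedException {
            queue.put(packet);    // blocks only while the channel is full
        }

        public Packet receive() throws InterruptedException {
            return queue.take();  // blocks only while the channel is empty
        }
    }

    If the run time does not change with this queue-backed channel, the switching is more likely coming from the OS scheduler and the hand-off pattern between components than from the channel implementation itself.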

    Read the article

  • Finding users whose birthday is today with JPA

    - by Jeduan Cornejo
    Hi, I have a table with users and am trying to get a list of the people who have a birthday today so the app can send them an email. The User is defined as

    @Entity
    public class User {
        @Size(max = 30)
        @NotNull
        private String name;
        [...]
        @Temporal(TemporalType.DATE)
        @DateTimeFormat(style = "S-")
        protected Date birthday;
    }

    and I've got a method which returns the people who were born today, like so:

    @SuppressWarnings("unchecked")
    public static List<User> findUsersWithBirthday() {
        List<User> users = entityManager()
            .createQuery("select u from User u where u.birthday = :date")
            .setParameter("date", new Date(), TemporalType.DATE)
            .getResultList();
        return users;
    }

    This is fine for finding people who were born on today's exact date, but that's not really that useful, so I've been struggling for a way to find all the users who were born on this day, independent of the year. Using MySQL there are functions I can use, like

    select month(curdate()) as month, dayofmonth(curdate()) as day

    but I've been struggling to find a JPA equivalent. I'm using Spring 3.0.1 and Hibernate 3.3.2.
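
    One possible direction, sketched below as a drop-in variant of findUsersWithBirthday() (assuming Hibernate is the provider: month() and day() are HQL extensions rather than portable JPQL, which is why a pure JPA equivalent is hard to find):

    import java.util.Calendar;
    import java.util.List;

    // A sketch: compare only the month and day of the mapped birthday, ignoring the year.
    @SuppressWarnings("unchecked")
    public static List<User> findUsersWithBirthdayToday() {
        Calendar today = Calendar.getInstance();
        return entityManager()
            .createQuery("select u from User u"
                       + " where month(u.birthday) = :month and day(u.birthday) = :day")
            .setParameter("month", today.get(Calendar.MONTH) + 1) // Calendar.MONTH is zero-based
            .setParameter("day", today.get(Calendar.DAY_OF_MONTH))
            .getResultList();
    }

    Alternatively, createNativeQuery with the MySQL month()/dayofmonth() functions shown above keeps the logic in SQL, at the cost of tying the query to MySQL.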

    Read the article

  • translate a PHP $string using google translator API

    - by Toni Michel Caubet
    Hey there! I've been googling for a while for the best way to translate with the Google translator API in PHP. I found very different ways (converting URLs, or using JS) but I want to do it only with PHP (or with a very simple JS/jQuery solution). Example: //hopefully with $from_lan and $to_lan being like 'en','de', .. or similar function translate($from_lan, $to_lan, $text){ // do return $translated_text; } Can you give me a clue? Or maybe you already have this function.. My intention is to use it only for the languages I have not already defined (or keys I haven't defined), that's why I want it so simple, it will be only temporal.. EDIT: thanks for your replies, we are now trying these solutions: function auto_translate($from_lan, $to_lan, $text){ // do $json = json_decode(file_get_contents('https://ajax.googleapis.com/ajax/services/language/translate?v=1.0&q=' . urlencode($text) . '&langpair=' . $from_lan . '|' . $to_lan)); $translated_text = $json->responseData->translatedText; return $translated_text; } (there was an extra 'g' on the variables for lang... anyway) it works now :) I didn't really understand much of the function, so any idea why it was not accepting the object? (now I do) OR: function auto_translate($from_lan, $to_lan, $text){ // do // $json = json_decode(file_get_contents('https://ajax.googleapis.com/ajax/services/language/translate?v=1.0&q=' . urlencode($text) . '&langpair=' . $from_lan . '|' . $to_lan)); // $translated_text = $json['responseData']['translatedText']; error_reporting(1); require_once('GTranslate.php'); try{ $gt = new Gtranslate(); $translated_text = $gt->english_to_german($text); } catch (GTranslateException $ge) { $translated_text = $ge->getMessage(); } return $translated_text; } And this one looks great but it doesn't even give me an error, the page won't load (error_reporting(1) :S). Thanks in advance!

    Read the article

  • Storing a jpa entity where only the timestamp changes results in updates rather than inserts (desire

    - by David Schlenk
    I have a JPA entity that stores a fk id, a boolean and a timestamp: @Entity public class ChannelInUse implements Serializable { @Id @GeneratedValue private Long id; @ManyToOne @JoinColumn(nullable = false) private Channel channel; private boolean inUse = false; @Temporal(TemporalType.TIMESTAMP) private Date inUseAt = new Date(); ... } I want every new instance of this entity to result in a new row in the table. For whatever reason no matter what I do it always results in the row getting updated with a new timestamp value rather than creating a new row. Even tried to just use a native query to run an insert but channel's ID wasn't populated yet so I gave up on that. I've tried using an embedded id class consisting of channel.getId and inUseAt. My equals and hashcode for are: public boolean equals(Object obj){ if(this == obj) return true; if(!(obj instanceof ChannelInUse)) return false; ChannelInUse ciu = (ChannelInUse) obj; return ( (this.inUseAt == null ? ciu.inUseAt == null : this.inUseAt.equals(ciu.inUseAt)) && (this.inUse == ciu.inUse) && (this.channel == null ? ciu.channel == null : this.channel.equals(ciu.channel)) ); } /** * hashcode generated from at, channel and inUse properties. */ public int hashCode(){ int hash = 1; hash = hash * 31 + (this.inUseAt == null ? 0 : this.inUseAt.hashCode()); hash = hash * 31 + (this.channel == null ? 0 : this.channel.hashCode()); if(inUse) hash = hash * 31 + 1; else hash = hash * 31 + 0; return hash; } } I've tried using hibernate's Entity annotation with mutable=false. I'm probably just not understanding what makes an entity unique or something. Hit the google pretty hard but can't figure this one out.
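
    One direction worth sketching (hypothetical DAO, setter names assumed): whether the provider issues an INSERT or an UPDATE is decided by the entity's identifier and its state in the persistence context, not by equals()/hashCode(), so persisting a brand-new instance for every usage event yields a new row, while reusing or merge()-ing an instance that already carries an id updates the existing row.

    import javax.persistence.EntityManager;

    // A sketch: build a fresh ChannelInUse per usage event and persist() it.
    public class ChannelInUseDao {

        private final EntityManager em;

        public ChannelInUseDao(EntityManager em) {
            this.em = em;
        }

        public void recordUsage(Channel channel, boolean inUse) {
            ChannelInUse usage = new ChannelInUse(); // no id yet => transient, will be INSERTed
            usage.setChannel(channel);
            usage.setInUse(inUse);                   // inUseAt defaults to new Date()

            em.getTransaction().begin();
            em.persist(usage);                       // id assigned by @GeneratedValue at flush/commit
            em.getTransaction().commit();
        }
    }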

    Read the article

  • Struts2 Hibernate Login with User table and group table

    - by J2ME NewBiew
    My problem is, i have a table User and Table Group (this table use to authorization for user - it mean when user belong to a group like admin, they can login into admincp and other user belong to group member, they just only read and write and can not login into admincp) each user maybe belong to many groups and each group has been contain many users and they have relationship are many to many I use hibernate for persistence storage. and struts 2 to handle business logic. When i want to implement login action from Struts2 how can i get value of group member belong to ? to compare with value i want to know? Example I get user from username and password then get group from user class but i dont know how to get value of group user belong to it mean if user belong to Groupid is 1 and in group table , at column adminpermission is 1, that user can login into admincp, otherwise he can't my code: User.java /* * To change this template, choose Tools | Templates * and open the template in the editor. */ package org.dejavu.software.model; import java.io.Serializable; import java.util.Date; import java.util.HashSet; import java.util.Set; import javax.persistence.CascadeType; import javax.persistence.Column; import javax.persistence.Entity; import javax.persistence.FetchType; import javax.persistence.GeneratedValue; import javax.persistence.Id; import javax.persistence.JoinColumn; import javax.persistence.JoinTable; import javax.persistence.ManyToMany; import javax.persistence.Table; import javax.persistence.Temporal; /** * * @author Administrator */ @Entity @Table(name="User") public class User implements Serializable{ private static final long serialVersionUID = 2575677114183358003L; private Long userId; private String username; private String password; private String email; private Date DOB; private String address; private String city; private String country; private String avatar; private Set<Group> groups = new HashSet<Group>(0); @Column(name="dob") @Temporal(javax.persistence.TemporalType.DATE) public Date getDOB() { return DOB; } public void setDOB(Date DOB) { this.DOB = DOB; } @Column(name="address") public String getAddress() { return address; } public void setAddress(String address) { this.address = address; } @Column(name="city") public String getCity() { return city; } public void setCity(String city) { this.city = city; } @Column(name="country") public String getCountry() { return country; } public void setCountry(String country) { this.country = country; } @Column(name="email") public String getEmail() { return email; } public void setEmail(String email) { this.email = email; } @ManyToMany(fetch = FetchType.LAZY, cascade = CascadeType.ALL) @JoinTable(name="usergroup",joinColumns={@JoinColumn(name="userid")},inverseJoinColumns={@JoinColumn( name="groupid")}) public Set<Group> getGroups() { return groups; } public void setGroups(Set<Group> groups) { this.groups = groups; } @Column(name="password") public String getPassword() { return password; } public void setPassword(String password) { this.password = password; } @Id @GeneratedValue @Column(name="iduser") public Long getUserId() { return userId; } public void setUserId(Long userId) { this.userId = userId; } @Column(name="username") public String getUsername() { return username; } public void setUsername(String username) { this.username = username; } @Column(name="avatar") public String getAvatar() { return avatar; } public void setAvatar(String avatar) { this.avatar = avatar; } } Group.java /* * To change this template, choose Tools | 
Templates * and open the template in the editor. */ package org.dejavu.software.model; import java.io.Serializable; import javax.persistence.Column; import javax.persistence.Entity; import javax.persistence.GeneratedValue; import javax.persistence.Id; import javax.persistence.Table; /** * * @author Administrator */ @Entity @Table(name="Group") public class Group implements Serializable{ private static final long serialVersionUID = -2722005617166945195L; private Long idgroup; private String groupname; private String adminpermission; private String editpermission; private String modpermission; @Column(name="adminpermission") public String getAdminpermission() { return adminpermission; } public void setAdminpermission(String adminpermission) { this.adminpermission = adminpermission; } @Column(name="editpermission") public String getEditpermission() { return editpermission; } public void setEditpermission(String editpermission) { this.editpermission = editpermission; } @Column(name="groupname") public String getGroupname() { return groupname; } public void setGroupname(String groupname) { this.groupname = groupname; } @Id @GeneratedValue @Column (name="idgroup") public Long getIdgroup() { return idgroup; } public void setIdgroup(Long idgroup) { this.idgroup = idgroup; } @Column(name="modpermission") public String getModpermission() { return modpermission; } public void setModpermission(String modpermission) { this.modpermission = modpermission; } } UserDAO /* * To change this template, choose Tools | Templates * and open the template in the editor. */ package org.dejavu.software.dao; import java.util.List; import org.dejavu.software.model.User; import org.dejavu.software.util.HibernateUtil; import org.hibernate.Query; import org.hibernate.Session; /** * * @author Administrator */ public class UserDAO extends HibernateUtil{ public User addUser(User user){ Session session = HibernateUtil.getSessionFactory().getCurrentSession(); session.beginTransaction(); session.save(user); session.getTransaction().commit(); return user; } public List<User> getAllUser(){ Session session = HibernateUtil.getSessionFactory().getCurrentSession(); session.beginTransaction(); List<User> user = null; try { user = session.createQuery("from User").list(); } catch (Exception e) { e.printStackTrace(); session.getTransaction().rollback(); } session.getTransaction().commit(); return user; } public User checkUsernamePassword(String username, String password){ Session session = HibernateUtil.getSessionFactory().getCurrentSession(); session.beginTransaction(); User user = null; try { Query query = session.createQuery("from User where username = :name and password = :password"); query.setString("username", username); query.setString("password", password); user = (User) query.uniqueResult(); } catch (Exception e) { e.printStackTrace(); session.getTransaction().rollback(); } session.getTransaction().commit(); return user; } } AdminLoginAction /* * To change this template, choose Tools | Templates * and open the template in the editor. 
*/ package org.dejavu.software.view; import com.opensymphony.xwork2.ActionSupport; import org.dejavu.software.dao.UserDAO; import org.dejavu.software.model.User; /** * * @author Administrator */ public class AdminLoginAction extends ActionSupport{ private User user; private String username,password; private String role; private UserDAO userDAO; public AdminLoginAction(){ userDAO = new UserDAO(); } @Override public String execute(){ return SUCCESS; } @Override public void validate(){ if(getUsername().length() == 0){ addFieldError("username", "Username is required"); }if(getPassword().length()==0){ addFieldError("password", getText("Password is required")); } } public String getPassword() { return password; } public void setPassword(String password) { this.password = password; } public String getRole() { return role; } public void setRole(String role) { this.role = role; } public User getUser() { return user; } public void setUser(User user) { this.user = user; } public String getUsername() { return username; } public void setUsername(String username) { this.username = username; } } other question. i saw some example about Login, i saw some developers use interceptor, im cant understand why they use it, and what benefit "Interceptor" will be taken for us? Thank You Very Much!
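
    A sketch of how the execute() method above could decide admin access (assuming adminpermission holds "1" for groups that may enter the admin CP, and that the groups collection is read while the Hibernate session is still open, since it is mapped lazily):

    // Requires importing org.dejavu.software.model.Group in AdminLoginAction.
    @Override
    public String execute() {
        user = userDAO.checkUsernamePassword(getUsername(), getPassword());
        if (user == null) {
            addActionError("Invalid username or password");
            return INPUT;
        }
        for (Group group : user.getGroups()) {
            if ("1".equals(group.getAdminpermission())) {
                setRole("admin");
                return "admincp";   // hypothetical result name mapped in struts.xml
            }
        }
        setRole("member");
        return SUCCESS;             // ordinary members never reach the admin CP
    }

    As for interceptors: a Struts2 interceptor can run this kind of check once, before any protected action executes, instead of repeating the group lookup inside every admin action; that reuse is the usual reason authentication/authorization is wired as an interceptor.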

    Read the article

  • Layout: how to make an image change its width and height proportionally?

    - by Exterminator13
    I have such layout: <?xml version="1.0" encoding="utf-8"?> <RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android" xmlns:tools="http://schemas.android.com/tools" android:layout_width="wrap_content" android:layout_height="wrap_content" android:orientation="horizontal"> <TextView android:id="@+id/title" android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_alignParentLeft="true" android:layout_centerVertical="true" android:layout_toLeftOf="@+id/my_image" android:ellipsize="end" android:singleLine="true" android:text="Some text" android:textAppearance="?android:attr/textAppearanceMedium" /> <ImageView android:id="@+id/my_image" android:layout_width="wrap_content" android:layout_height="wrap_content" android:layout_alignTop="@+id/title" android:layout_alignBottom="@+id/title" android:layout_alignParentRight="true" android:layout_centerVertical="true" android:adjustViewBounds="true" android:src="@drawable/my_bitmap_image" /> This layout does almost what I need: it makes image view height the same as text view. The image graphic contents stretched also keeping aspect ratio. But, the width of the image view does not change! As a result, I have a wide gap between text and the image view! As a temporal solution, I override View#onLayout. The question: how to change image width in xml layout? UPDATE: This is a final layout I need (text + a few images). Look at the first image: its width should be exactly the same as scaled image in it with no paddings and margins:

    Read the article

  • how to find by date from timestamp column in JPA criteria

    - by Kre Toni
    I want to find a record by date. In entity and database table datatype is timestamp. I used Oracle database. @Entity public class Request implements Serializable { @Id private String id; @Version private long version; @Temporal(TemporalType.TIMESTAMP) @Column(name = "CREATION_DATE") private Date creationDate; public Request() { } public Request(String id, Date creationDate) { setId(id); setCreationDate(creationDate); } public String getId() { return id; } public void setId(String id) { this.id = id; } public long getVersion() { return version; } public void setVersion(long version) { this.version = version; } public Date getCreationDate() { return creationDate; } public void setCreationDate(Date creationDate) { this.creationDate = creationDate; } } in mian method public static void main(String[] args) { RequestTestCase requestTestCase = new RequestTestCase(); EntityManager em = Persistence.createEntityManagerFactory("Criteria").createEntityManager(); em.getTransaction().begin(); em.persist(new Request("005",new Date())); em.getTransaction().commit(); Query q = em.createQuery("SELECT r FROM Request r WHERE r.creationDate = :creationDate",Request.class); q.setParameter("creationDate",new GregorianCalendar(2012,12,5).getTime()); Request r = (Request)q.getSingleResult(); System.out.println(r.getCreationDate()); } in oracle database record is ID CREATION_DATE VERSION 006 05-DEC-12 05.34.39.200000 PM 1 Exception is Exception in thread "main" javax.persistence.NoResultException: getSingleResult() did not retrieve any entities. at org.eclipse.persistence.internal.jpa.EJBQueryImpl.throwNoResultException(EJBQueryImpl.java:1246) at org.eclipse.persistence.internal.jpa.EJBQueryImpl.getSingleResult(EJBQueryImpl.java:750) at com.ktrsn.RequestTestCase.main(RequestTestCase.java:29)
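
    A sketch of the usual portable workaround (not from the original question): a TIMESTAMP column keeps the time of day, so an equality test against a plain date will rarely match; querying the half-open range from midnight to the next midnight finds everything created on that day. (Note also that GregorianCalendar months are zero-based, so new GregorianCalendar(2012,12,5) is 5 January 2013, not 5 December 2012.)

    import java.util.Calendar;
    import java.util.Date;
    import javax.persistence.EntityManager;
    import javax.persistence.Query;
    import javax.persistence.TemporalType;

    // A sketch using the Request entity above: match the whole day [start, end).
    public static Request findByCreationDay(EntityManager em, Date day) {
        Calendar cal = Calendar.getInstance();
        cal.setTime(day);
        cal.set(Calendar.HOUR_OF_DAY, 0);
        cal.set(Calendar.MINUTE, 0);
        cal.set(Calendar.SECOND, 0);
        cal.set(Calendar.MILLISECOND, 0);
        Date start = cal.getTime();
        cal.add(Calendar.DAY_OF_MONTH, 1);
        Date end = cal.getTime();

        Query q = em.createQuery(
            "SELECT r FROM Request r"
          + " WHERE r.creationDate >= :start AND r.creationDate < :end");
        q.setParameter("start", start, TemporalType.TIMESTAMP);
        q.setParameter("end", end, TemporalType.TIMESTAMP);
        return (Request) q.getSingleResult();
    }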

    Read the article

  • How to synchronize two (or n) replication processes for SQL Server databases?

    - by Yauheni Sivukha
    There are two master databases and two read-only copies updated by standard transactional replication. I need to map some entity from both read-only databases; let's say database A contains orders and database B contains order lines. The problem is that replication to one database can lag behind replication to the other, so at the moment of mapping the read-only databases will have inconsistent data. For example: we stored two orders with lines at 19:00 and 19:03. The mapping process started at 19:05, but by that moment replication to database A had processed all changes up to 19:03, while replication to database B had processed only changes up to 19:00. After mapping we will have an order entity with the order as of 19:03 and lines as of 19:00. The troubles are guaranteed :) In my particular case both databases have a temporal model, so it is possible to fetch data for every time slice, but the problem is identifying the time of the latest replication. Question: How do I synchronize replication processes for several databases to avoid the situation described above? Or, in other words, how do I compare the last replication time in each database? UPD: The only way I see to synchronize is to continuously write timestamps into service tables in each database and to check these timestamps on the replicated servers. Is that an acceptable solution?
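
    A sketch of the service-table idea from the UPD (plain JDBC; the replication_heartbeat table and heartbeat_time column are hypothetical names): each master keeps writing a heartbeat timestamp that transactional replication carries across, the mapping job reads the newest heartbeat visible on each read-only copy, and the data is then mapped "as of" the older of the two, a point both copies are guaranteed to have reached.

    import java.sql.Connection;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import java.sql.Statement;
    import java.sql.Timestamp;

    public class ReplicationWatermark {

        // Returns the latest moment up to which BOTH read-only copies are complete.
        public static Timestamp consistentSnapshotTime(Connection replicaA, Connection replicaB)
                throws SQLException {
            Timestamp a = latestHeartbeat(replicaA);
            Timestamp b = latestHeartbeat(replicaB);
            return a.before(b) ? a : b;
        }

        private static Timestamp latestHeartbeat(Connection replica) throws SQLException {
            Statement st = replica.createStatement();
            try {
                ResultSet rs = st.executeQuery(
                    "SELECT MAX(heartbeat_time) FROM replication_heartbeat");
                rs.next();
                return rs.getTimestamp(1);
            } finally {
                st.close();
            }
        }
    }

    Since both databases have a temporal model, the mapping query can then be run against the time slice at that watermark, which sidesteps the lag between the two replication streams.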

    Read the article

  • Parallelism in .NET – Part 7, Some Differences between PLINQ and LINQ to Objects

    - by Reed
    In my previous post on Declarative Data Parallelism, I mentioned that PLINQ extends LINQ to Objects to support parallel operations. Although nearly all of the same operations are supported, there are some differences between PLINQ and LINQ to Objects. By introducing Parallelism to our declarative model, we add some extra complexity. This, in turn, adds some extra requirements that must be addressed. In order to illustrate the main differences, and why they exist, let’s begin by discussing some differences in how the two technologies operate, and look at the underlying types involved in LINQ to Objects and PLINQ. LINQ to Objects is mainly built upon a single class: Enumerable. The Enumerable class is a static class that defines a large set of extension methods, nearly all of which work upon an IEnumerable<T>. Many of these methods return a new IEnumerable<T>, allowing the methods to be chained together into a fluent style interface. This is what allows us to write statements that chain together, and lead to the nice declarative programming model of LINQ: double min = collection .Where(item => item.SomeProperty > 6 && item.SomeProperty < 24) .Min(item => item.PerformComputation()); Other LINQ variants work in a similar fashion. For example, most data-oriented LINQ providers are built upon an implementation of IQueryable<T>, which allows the database provider to turn a LINQ statement into an underlying SQL query, to be performed directly on the remote database. PLINQ is similar, but instead of being built upon the Enumerable class, most of PLINQ is built upon a new static class: ParallelEnumerable. When using PLINQ, you typically begin with any collection which implements IEnumerable<T>, and convert it to a new type using an extension method defined on ParallelEnumerable: AsParallel(). This method takes any IEnumerable<T>, and converts it into a ParallelQuery<T>, the core class for PLINQ. There is a similar ParallelQuery class for working with non-generic IEnumerable implementations. This brings us to our first subtle, but important difference between PLINQ and LINQ – PLINQ always works upon specific types, which must be explicitly created. Typically, the type you’ll use with PLINQ is ParallelQuery<T>, but it can sometimes be a ParallelQuery or an OrderedParallelQuery<T>. Instead of dealing with an interface, implemented by an unknown class, we’re dealing with a specific class type. This works seamlessly from a usage standpoint – ParallelQuery<T> implements IEnumerable<T>, so you can always “switch back” to an IEnumerable<T>. The difference only arises at the beginning of our parallelization. When we’re using LINQ, and we want to process a normal collection via PLINQ, we need to explicitly convert the collection into a ParallelQuery<T> by calling AsParallel().
There is an important consideration here – AsParallel() does not need to be called on your specific collection, but rather any IEnumerable<T>.  This allows you to place it anywhere in the chain of methods involved in a LINQ statement, not just at the beginning.  This can be useful if you have an operation which will not parallelize well or is not thread safe.  For example, the following is perfectly valid, and similar to our previous examples: double min = collection .AsParallel() .Select(item => item.SomeOperation()) .Where(item => item.SomeProperty > 6 && item.SomeProperty < 24) .Min(item => item.PerformComputation()); However, if SomeOperation() is not thread safe, we could just as easily do: double min = collection .Select(item => item.SomeOperation()) .AsParallel() .Where(item => item.SomeProperty > 6 && item.SomeProperty < 24) .Min(item => item.PerformComputation()); In this case, we’re using standard LINQ to Objects for the Select(…) method, then converting the results of that map routine to a ParallelQuery<T>, and processing our filter (the Where method) and our aggregation (the Min method) in parallel. PLINQ also provides us with a way to convert a ParallelQuery<T> back into a standard IEnumerable<T>, forcing sequential processing via standard LINQ to Objects.  If SomeOperation() was thread-safe, but PerformComputation() was not thread-safe, we would need to handle this by using the AsEnumerable() method: double min = collection .AsParallel() .Select(item => item.SomeOperation()) .Where(item => item.SomeProperty > 6 && item.SomeProperty < 24) .AsEnumerable() .Min(item => item.PerformComputation()); Here, we’re converting our collection into a ParallelQuery<T>, doing our map operation (the Select(…) method) and our filtering in parallel, then converting the collection back into a standard IEnumerable<T>, which causes our aggregation via Min() to be performed sequentially. This could also be written as two statements, as well, which would allow us to use the language integrated syntax for the first portion: var tempCollection = from item in collection.AsParallel() let e = item.SomeOperation() where (e.SomeProperty > 6 && e.SomeProperty < 24) select e; double min = tempCollection.AsEnumerable().Min(item => item.PerformComputation()); This allows us to use the standard LINQ style language integrated query syntax, but control whether it’s performed in parallel or serial by adding AsParallel() and AsEnumerable() appropriately. The second important difference between PLINQ and LINQ deals with order preservation.  PLINQ, by default, does not preserve the order of of source collection. This is by design.  In order to process a collection in parallel, the system needs to naturally deal with multiple elements at the same time.  Maintaining the original ordering of the sequence adds overhead, which is, in many cases, unnecessary.  Therefore, by default, the system is allowed to completely change the order of your sequence during processing.  If you are doing a standard query operation, this is usually not an issue.  However, there are times when keeping a specific ordering in place is important.  If this is required, you can explicitly request the ordering be preserved throughout all operations done on a ParallelQuery<T> by using the AsOrdered() extension method.  This will cause our sequence ordering to be preserved. For example, suppose we wanted to take a collection, perform an expensive operation which converts it to a new type, and display the first 100 elements.  
In LINQ to Objects, our code might look something like: // Using IEnumerable<SourceClass> collection IEnumerable<ResultClass> results = collection .Select(e => e.CreateResult()) .Take(100); If we just converted this to a parallel query naively, like so: IEnumerable<ResultClass> results = collection .AsParallel() .Select(e => e.CreateResult()) .Take(100); We could very easily get a very different, and non-reproducable, set of results, since the ordering of elements in the input collection is not preserved.  To get the same results as our original query, we need to use: IEnumerable<ResultClass> results = collection .AsParallel() .AsOrdered() .Select(e => e.CreateResult()) .Take(100); This requests that PLINQ process our sequence in a way that verifies that our resulting collection is ordered as if it were processed serially.  This will cause our query to run slower, since there is overhead involved in maintaining the ordering.  However, in this case, it is required, since the ordering is required for correctness. PLINQ is incredibly useful.  It allows us to easily take nearly any LINQ to Objects query and run it in parallel, using the same methods and syntax we’ve used previously.  There are some important differences in operation that must be considered, however – it is not a free pass to parallelize everything.  When using PLINQ in order to parallelize your routines declaratively, the same guideline I mentioned before still applies: Parallelization is something that should be handled with care and forethought, added by design, and not just introduced casually.

    Read the article

  • Entity Association Mapping with Code First Part 1 : Mapping Complex Types

    - by mortezam
    Last week the CTP5 build of the new Entity Framework Code First has been released by data team at Microsoft. Entity Framework Code-First provides a pretty powerful code-centric way to work with the databases. When it comes to associations, it brings ultimate flexibility. I’m a big fan of the EF Code First approach and am planning to explain association mapping with code first in a series of blog posts and this one is dedicated to Complex Types. If you are new to Code First approach, you can find a great walkthrough here. In order to build a solid foundation for our discussion, we will start by learning about some of the core concepts around the relationship mapping.   What is Mapping?Mapping is the act of determining how objects and their relationships are persisted in permanent data storage, in our case, relational databases. What is Relationship mapping?A mapping that describes how to persist a relationship (association, aggregation, or composition) between two or more objects. Types of RelationshipsThere are two categories of object relationships that we need to be concerned with when mapping associations. The first category is based on multiplicity and it includes three types: One-to-one relationships: This is a relationship where the maximums of each of its multiplicities is one. One-to-many relationships: Also known as a many-to-one relationship, this occurs when the maximum of one multiplicity is one and the other is greater than one. Many-to-many relationships: This is a relationship where the maximum of both multiplicities is greater than one. The second category is based on directionality and it contains two types: Uni-directional relationships: when an object knows about the object(s) it is related to but the other object(s) do not know of the original object. To put this in EF terminology, when a navigation property exists only on one of the association ends and not on the both. Bi-directional relationships: When the objects on both end of the relationship know of each other (i.e. a navigation property defined on both ends). How Object Relationships Are Implemented in POCO domain models?When the multiplicity is one (e.g. 0..1 or 1) the relationship is implemented by defining a navigation property that reference the other object (e.g. an Address property on User class). When the multiplicity is many (e.g. 0..*, 1..*) the relationship is implemented via an ICollection of the type of other object. How Relational Database Relationships Are Implemented? Relationships in relational databases are maintained through the use of Foreign Keys. A foreign key is a data attribute(s) that appears in one table and must be the primary key or other candidate key in another table. With a one-to-one relationship the foreign key needs to be implemented by one of the tables. To implement a one-to-many relationship we implement a foreign key from the “one table” to the “many table”. We could also choose to implement a one-to-many relationship via an associative table (aka Join table), effectively making it a many-to-many relationship. Introducing the ModelNow, let's review the model that we are going to use in order to implement Complex Type with Code First. It's a simple object model which consist of two classes: User and Address. Each user could have one billing address. The Address information of a User is modeled as a separate class as you can see in the UML model below: In object-modeling terms, this association is a kind of aggregation—a part-of relationship. 
Aggregation is a strong form of association; it has some additional semantics with regard to the lifecycle of objects. In this case, we have an even stronger form, composition, where the lifecycle of the part is fully dependent upon the lifecycle of the whole. Fine-grained domain models The motivation behind this design was to achieve Fine-grained domain models. In crude terms, fine-grained means “more classes than tables”. For example, a user may have both a billing address and a home address. In the database, you may have a single User table with the columns BillingStreet, BillingCity, and BillingPostalCode along with HomeStreet, HomeCity, and HomePostalCode. There are good reasons to use this somewhat denormalized relational model (performance, for one). In our object model, we can use the same approach, representing the two addresses as six string-valued properties of the User class. But it’s much better to model this using an Address class, where User has the BillingAddress and HomeAddress properties. This object model achieves improved cohesion and greater code reuse and is more understandable. Complex Types: Splitting a Table Across Multiple Types Back to our model, there is no difference between this composition and other weaker styles of association when it comes to the actual C# implementation. But in the context of ORM, there is a big difference: A composed class is often a candidate Complex Type. But C# has no concept of composition—a class or property can’t be marked as a composition. The only difference is the object identifier: a complex type has no individual identity (i.e. no AddressId defined on Address class) which make sense because when it comes to the database everything is going to be saved into one single table. How to implement a Complex Types with Code First Code First has a concept of Complex Type Discovery that works based on a set of Conventions. The convention is that if Code First discovers a class where a primary key cannot be inferred, and no primary key is registered through Data Annotations or the fluent API, then the type will be automatically registered as a complex type. Complex type detection also requires that the type does not have properties that reference entity types (i.e. all the properties must be scalar types) and is not referenced from a collection property on another type. Here is the implementation: public class User{    public int UserId { get; set; }    public string FirstName { get; set; }    public string LastName { get; set; }    public string Username { get; set; }    public Address Address { get; set; }} public class Address {     public string Street { get; set; }     public string City { get; set; }            public string PostalCode { get; set; }        }public class EntityMappingContext : DbContext {     public DbSet<User> Users { get; set; }        } With code first, this is all of the code we need to write to create a complex type, we do not need to configure any additional database schema mapping information through Data Annotations or the fluent API. Database SchemaThe mapping result for this object model is as follows: Limitations of this mappingThere are two important limitations to classes mapped as Complex Types: Shared references is not possible: The Address Complex Type doesn’t have its own database identity (primary key) and so can’t be referred to by any object other than the containing instance of User (e.g. a Shipping class that also needs to reference the same User Address). 
No elegant way to represent a null reference There is no elegant way to represent a null reference to an Address. When reading from database, EF Code First always initialize Address object even if values in all mapped columns of the complex type are null. This means that if you store a complex type object with all null property values, EF Code First returns a initialized complex type when the owning entity object is retrieved from the database. SummaryIn this post we learned about fine-grained domain models which complex type is just one example of it. Fine-grained is fully supported by EF Code First and is known as the most important requirement for a rich domain model. Complex type is usually the simplest way to represent one-to-one relationships and because the lifecycle is almost always dependent in such a case, it’s either an aggregation or a composition in UML. In the next posts we will revisit the same domain model and will learn about other ways to map a one-to-one association that does not have the limitations of the complex types. References ADO.NET team blog Mapping Objects to Relational Databases Java Persistence with Hibernate
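
    Since the article's own references include Java Persistence with Hibernate, a compact counterpart in JPA terms may help map the idea across stacks (a sketch, not part of the original article): the closest JPA analogue of a Code First complex type is an embeddable, which likewise has no identity of its own and whose columns live in the owning entity's table.

    import javax.persistence.Embeddable;
    import javax.persistence.Embedded;
    import javax.persistence.Entity;
    import javax.persistence.GeneratedValue;
    import javax.persistence.Id;

    // Address has no @Id: like an EF complex type, it cannot be shared or referenced
    // on its own, and its columns are folded into the User table.
    @Embeddable
    class Address {
        private String street;
        private String city;
        private String postalCode;
    }

    @Entity
    class User {
        @Id @GeneratedValue
        private Long userId;
        private String firstName;
        private String lastName;
        private String username;

        @Embedded
        private Address address;
    }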

    Read the article

  • SQL SERVER – Weekly Series – Memory Lane – #033

    - by Pinal Dave
    Here is the list of selected articles of SQLAuthority.com across all these years. Instead of just listing all the articles I have selected a few of my most favorite articles and have listed them here with additional notes below it. Let me know which one of the following is your favorite article from memory lane. 2007 Spatial Database Definition and Research Documents Here is the definition from Wikipedia about spatial database : A spatial database is a database that is optimized to store and query data related to objects in space, including points, lines and polygons. While typical databases can understand various numeric and character types of data, additional functionality needs to be added for databases to process spatial data types. Select Only Date Part From DateTime – Best Practice A very common question which I receive is how to only get Date or Time part from datetime value. In this blog post I explain the same in very simple words. T-SQL Paging Query Technique Comparison (OVER and ROW_NUMBER()) – CTE vs. Derived Table I have received few emails and comments about my post SQL SERVER – T-SQL Paging Query Technique Comparison – SQL 2000 vs SQL 2005. The main question was is this can be done using CTE? Absolutely! What about Performance? It is identical! Please refer above mentioned article for the history of paging. SQL SERVER – Cannot resolve collation conflict for equal to operation One of the very first error I ever encountered in my career was to resolve this conflict. I have blogged about it and I have realized that many others like me who are facing this error. LEN and DATALENGTH of NULL Simple Example Here is the question for you what is the LEN of NULL value? Well it is very easy – just read the blog. Recovery Models and Selection Very simple and easy explanation of the Database Backup Recovery Model and how to select the best option for you. Explanation SQL SERVER Hash Join Hash join gives best performance when two more join tables are joined and at-least one of them have no index or is not sorted. It is also expected that smaller of the either of table can be read in memory completely (though not necessary). Easy Sequence of SELECT FROM JOIN WHERE GROUP BY HAVING ORDER BY SELECT yourcolumns FROM tablenames JOIN tablenames WHERE condition GROUP BY yourcolumns HAVING aggregatecolumn condition ORDER BY yourcolumns NorthWind Database or AdventureWorks Database – Samples Databases In this blog post we learn how to install Northwind database. I also shared the source where one can download this database as that is used in many examples on MSDN help files. sp_HelpText for sp_HelpText – Puzzle A simple quick puzzle – do you know the answer of it? If not, go ahead and read the blog. 2008 SQL SERVER – 2008 – Step By Step Installation Guide With Images When SQL Server 2008 was newly introduced lots of people had no clue how to install SQL Server 2008 and the amount of the question which I used to receive were so much. I wrote this blog post with the spirit that this will help all the newbies to install SQL Server 2008 with the help of images. Still today this blog post has been bible for all of the people who are confused with SQL Server installation. Inline Variable Assignment I loved this feature. I have always wanted this feature to be present in SQL Server. The last time when I met developers from Microsoft SQL Server, I had talked about this feature. I think this feature saves some time but make the code more readable. 
Introduction to Policy Management – Enforcing Rules on SQL Server If our company policy is to create all the Stored Procedure with prefix ‘usp’ that developers should be just prevented to create Stored Procedure with any other prefix. Let us see a small tutorial how to create conditions and policy which will prevent any future SP to be created with any other prefix. 2009 Performance Counters from System Views – By Kevin Mckenna Many of you are not aware of this fact that access to performance information is readily available in SQL Server and that too without querying performance counters using a custom application or via perfmon. Till now, this fact has remained undisclosed but through this post I would like to explain you can easily access SQL Server performance counter information. Without putting much effort you will come across the system viewsys.dm_os_performance_counters. As the name suggests, this provides you easy access to the SQL Server performance counter information that is passed on to perfmon, but you can get at it via tsql. Customize Toolbar – Remove Debug Button from Toolbar I was fond of SQL Server Debugger feature in SQL Server 2000. To my utter disappointment, this feature was withdrawn from SQL Server 2005. The button of the debugger is similar to a play button and is used to run debugging commands of Visual Studio. Because of this reason, it gets very much infuriating for developers when they are developing on both – Visual Studio and SSMS. Let us now see how we can remove debugging button from SQL Server Management Studio. Effect of Normalization on Index and Performance A very interesting conversation which started from twitter. If you want to read one link this is the link I encourage you to read it. SSMS Feature – Multi-server Queries Using SQL Server Management Studio (SSMS) DBAs can now query multiple servers from one window. It is quite common for DBAs with large amount of servers to maintain and gather information from multiple SQL Servers and create report. This feature is a blessing for the DBAs, as they can now assemble all the information instantaneously without going anywhere. Query Optimizer Hint ROBUST PLAN – Question to You “ROBUST PLAN” is a kind of query hint which works quite differently than other hints. It does not improve join or force any indexes to use; it just makes sure that a query does not crash due to over the limit size of row. Let me elaborate upon it in the blog post. 2010 Do you really know the difference between various date functions available in SQL Server 2012? Here is a three part story where we explored the same with examples: Fastest Way to Restore the Database Difference Between DATETIME and DATETIME2 Difference Between DATETIME and DATETIME2 – WITH GETDATE Shrinking NDF and MDF Files – Readers’ Opinion Shrinking Database always creates performance degradation and increases fragmentation in the database. I suggest that you keep that in mind before you start reading the following comment. If you are going to say Shrinking Database is bad and evil, here I am saying it first and loud. Now, the comment of Imran is written while keeping in mind only the process showing how the Shrinking Database Operation works. Imran has already explained his understanding and requests further explanation. I have removed the Best Practices section from Imran’s comments, as there are a few corrections. 
    2011 Solution – Puzzle – SELECT * vs SELECT COUNT(*) This is a very interesting question, and I am very confident that not everyone knows the answer to it. Let me ask you again – which will be faster, SELECT * or SELECT COUNT(*), or do you think this is an apples-and-oranges comparison? 2012 Service Broker and CAP_CPU_PERCENT – Limiting SQL Server Instances to CPU Usage In SQL Server 2012 there are a few enhancements with regards to SQL Server Resource Governor. One of the enhancements is how the resources are allocated. Let me explain with examples. Let us understand the entire discussion with the help of three different examples. Finding Size of a Columnstore Index Using DMVs One very common question I often see is the need for a list of columnstore indexes along with their size and corresponding table name. I quickly re-wrote a script using the DMVs sys.indexes and sys.dm_db_partition_stats. This script gives the size of the columnstore index on disk only. I am sure there will be more advanced scripts to retrieve details related to the components associated with a columnstore index. However, I believe the following script is sufficient to start getting an idea of columnstore index size. Developer Training Resources and Summary Roundup Developer Training - Importance and Significance - Part 1 In this part we discussed the importance of training in the real world. The most important and valuable resource any company has is its employees. Employees who have been well trained will be better at their jobs and produce a better product. An employee who is well trained obviously knows more about their job and all the technical aspects. I have a very high opinion of training employees, and I consider it the most important task. Developer Training – Employee Morals and Ethics – Part 2 In this part we discussed the most crucial components of training. Often employees expect the company to pay for their training while the company expresses no interest in training the employee. Quite often training expenses are the real issue for both the employee and the employer. Developer Training – Difficult Questions and Alternative Perspective - Part 3 This part was the most difficult to write as I tried to address a few difficult questions and answers. Training is such a sensitive issue that many developers, when not given a chance for training, think about leaving the organization. Developer Training – Various Options for Developer Training – Part 4 In this part I tried to explore a few methods and options for training. The general feedback I received on this blog post was that it was short and that I should have explored each of the training subjects in detail. I believe there are two big buckets of training: 1) instructor-led training and 2) self-led training. Developer Training – A Conclusive Summary - Part 5 There is no better motivation than a personal desire to learn new technology. Honestly, there is nothing more personal than learning. That “change is the only constant” and “adapt & overcome” are the essential lessons of life. One cannot stop learning and resist change. In the IT industry the “ego of knowing it all” and the “resistance to change” are the most challenging issues. A Quick Look at Logging and Ideas around Logging Question: What is the first thing that comes to your mind when you hear the word “Logging”? Strangely enough, I got a different answer every single time. Let me just list the answers I got from my friends. Let us go over them one by one. 
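    Relating to the columnstore index size entry above, here is a minimal sketch of the kind of DMV query being described. It is not the original script; the index type filter is an assumption, and the arithmetic simply uses the standard 8 KB page size:
    SELECT OBJECT_NAME(i.object_id) AS TableName,
           i.name AS IndexName,
           SUM(p.used_page_count) * 8 / 1024.0 AS SizeInMB   -- pages are 8 KB each
    FROM sys.indexes AS i
    JOIN sys.dm_db_partition_stats AS p
      ON p.object_id = i.object_id AND p.index_id = i.index_id
    WHERE i.type = 6        -- 6 = nonclustered columnstore in SQL Server 2012
    GROUP BY i.object_id, i.name;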
    Beginning Performance Tuning with SQL Server Execution Plan Solution of Puzzle – Swap Value of Column Without Case Statement Earlier this week I asked how to swap the values of a column without using a CASE statement. Read here: SQL SERVER – A Puzzle – Swap Value of Column Without Case Statement. I proposed 3 different solutions in the blog post itself. I had requested the help of the community to come up with alternate solutions, and honestly I am stunned and amazed by the quality of the entries. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • New Enhancements for InnoDB Memcached

    - by Calvin Sun
    In MySQL 5.6, we continued our development on InnoDB Memcached and completed a few widely desired features that make InnoDB Memcached competitive in more scenarios. Notably, they are: 1) support for multiple table mapping, 2) a background thread to auto-commit long running transactions, and 3) enhancements in binlog performance. Let’s go over each of these features one by one. And in the last section, we will go over a couple of internally performed performance tests. Support multiple table mapping In our earlier release, all InnoDB Memcached operations were mapped to a single InnoDB table. In real life, users might want to use the InnoDB Memcached feature on different tables. Thus, being able to access different tables at run time, and having different mappings for different connections, becomes a very desirable feature. In this GA release, we allow the user to do both. We will discuss the key concepts and key steps in using this feature. 1) "mapping name" in the "get" and "set" commands In order to allow InnoDB Memcached to map to a new table, the user (DBA) is still required to "pre-register" the table(s) in the InnoDB Memcached “containers” table (there is a security consideration behind this requirement). If you would like to know more about the “containers” table, please refer to my earlier blogs on blogs.innodb.com. Once registered, InnoDB Memcached will be able to look up such tables when they are referred to. Each registered table will have a unique "registration name" (or mapping_name) corresponding to the “name” field in the “containers” table. To access these tables, the user includes this "registration name" in their get or set commands, in the form of "get @@new_mapping_name.key"; the prefix "@@" is required to signal a mapped table change. The key and the "mapping name" are separated by a configurable delimiter; by default, it is ".". So the syntax is: get [@@mapping_name.]key_name set [@@mapping_name.]key_name  or  get @@mapping_name set @@mapping_name Here is an example. Let's set up three tables in the "containers" table: The first is a mapping to the InnoDB table "test/demo_test" with mapping name "setup_1": INSERT INTO containers VALUES ("setup_1", "test", "demo_test", "c1", "c2", "c3", "c4", "c5", "PRIMARY");  Similarly, we set up table mappings for table "test/new_demo" with name "setup_2" and for table "my_database/my_demo" with name "setup_3": INSERT INTO containers VALUES ("setup_2", "test", "new_demo", "c1", "c2", "c3", "c4", "c5", "secondary_index_x"); INSERT INTO containers VALUES ("setup_3", "my_database", "my_demo", "c1", "c2", "c3", "c4", "c5", "idx"); To switch to table "my_database/my_demo" and get the value corresponding to “key_a”, the user will do: get @@setup_3.key_a (this will also output the value corresponding to the key "key_a"), or simply: get @@setup_3 Once this is done, the connection stays switched to the "my_database/my_demo" table until another table mapping switch is requested, so it can continue issuing regular commands like: get key_b  set key_c 0 0 7 These DMLs will all be directed to the "my_database/my_demo" table. This also implies that different connections can have different bindings (to different tables). 2) Delimiter: For the delimiter "." 
    that separates the "mapping name" and the key value, we also added a configuration option in the "config_options" system table with the name "table_map_delimiter": INSERT INTO config_options VALUES("table_map_delimiter", "."); So if the user wants to change to a different delimiter, they can change it in the config_options table. 3) Default mapping: Once we have multiple table mappings, there should always be a "default" map setting. For this, we decided that if there exists a mapping name of "default", it will be chosen as the default mapping. Otherwise, the first row of the containers table will be chosen as the default setting. Please note, user tables can be repeated in the "containers" table (for example, when the user wants to access different columns of the table in different settings), as long as they use different mapping/configuration names in the first column, which is enforced by a unique index. 4) bind command In addition, we also extended the protocol and added a bind command; its usage is fairly straightforward. To switch to the "setup_3" mapping above, you simply issue: bind setup_3 This will switch this connection's InnoDB table to "my_database/my_demo". In summary, with this feature, you can now access different tables from different sessions, and even within a single connection you can query different tables. Background thread to auto-commit long running transactions This is a feature related to the “batch” concept we discussed in earlier blogs. The “batch” feature allows us to batch the read and write operations, and commit them only after a certain number of calls. The “batch” size is controlled by the configuration parameters “daemon_memcached_w_batch_size” and “daemon_memcached_r_batch_size”. This can significantly boost performance. However, it also comes with some disadvantages; for example, you will not be able to view “uncommitted” operations from the SQL end unless you set the transaction isolation level to read_uncommitted, and in addition, it will hold certain row locks for an extended period of time, which might reduce concurrency. To deal with this, we introduced a background thread that “auto-commits” transactions if they are idle for a certain amount of time (default is 5 seconds). The background thread wakes up every second, loops through every “connection” opened by Memcached, and checks for idle transactions. If such a transaction is idle longer than a certain limit and is not being used, it will commit that transaction. This limit is configurable by changing “innodb_api_bk_commit_interval”. Its default value is 5 seconds, the minimum is 1 second, and the maximum is 1073741824 seconds. With the help of this background thread, you will not need to worry about long running uncommitted transactions when you set daemon_memcached_w_batch_size and daemon_memcached_r_batch_size to a large number. This also reduces the number of locks that could be held due to long running transactions, and thus further increases concurrency. Enhancement in binlog performance As you might all know, binlog operations are not done by the InnoDB storage engine; rather they are handled in the MySQL layer. In order to support binlog operations through InnoDB Memcached, we had to artificially create some MySQL constructs in order to access the binlog handler APIs. In the previous lab release, for simplicity, we opened and destroyed these MySQL constructs (such as THD) for each operation. 
    This required us to always set the “batch” size to 1 when binlog is on, no matter what “daemon_memcached_w_batch_size” and “daemon_memcached_r_batch_size” are configured to. This put a big restriction on our ability to scale, and there is also quite a bit of overhead in creating and destroying such constructs that bogs performance down. With this release, we made the necessary changes to keep the MySQL constructs as long as they are valid for a particular connection, so there are no repeated and redundant open and close (table) calls. Now, even with the binlog option enabled (with innodb_api_enable_binlog), we can still batch the transactions with daemon_memcached_w_batch_size and daemon_memcached_r_batch_size, and thus scale the write/read performance. Although there are still overheads that prevent InnoDB Memcached from performing as fast as when binlog is turned off, it is much better compared to the previous release, and we are continuing to optimize this area to improve the performance as much as possible. Performance Study: Amerandra of our System QA team conducted some performance studies on queries through our InnoDB Memcached connection and through the plain SQL end, and the results are interesting. The test was conducted on a “Linux 2.6.32-300.7.1.el6uek.x86_64 ix86 (64)” machine with 16 GB memory, Intel Xeon 2.0 GHz CPU X86_64 (2 CPUs, 4 cores each), and 2 RAID disks (1027 GB, 733.9 GB). The results are described in the following tables:
    Table 1: Performance comparison on Set operations (QPS)
      Connections | 5.6.7-RC Memcached plugin, memcached-threads=8*** | 5.6.7-RC* Set** | X faster
      8           | 30,000                                            | 5,600           | 5.36
      32          | 59,000                                            | 13,000          | 4.54
      128         | 68,000                                            | 8,000           | 8.50
      512         | 63,000                                            | 6,800           | 9.23
    * mysql-5.6.7-rc-linux2.6-x86_64 ** The “set” operation, when implemented in InnoDB Memcached, involves a couple of DMLs: it first queries the table to see whether the “key” exists; if it does not, the new key/value pair is inserted, and if it does exist, the “value” field of the matching row (by key) is updated. So when used in the above comparison, it is a precompiled stored procedure, and the query just executes that procedure. *** added “–daemon_memcached_option=-t8” (default is 4 threads) So we can see that with this “set” query, InnoDB Memcached can run 4.5 to 9 times faster than the MySQL server.
    Table 2: Performance comparison on Get operations (QPS)
      Connections | 5.6.7-RC Memcached plugin, memcached-threads=8 | 5.6.7-RC* Get | X faster
      8           | 42,000                                         | 27,000        | 1.56
      32          | 101,000                                        | 55,000        | 1.83
      128         | 117,000                                        | 52,000        | 2.25
      512         | 109,000                                        | 52,000        | 2.10
    With the “get” query (or the select query), memcached performs 1.5 to 2 times faster than normal SQL. Summary: In summary, we added several much-desired features to InnoDB Memcached in this release, allowing users to operate on different tables through the Memcached interface. We also now provide a background commit thread to commit long running idle transactions, thus allowing users to configure large batch writes/reads without worrying about a large number of rows being held or not being able to see (uncommitted) data. We also greatly enhanced the performance when binlog is enabled. We will continue making efforts in both performance enhancement and functionality areas to make InnoDB Memcached a good demo case for our InnoDB APIs. Jimmy Yang, September 29, 2012
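    A minimal end-to-end sketch of the multiple table mapping workflow described above (the schema, table, and key names are illustrative only, and the memcached commands appear as comments because they are issued through a memcached client against the plugin's port, 11211 by default, rather than through SQL):
    -- Register a new mapping from the SQL side; in 5.6 the containers table lives in the innodb_memcache schema
    INSERT INTO innodb_memcache.containers
    VALUES ("setup_orders", "shop", "orders_kv", "c1", "c2", "c3", "c4", "c5", "PRIMARY");
    -- Then, from a memcached client connected to the plugin:
    --   bind setup_orders                 switch this connection to shop/orders_kv
    --   set order_100 0 0 5               store a 5-byte value under key "order_100"
    --   hello
    --   get @@setup_orders.order_100      switch (again) and read the value back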

    Read the article

  • Building Queries Systematically

    - by Jeremy Smyth
    The SQL language is a bit like a toolkit for data. It consists of lots of little fiddly bits of syntax that, taken together, allow you to build complex edifices and return powerful results. For the uninitiated, the many tools can be quite confusing, and it's sometimes difficult to decide how to go about the process of building non-trivial queries, that is, queries that are more than a simple SELECT a, b FROM c; A System for Building Queries When you're building queries, you could use a system like the following:  Decide which fields contain the values you want to use in your output, and how you wish to alias those fields: Values you want to see in your output. Values you want to use in calculations. For example, to calculate margin on a product, you could calculate price - cost and give it the alias margin. Values you want to filter with. For example, you might only want to see products that weigh more than 2Kg or that are blue. The weight or colour columns could contain that information. Values you want to order by. For example you might want the most expensive products first, and the least last. You could use the price column in descending order to achieve that. Assuming the fields you've picked in point 1 are in multiple tables, find the connections between those tables. Look for relationships between tables and identify the columns that implement those relationships. For example, the Orders table could have a CustomerID field referencing the same column in the Customers table. Sometimes the problem doesn't use relationships but rests on a different field; sometimes the query is looking for a coincidence of fact rather than a foreign key constraint. For example you might have sales representatives who live in the same state as a customer; this information is normally not used in relationships, but if your query is for organizing events where sales representatives meet customers, it's useful in that query. In such a case you would record the names of columns at either end of such a connection. Sometimes relationships require a bridge, a junction table that wasn't identified in point 1 above but is needed to connect tables you need; these are used in "many-to-many relationships". In these cases you need to record the columns in each table that connect to similar columns in other tables. Construct a join or series of joins using the fields and tables identified in point 2 above. This becomes your FROM clause. Filter using some of the fields in point 1 above. This becomes your WHERE clause. Construct an ORDER BY clause using values from point 1 above that are relevant to the desired order of the output rows. Project the result using the remainder of the fields in point 1 above. This becomes your SELECT clause. A Worked Example  Let's say you want to query the world database to find a list of countries (with their capitals) and the change in GNP, using the difference between the GNP and GNPOld columns, and that you only want to see results for countries with a population greater than 100,000,000. Using the system described above, we could do the following:  The Country.Name and City.Name columns contain the name of the country and city respectively.  The change in GNP comes from the calculation GNP - GNPOld. Both those columns are in the Country table. This calculation is also used to order the output, in descending order. To see only countries with a population greater than 100,000,000, you need the Population field of the Country table. 
    There is also a Population field in the City table, so you'll need to specify the table name to disambiguate. You can also represent a number like 100 million as 100e6 instead of 100000000 to make it easier to read. Because the fields come from the Country and City tables, you'll need to join them. There are two relationships between these tables: Each city is hosted within a country, and the city's CountryCode column identifies that country. Also, each country has a capital city, whose ID is contained within the country's Capital column. This latter relationship is the one to use, so the relevant columns and the condition that uses them are represented by the following FROM clause:  FROM Country JOIN City ON Country.Capital = City.ID The statement should only return countries with a population greater than 100,000,000. Country.Population is the relevant column, so the WHERE clause becomes:  WHERE Country.Population > 100e6  To sort the result set in reverse order of difference in GNP, you could use either the calculation, or the position in the output (it's the third column): ORDER BY GNP - GNPOld or ORDER BY 3 Finally, project the columns you wish to see by constructing the SELECT clause: SELECT Country.Name AS Country, City.Name AS Capital,        GNP - GNPOld AS `Difference in GNP`  The whole statement ends up looking like this:
    mysql> SELECT Country.Name AS Country, City.Name AS Capital,
        ->        GNP - GNPOld AS `Difference in GNP`
        -> FROM Country JOIN City ON Country.Capital = City.ID
        -> WHERE Country.Population > 100e6
        -> ORDER BY 3 DESC;
    +--------------------+------------+-------------------+
    | Country            | Capital    | Difference in GNP |
    +--------------------+------------+-------------------+
    | United States      | Washington |         399800.00 |
    | China              | Peking     |          64549.00 |
    | India              | New Delhi  |          16542.00 |
    | Nigeria            | Abuja      |           7084.00 |
    | Pakistan           | Islamabad  |           2740.00 |
    | Bangladesh         | Dhaka      |            886.00 |
    | Brazil             | Brasília   |         -27369.00 |
    | Indonesia          | Jakarta    |        -130020.00 |
    | Russian Federation | Moscow     |        -166381.00 |
    | Japan              | Tokyo      |        -405596.00 |
    +--------------------+------------+-------------------+
    10 rows in set (0.00 sec)
    Queries with Aggregates and GROUP BY While this system might work well for many queries, it doesn't cater for situations where you have complex summaries and aggregation. For aggregation, you'd start with choosing which columns to view in the output, but this time you'd construct them as aggregate expressions. For example, you could look at the average population, or the count of distinct regions. You could also perform more complex aggregations, such as the average of GNP per head of population calculated as AVG(GNP/Population). Having chosen the values to appear in the output, you must choose how to aggregate those values. A useful way to think about this is that every aggregate query is of the form X, Y per Z. The SELECT clause contains the expressions for X and Y, as already described, and Z becomes your GROUP BY clause. Ordinarily you would also include Z in the query so you see how you are grouping, so the output becomes Z, X, Y per Z.  
    As an example, consider the following, which shows a count of countries and the average population per continent:
    mysql> SELECT Continent, COUNT(Name), AVG(Population)
        -> FROM Country
        -> GROUP BY Continent;
    +---------------+-------------+-----------------+
    | Continent     | COUNT(Name) | AVG(Population) |
    +---------------+-------------+-----------------+
    | Asia          |          51 |   72647562.7451 |
    | Europe        |          46 |   15871186.9565 |
    | North America |          37 |   13053864.8649 |
    | Africa        |          58 |   13525431.0345 |
    | Oceania       |          28 |    1085755.3571 |
    | Antarctica    |           5 |          0.0000 |
    | South America |          14 |   24698571.4286 |
    +---------------+-------------+-----------------+
    7 rows in set (0.00 sec)
    In this case, X is the number of countries, Y is the average population, and Z is the continent. Of course, you could have more fields in the SELECT clause, and more fields in the GROUP BY clause as you require. You would also normally alias columns to make the output more suited to your requirements. More Complex Queries  Queries can get considerably more interesting than this. You could also add joins and other expressions to your aggregate query, as in the earlier part of this post. You could have more complex conditions in the WHERE clause. Similarly, you could use queries such as these in subqueries of yet more complex super-queries. Each technique becomes another tool in your toolbox, until before you know it you're writing queries across 15 tables that take two pages to write out. But that's for another day...
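    To give a flavour of those more complex queries, here is a sketch against the same world database (output not shown) that joins, filters, aggregates, and then filters the groups themselves with HAVING; the thresholds are arbitrary:
    mysql> SELECT Country.Continent,
        ->        COUNT(DISTINCT Country.Code) AS Countries,
        ->        AVG(City.Population) AS AvgCityPopulation
        -> FROM Country JOIN City ON City.CountryCode = Country.Code
        -> WHERE Country.Population > 1e6
        -> GROUP BY Country.Continent
        -> HAVING AVG(City.Population) > 250000
        -> ORDER BY AvgCityPopulation DESC;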

    Read the article

  • Backing Up vs. Redundancy

    - by TK Kocheran
    I'm currently in stage 2 of 3 of building my home workstation. What this means is that my RAID-0 array of solid state disks will be backed up nightly to a RAID-5 or RAID-6 array of traditional spinning hard disks. However, it recently dawned on me that redundancy is not backup. The main reason for setting up a RAID array with redundancy was to protect myself in the event of a drive failure to serve as an effective backup solution. Wait. What if a bolt of lightning finds a way to travel into my house, through my surge-protector, into my power supply and physically destroys all of my hard disks and SSDs? Well, in that case, I guess I'd be fine because I generally keep most important files (music, pictures, videos) stored in multiple places like on my laptop, my wife's laptop, and an encrypted USB hard drive. Wait. What if a giant hedgehog meteor attacks my house from space traveling at mach 3 and all machines and hard disks are blown to smithereens. Well, I guess I could find a way to do ridiculously slow and cumbersome rsyncs or backups to Amazon's Glacier. Wait. What if there's a nuclear apocalypse... and at this point I start laughing hysterically. At what point does backing up become irrelevant? I completely understand situation one (mechanical drive failure), situation two (workstation compromised or destroyed somehow), possibly even situation three (all machines and disks destroyed), but situation four? There's no questioning the need for backups. None. However, there are three questions I'd really like addressed: To what level should one backup? I definitely understand the merits of physical disk redundancy. I also believe in keeping important files on multiple machines and thinning out the possibility of losing all of my files. Online backups make sense, but they beg the following question. What should I be backing up remotely and how often? It's no problem storage-wise to back up important files (music, pictures, videos) and even configuration and temporal data for all of the machines in my network (all Linux based)... albeit locally. Transferring to the cloud is another story. Worst-case scenario, if I lost all of my configuration for my individual computers, the reality is that I probably lost the machines too. The cloud is a long way away from here; I can run backups over CAT-6 here and see 100MB/s easily, but I'm afraid that I'm only going to see 2MB/s at best when transferring up to the cloud.

    Read the article

  • cannot make ubuntu 64-bit v12.04 install work

    - by honestann
    I decided it was time to update my ubuntu (single boot) computer from 64-bit v10.04 to 64-bit v12.04. Unfortunately, for some reason (or reasons) I just can't make it work. Note that I am attempting a fresh install of 64-bit v12.04 onto a new 3TB hard disk, not an upgrade of the 1TB hard disk that contains my working 64-bit v10.04 installation. To perform the attempted install of v12.04 I unplug the SATA cable from the 1TB drive and plug it into the 3TB drive (to avoid risking damage to my working v10.04 installation). I downloaded the ubuntu 64-bit v12.04 install DVD ISO file (~1.6 GB) from the ubuntu releases webpage and burned it onto a DVD. I have downloaded the DVD ISO file 3 times and burned 3 of these installation DVDs (twice with v10.04 and once with my winxp64 system), but none of them work. I run the "check disk" on the DVDs at the beginning of the installation process to assure the DVD is valid. When installation completes and the system boots the 3TB drive, it reports "unknown filesystem". After installation on the 250GB drives, the system boots up fine. During every install I plug the same SATA cable (sda) into only one disk drive (the 3TB or one of the 250GB drives) and leave the other disk drives unconnected (for simplicity). It is my understanding that 64-bit ubuntu (and 64-bit linux in general) has no problem with 3TB disk drives. In the BIOS I have tried having EFI set to "enabled" and "auto" with no apparent difference (no success). I never bothered setting the BIOS to "non-EFI". I have tried partitioning the drive in a few ways to see if that makes a difference, but so far it has not mattered. Typically I manually create partitions something like this: 8GB /boot ext4 8GB swap 3TB / ext4 But I've also tried the following, just in case it matters: 8GB boot efi 8GB swap 8GB /boot ext4 3TB / ext4 Note: In the partition dialog I specify bootup on the same drive I am partitioning and installing ubuntu v12.04 onto. It is a VERY DANGEROUS FACT that the default for this always comes up with the wrong drive (some other drive, generally the external drive). Unless I'm stupid or misunderstanding something, this is very wrong and very dangerous default behavior. Note: If I connect the SATA cable to the 1TB drive that has been my ubuntu 64-bit v10.04 system drive for the past 2 years, it boots up and runs fine. I guess there must be a log file somewhere, and maybe it gives some hints as to what the problem is. I should be able to boot off the 1TB drive with the 3TB drive connected as a secondary (non-boot) drive and get the log file, assuming there is one and someone tells me the name (and where to find it if the name is very generic). After installation on the 3TB drive completes and the system reboots, the following prints out on a black screen: Loading Operating System ... Boot from CD/DVD : Boot from CD/DVD : error: unknown filesystem grub rescue> Note: I have two DVD burners in the system, hence the duplicate line above. Note: I install and boot 64-bit ubuntu v12.04 on both of my 250GB in this same system, but still cannot make the 3TB drive boot. Sigh. Any ideas? 
========== motherboard == gigabyte 990FXA-UD7 CPU == AMD FX-8150 8-core bulldozer @ 3.6 GHz RAM == 8GB of DDR3 in 2 sticks (matched pair) HDD == seagate 3TB SATA3 @ 7200 rpm (new install 64-bit v12.04 FAILS) HDD == seagate 1TB SATA3 @ 7200 rpm (64-bit v10.04 WORKS for two years) HDD == seagate 250GB SATA2 @ 7200 rpm (new install 64-bit v12.04 WORKS) HDD == seagate 250GB SATA2 @ 7200 rpm (new install 64-bit v12.04 WORKS) GPU == nvidia GTX-285 ??? == no overclocking or other funky business USB == external seagate 2TB HDD for making backups DVD == one bluray burner (SATA) DVD == one DVD burner (SATA) 64-bit ubuntu v10.04 has booted and run fine on the seagate 1TB drive for 2 years.

    Read the article

  • Surface Review from Canadian Guy Who Didn&rsquo;t Go To Build

    - by D'Arcy Lussier
    I didn’t go to Build last week, opted to stay home and go trick-or-treating with my daughters instead. I had many friends that did go however, and I was able to catch up with James Chambers last night to hear about the conference and play with his Surface RT and Nokia 920 WP8 devices. I’ve been using Windows 8 for a while now, so I’m not going to comment on OS features – lots of posts out there on that already. Let me instead comment on the hardware itself. Size and Weight The size of the tablet was awesome. The Windows 8 tablet I’m using to reference this against is the one from Build 2011 (Samsung model) we received as well as my iPad. The Surface RT was taller and slightly heavier than the iPad, but smaller and lighter than the Samsung Win 8 tablet. I still don’t prefer the default wide-screen format, but the Surface RT is much more usable even when holding it by the long edge than the Samsung. Build Quality No issues with the build quality, it seemed very solid. But…y’know, people have been going on about how the Surface RT materials are so much better than the plastic feeling models Samsung and others put out. I didn’t really notice *that* much difference in that regard with the Surface RT. Interesting feature I didn’t expect – the Windows button on the device is touch-sensitive, not a mechanical one. I didn’t try video or anything, so I can’t comment on the media experience. The kickstand is a great feature, and the way the Surface RT connects to the combo case/keyboard touchcover is very slick while being incredibly simple. What About That Touch Cover Keyboard? So first, kudos to Microsoft on the touch cover! This thing was insanely responsive (including the trackpad) and really delivered on the thinness I was expecting. With that said, and remember this is with very limited use, I would probably go with the Type Cover instead of the Touch Cover. The difference is buttons. The Touch Cover doesn’t actually have “buttons” on the keyboard – hence why its a “touch” cover. You tap on a key to type it. James tells me after a while you get used to it and you can type very fast. For me, I just prefer the tactile feeling of a button being pressed/depressed. But still – typing on the touch case worked very well. Would I Buy One? So after playing with it, did I cry out in envy and rage that I wasn’t able to get one of these machines? Did I curse my decision to collect Halloween candy with my kids instead of being at Build getting hardware? Well – no. Even with the keyboard, the Surface RT is not a business laptop replacement device. While Office does come included, you can’t install any other applications outside of Windows Store Apps. This might be limiting depending on what other applications you need to have available on your computer. Surface RT is a great personal computing device, as long as you’re not already invested in a competing ecosystem. I’ve heard people make statements that they’re going to replace all the iPads in their homes with Surface tablets. In my home, that’s not feasible – my wife and daughters have amassed quite a collection of games via iTunes. We also buy all our music via iTunes as well, so even with the XBox streaming music service now available we’re still tied quite tightly to iTunes. So who is the Surface RT for? In my mind, if you’re looking for a solid, compact device that provides basic business functionality (read: email) or if you have someone that needs a very simple to use computer for email, web browsing, etc., then Surface RT is a great option. 
For me, I’m waiting on the Samsung Ativ Smart PC Pro and am curious to see what changes the Surface Pro will come with.

    Read the article

  • ASP.NET Server-side comments

    - by nmarun
    I believe a good number of you know about Server-side commenting. This blog is just like a revival to refresh your memories. When you write comments in your .aspx/.ascx files, people usually write them as: 1: <!-- This is a comment. --> To show that it actually makes a difference for using the server-side commenting technique, I’ve started a web application project and my default.aspx page looks like this: 1: <%@ Page Title="Home Page" Language="C#" MasterPageFile="~/Site.master" AutoEventWireup="true" CodeBehind="Default.aspx.cs" Inherits="ServerSideComment._Default" %> 2: <asp:Content ID="HeaderContent" runat="server" ContentPlaceHolderID="HeadContent"> 3: </asp:Content> 4: <asp:Content ID="BodyContent" runat="server" ContentPlaceHolderID="MainContent"> 5: <h2> 6: <!-- This is a comment --> 7: Welcome to ASP.NET! 8: </h2> 9: <p> 10: To learn more about ASP.NET visit <a href="http://www.asp.net" title="ASP.NET Website">www.asp.net</a>. 11: </p> 12: <p> 13: You can also find <a href="http://go.microsoft.com/fwlink/?LinkID=152368&amp;clcid=0x409" 14: title="MSDN ASP.NET Docs">documentation on ASP.NET at MSDN</a>. 15: </p> 16: </asp:Content> See the comment in line 6 and when I run the app, I can do a view source on the browser which shows up as: 1: <h2> 2: <!-- This is a comment --> 3: Welcome to ASP.NET! 4: </h2> Using Fiddler shows the page size as: Let’s change the comment style and use server-side commenting technique. 1: <h2> 2: <%-- This is a comment --%> 3: Welcome to ASP.NET! 4: </h2> Upon rendering, the view source looks like: 1: <h2> 2: 3: Welcome to ASP.NET! 4: </h2> Fiddler now shows the page size as: The difference is that client-side comments are ignored by the browser, but they are still sent down the pipe. With server-side comments, the compiler ignores everything inside this block. Visual Studio’s Text Editor toolbar also puts comments as server-side ones. If you want to give it a shot, go to your design page and press Ctrl+K, Ctrl+C on some selected text and you’ll see it commented in the server-side commenting style.

    Read the article

  • Friday Fun: The Search For Wondla

    - by Asian Angel
    The best day of the week is finally here again, so it is time to have some fun while waiting to go home for the weekend. The game we have for you today takes you far into humanity’s future where you journey with Eva Nine in her quest to find other humans. Note: Today’s game comes with a double bonus! First, there is a sequel game that you can move on to once you have completed the first one. Second, there are three wallpapers available in multiple sizes for those who enjoy the characters and artwork presented in the game (see below). The Search For Wondla The object of the game is to find the differences between two similar looking images based on artwork from The Search For Wondla by Tony DiTerlizzi. Are you ready to join Eva Nine in her quest to find other humans in the future? Note: There is a version available for those who would like to play The Search For Wondla on their iPads! The first game has 28 levels of difference finding goodness for you to work through. Each level will list the minimum number of differences that you need to find to progress to the next level. If you need a hint along the way just click on the Shake or Reveal options at the bottom of the game play window. Get a level completed quickly enough and you get bonus points! There will also be differences in the images for individual levels each time you play the game, so have fun! Note: The second game has 12 levels to complete. To give you a good feel for the game we have covered the first six levels here and provided seven clues for each level (you are only required to find a minimum of five). Eva Nine viewing the holographic outdoor projections in the main hub of her living quarters… Eva Nine is in a grumpy mood as Muthr visits her at bedtime… Eva Nine in her secret hideaway visiting old “childhood friends” as she contemplates her recent survival test failure. Eva Nine viewing the entire set of floor plans for the underground sanctuary where she was born and has been growing up. Eva Nine’s escape to the surface as the underground sanctuary is attacked by the bounty hunter creature Besteel. Eva Nine on the surface for the first time in her young life. Will she be successful in her quest? There is only one way to find out! Play The Search For Wondla Part 1 Play The Search For Wondla Part 2 Bonus Content If you have enjoyed this game you can learn more about the book and download the three wallpapers shown here by visiting the link below! Note: The wallpapers come in the following sizes: 1024*768, 1280*800, 1280*1024, 1440*900, iPhone, iPhone4, and iPad (click on the Extras link at the bottom of the page). Visit the Search For Wondla Homepage Do you enjoy playing difference finding games? Then you will definitely want to have a look at another wonderful game that we have covered here: Friday Fun: Isis

    Read the article

  • &lsquo;Publish&hellip;&rsquo; Resulting in Directory With No Files

    - by ToStringTheory
    I was pulling my hair out with this one…  Which isn’t good considering I have so little of it left!  I had just upgraded to the Windows Azure 1.7 SDK the day before with no problems, and used the upgraded ‘Publish…’ dialog to successfully publish a website to my hard disk for hosting on an internal development server.  However, when trying to deploy another project to my file system, it said it was successful, but there were no files in the directory.  The only difference, the first project was an Azure project, the second was a standard ASP.Net Web Application.  If you installed the Windows Azure 1.7 SDK, you may want to read this. The Problem At first it appears that there is no problem: However you may remember that when publishing a web application, the output window will generally iterate through each of the directories as it copies the files from that directory over.  Sure enough, when looking at the output directory – there are no files, no bin directory, no nothing… Troubleshooting Since one site published and the other did not, I believed that the failure may have been to a failed SQL Server 2012 installation that happened between publish.  I rolled back the installation, however that did not work either.  I also checked the Configuration Manager dialog, and ensured that the projects were selected to actually build (just checking, even though the output said it built them..)  I checked the properties of the solution and the projects, and a selection of files in the project to make sure that they were selected for content…  Nothing seemed to work. I then decided to uninstall the Azure 1.7 SDK to see if that was the culprit.  When I opened the Windows 7 ‘Uninstall a Program’ dialog, I noticed that the Azure SDK came with 2 extra packages that just so happen to be in a Release Candidate state from Microsoft – ‘Microsoft Web Deploy 3.0’ and ‘Microsoft Web Publish – Visual Studio 2010’.  It dawned on me that the publish dialog must not be just for Azure, since it appeared when I tried to deploy the regular web application as well.  Therefore, it must have been an upgrade to the publish mechanism in Visual Studio.  I uninstalled both of the programs and received my old publish dialog once again, and was able to successfully publish the solution above as I had done before. After celebrating solving the problem, I tried reinstalling the Azure package, to see if it would repair the publishing process. Even though it brought back the updated dialogs, it did not publish any files. Instead of uninstalling and retreating, I now KNEW what the cause was, and these were packages not just for Azure. I now knew a product name to search for. The Solution Sure enough, with the correct search term in Google – ‘microsoft web publish no files’, and setting the timeline to 1 week, I found what I needed - Microsoft Connect - Publish Web Application FAILS! (by Andrew Rits). I am surprised that I missed something that ended up being so simple…  In the Configuration Manager, I had the following settings: This is how I had been building and debugging the solution always…  However, apparently when installing the new Web Publishing package, it does things a little differently in its configuration for publishing: You see the difference?  The configuration here is set to ‘x86’ instead of ‘Any CPU’.  Sure enough, as soon as I switched the configuration to ‘Release – Any CPU’, the deployment built and published all of my files as I expected. 
    Conclusion It was a small change, but apparently the new ‘Publish web application’ dialog defaults to the x86 configuration, thereby not copying any of the project/bin files to the publish target directory.  I spent forever trying things, but this small drop-down eluded me until I realized that the dialog was actually working; apparently, I just didn’t have the correct configuration. I hope that this saves you the hours of frustration and hastened hair loss that it caused me…  I also hope that before Microsoft brings this publishing package out of RC status, they change the behavior of that menu to default to the settings of the old publish menu the first time it is used. Happy Coding!

    Read the article
