Search Results

Search found 18566 results on 743 pages for 'query hints'.


  • Wordpress Search Results in Order

    - by Brad Houston
    One of my clients' websites, www.kevinsplants.co.uk, is not showing search results in alphabetical order. How do I go about ordering the results alphabetically? We are using the Shopp plugin, and I believe it's that plugin that is generating the results! Cheers, Brad

        case "orderby-list":
            if (isset($Shopp->Category->controls)) return false;
            if (isset($Shopp->Category->smart)) return false;
            $menuoptions = Category::sortoptions();
            $title = "";
            $string = "";
            $default = $Shopp->Settings->get('default_product_order');
            if (empty($default)) $default = "title";
            if (isset($options['default'])) $default = $options['default'];
            if (isset($options['title'])) $title = $options['title'];
            if (value_is_true($options['dropdown'])) {
                if (isset($Shopp->Cart->data->Category['orderby']))
                    $default = $Shopp->Cart->data->Category['orderby'];
                $string .= $title;
                $string .= '<form action="'.esc_url($_SERVER['REQUEST_URI']).'" method="get" id="shopp-'.$Shopp->Category->slug.'-orderby-menu">';
                if (!SHOPP_PERMALINKS) {
                    foreach ($_GET as $key => $value)
                        if ($key != 'shopp_orderby')
                            $string .= '<input type="hidden" name="'.$key.'" value="'.$value.'" />';
                }
                $string .= '<select name="shopp_orderby" class="shopp-orderby-menu">';
                $string .= menuoptions($menuoptions, $default, true);
                $string .= '</select>';
                $string .= '</form>';
                $string .= '<script type="text/javascript">';
                $string .= "jQuery('#shopp-".$Shopp->Category->slug."-orderby-menu select.shopp-orderby-menu').change(function () { this.form.submit(); });";
                $string .= '</script>';
            } else {
                if (strpos($_SERVER['REQUEST_URI'], "?") !== false)
                    list($link, $query) = explode("?", $_SERVER['REQUEST_URI']);
                $query = $_GET;
                unset($query['shopp_orderby']);
                $query = http_build_query($query);
                if (!empty($query)) $query .= '&';
                foreach ($menuoptions as $value => $option) {
                    $label = $option;
                    $href = esc_url($link.'?'.$query.'shopp_orderby='.$value);
                    $string .= '<li><a href="'.$href.'">'.$label.'</a></li>';
                }
            }
            return $string;
            break;
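    For what it's worth, a minimal sketch of how ordinary WordPress search results can be forced into alphabetical order with the pre_get_posts hook. This is an assumption: Shopp builds its own product queries and may bypass the main query entirely, so this may not affect Shopp-generated listings.

        // Order native WordPress search results by title, ascending.
        add_action('pre_get_posts', function ($query) {
            if (!is_admin() && $query->is_main_query() && $query->is_search()) {
                $query->set('orderby', 'title');
                $query->set('order', 'ASC');
            }
        });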

    Read the article

  • MySQL - How do I insert an additional where clause into this full-text search (updated)

    - by Steven
    I want to add a WHERE clause to a full-text search query (to limit results to the past 24 hours), but wherever I insert it I get a Low Level Error. Is it possible to add the clause, and if so, how? Here is the code WITHOUT the where clause:

        $query = "SELECT *, MATCH (story_title) AGAINST ('$query' IN BOOLEAN MODE) AS Relevance
                  FROM stories
                  WHERE MATCH (story_title) AGAINST ('+$query' IN BOOLEAN MODE)
                  HAVING Relevance > 0.2
                  ORDER BY Relevance DESC, story_time DESC";
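    A hedged sketch of one placement MySQL accepts: join the time filter to the MATCH predicate with AND inside the existing WHERE clause. It assumes story_time is a DATETIME/TIMESTAMP column (the column name comes from the ORDER BY above); '+search terms' stands in for the interpolated $query value.

        SELECT *, MATCH (story_title) AGAINST ('+search terms' IN BOOLEAN MODE) AS Relevance
        FROM stories
        WHERE MATCH (story_title) AGAINST ('+search terms' IN BOOLEAN MODE)
          AND story_time >= NOW() - INTERVAL 1 DAY
        HAVING Relevance > 0.2
        ORDER BY Relevance DESC, story_time DESC;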

    Read the article

  • Hibernate annotation - extending base class - values are not being set - strange error

    - by gt_ebuddy
    I was following Hibernate: Use a Base Class to Map Common Fields and the OpenJPA inheritance tutorial to put common columns like ID and lastModifiedDate in a base class. My annotated mappings are as follows.

    BaseTable:

        @MappedSuperclass
        public abstract class BaseTable {

            @Id
            @GeneratedValue
            @Column(name = "id")
            private int id;

            @Column(name = "lastmodifieddate")
            private Date lastModifiedDate;
            ...

    Person table:

        @Entity
        @Table(name = "Person ")
        public class Person extends BaseTable implements Serializable {
            ...

    Generated create statement:

        create table Person (
            id integer not null auto_increment,
            lastmodifieddate datetime,
            name varchar(255),
            primary key (id)
        );

    After I save a Person object to the DB:

        Person p = new Person();
        p.setName("Test");
        p.setLastModifiedDate(new Date());
        ...
        getSession().save(p);

    I am setting the date field, but it saves the record with the generated ID, LastModifiedDate = null, and Name = "Test".

    Insert statement:

        insert into Person (lastmodifieddate, name) values (?, ?)
        binding parameter [1] as [TIMESTAMP] - <null>
        binding parameter [2] as [VARCHAR] - Test

    Read-by-ID query: when I run the Hibernate query below, it reads the person by the given ID.

        Criteria c = getSession().createCriteria(Person.class);
        c.add(Restrictions.eq("id", id));
        Person person = c.list().get(0); // person has the generated ID; LastModifiedDate is null

    Select query:

        select person0_.id as id8_, person0_.lastmodifieddate as lastmodi8_, person0_.name as person8_ from Person person0_
        - Found [1] as column [id8_]
        - Found [null] as column [lastmodi8_]
        - Found [Test] as column [person8_]

    Read-all query:

        Query query = getSession().createQuery("from " + Person.class.getName());
        List allPersons = query.list();

    Corresponding SQL for the read-all query:

        select person0_.id as id8_, person0_.lastmodifieddate as lastmodi8_, person0_.name as person8_ from Person person0_
        - Found [1] as column [id8_]
        - Found [null] as column [lastmodi8_]
        - Found [Test] as column [person8_]
        - Found [2] as column [id8_]
        - Found [null] as column [lastmodi8_]
        - Found [Test2] as column [person8_]

    But when I print out the list to the console it gets even weirder: it selects a list of Person objects where the ID fields are all 0 (why all 0?), LastModifiedDate is null, and the Name fields have valid values. I don't know what's wrong here; could you please take a look? FYI, my Hibernate core version is 4.1.2 and my MySQL Connector version is 5.1.9. In summary, there are two issues here: Why am I getting all ID fields = 0 when using read-all? And why is LastModifiedDate not being inserted?

    Read the article

  • Does Entity Framework currently support custom functions in the Where clause?

    - by André Pena
    Entity Framework currently supports table-valued functions and custom functions defined in the SSDL, but I can't find any example of one being used as a criterion in the WHERE clause. Example:

        var query = this.db.People;
        query = query.Where(p => FullText.ContainsInName(p.Id, "George"));

    In this example, ContainsInName is my custom function that I want executed in the WHERE clause of the query. Is it supported?
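    For reference, a hedged sketch of the EdmFunction mechanism that LINQ to Entities uses to translate CLR method calls into functions declared in the SSDL. The namespace "MyModel.Store", the function name, and the signature are all assumptions for illustration:

        using System;
        using System.Data.Objects.DataClasses;

        public static class FullText
        {
            // Binds this CLR method to a function declared in the SSDL.
            // "MyModel.Store" and "ContainsInName" are placeholder names.
            [EdmFunction("MyModel.Store", "ContainsInName")]
            public static bool ContainsInName(int personId, string name)
            {
                // Only translatable inside a LINQ to Entities query;
                // calling it directly is not supported.
                throw new NotSupportedException(
                    "This method can only be used in a LINQ to Entities query.");
            }
        }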

    Read the article

  • How to use the IN operator in LINQ?

    - by Umapathy
    Query:

        SELECT *
        FROM pu_Products
        WHERE Part_Number IN ('031832', '027861', '028020', '033378')
          AND User_Id = 50
          AND Is_Deleted = 0

    The above query is in SQL, and I need it converted into LINQ. Is there an option like the IN operator in LINQ? Can you convert the above query into LINQ?
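    A hedged sketch of the usual translation: LINQ has no IN keyword, but Contains on a local collection is translated into a SQL IN clause by LINQ to SQL and Entity Framework. The data context name "db" and the property names below are assumed from the SQL above:

        var partNumbers = new[] { "031832", "027861", "028020", "033378" };

        var products = from p in db.pu_Products        // "db" is an assumed data context
                       where partNumbers.Contains(p.Part_Number)
                             && p.User_Id == 50
                             && p.Is_Deleted == 0      // or == false if the column is a bit/bool
                       select p;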

    Read the article

  • I need help with a SQL query. Fetching an entry, its most recent revision and its fields.

    - by Tigger ate my dad
    Hi there, I'm building a CMS for my own needs and have finished planning my database layout. Basically, I am abstracting all possible data models into "sections" and all entries into one table. The final layout is as follows.

    Database diagram: I have yet to be allowed to post images, so here is a link to a diagram of my database.

    Entries (section_entries) are children of their section (sections). I save all edits to the entries in a new revision (section_entry_revisions), and also track revisions on the sections (section_revisions), in order to match the values of a revision to the fields of the section that existed when the entry revision was made. The section revisions can have a number of fields (section_revision_fields) that define the attributes of entries in the section. There is a many-to-many relationship between the fields (section_revision_fields) and the entry revisions (section_entry_revisions) that stores the values of the attributes defined by the section revision. Feel free to ask questions if the diagram is confusing.

    Now, this is the most complex SQL I've ever worked with, and the task of fetching my data is a little daunting. Basically, what I want help with is fetching an entry when the only known variables are section_id and section_entry_id. The query should fetch the most recent revision of that entry, and the section_revision model corresponding to section_revision_id in the section_entry_revisions table. It should also fetch the values of the fields in the section revision. I was hoping for a query result with as many rows as there are fields in the section. Each row would contain the information of the entry and the section, plus the information for one of the fields (i.e. each row corresponding to a field and its value).

    I tried to explain this the best I could. Again, feel free to ask questions if my description is somehow lacking. I hope someone is up for the challenge. Best regards. :-)
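    A hedged sketch of the join shape described above, using the table names from the question. Since the diagram itself isn't visible, the key column names and the value table (assumed here as section_entry_revision_values) are guesses:

        SELECT e.id   AS entry_id,
               er.id  AS entry_revision_id,
               f.id   AS field_id,
               f.name AS field_name,
               v.value
        FROM section_entries e
        JOIN section_entry_revisions er
          ON er.section_entry_id = e.id
         AND er.id = (SELECT MAX(er2.id)                  -- most recent revision
                      FROM section_entry_revisions er2
                      WHERE er2.section_entry_id = e.id)
        JOIN section_revisions sr
          ON sr.id = er.section_revision_id
        JOIN section_revision_fields f
          ON f.section_revision_id = sr.id
        LEFT JOIN section_entry_revision_values v         -- assumed junction table
          ON v.section_entry_revision_id = er.id
         AND v.section_revision_field_id = f.id
        WHERE e.section_id = ?    -- known section_id
          AND e.id = ?;           -- known section_entry_id

    This yields one row per field of the section revision, each carrying the entry and revision columns plus that field's value, as the question requests.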

    Read the article

  • finding if an anniversary is coming up in n days in MySql

    - by user151841
    I have a table with anniversary dates. I want a query that returns rows for anniversaries coming up in the next 10 days. For instance:

        birthdate
        ---------
        1965-10-10
        1982-05-25

        SELECT birthdate
        FROM Anniversaries
        WHERE mystical_magical_mumbo_jumbo <= 10

        +------------+
        | birthdate  |
        +------------+
        | 1982-05-25 |
        +------------+
        1 row in set (0.01 sec)

    I'd like to keep the query in the form x <= 10, because I'll use that number 10 in other parts of the query; if I set it to a variable, I can change it once everywhere by changing the variable and not have to rewrite the query.
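    One hedged way to fill in the mumbo-jumbo, assuming birthdate is a DATE column: compute this year's anniversary, roll it forward a year if it has already passed, and compare the difference against a session variable so the filter keeps the x <= @window shape the question asks for.

        SET @window := 10;

        SELECT birthdate
        FROM Anniversaries
        WHERE DATEDIFF(
                -- next upcoming anniversary: add (current year - birth year) years,
                -- plus one more year if that date has already passed this year
                DATE_ADD(birthdate,
                         INTERVAL YEAR(CURDATE()) - YEAR(birthdate)
                                + (DATE_ADD(birthdate,
                                            INTERVAL YEAR(CURDATE()) - YEAR(birthdate) YEAR)
                                   < CURDATE())
                         YEAR),
                CURDATE()) <= @window;

    Note that DATE_ADD clamps Feb 29 anniversaries to Feb 28 in non-leap years, which may or may not be the behavior you want.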

    Read the article

  • LINQ to SQL selecting fields

    - by user3686904
    I am trying to populate more columns in the query below; could someone assist me?

    QUERY:

        var query = from r in SQLresults.AsEnumerable()
                    group r by r.Field<string>("COLUMN_ONE") into groupedTable
                    select new
                    {
                        c1 = groupedTable.Key,
                        c2 = groupedTable.Sum(s => s.Field<decimal>("COLUMN_TWO")),
                    };

    How could I get a column named COLUMN_THREE into this query? Thanks in advance
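    A hedged sketch of two common options, depending on what COLUMN_THREE should mean per group; its string type below is an assumption:

        // Option 1: make COLUMN_THREE part of the grouping key.
        var query1 = from r in SQLresults.AsEnumerable()
                     group r by new
                     {
                         One = r.Field<string>("COLUMN_ONE"),
                         Three = r.Field<string>("COLUMN_THREE")
                     } into g
                     select new
                     {
                         c1 = g.Key.One,
                         c2 = g.Sum(s => s.Field<decimal>("COLUMN_TWO")),
                         c3 = g.Key.Three
                     };

        // Option 2: keep the original key and aggregate COLUMN_THREE,
        // e.g. take the first value seen in each group.
        var query2 = from r in SQLresults.AsEnumerable()
                     group r by r.Field<string>("COLUMN_ONE") into g
                     select new
                     {
                         c1 = g.Key,
                         c2 = g.Sum(s => s.Field<decimal>("COLUMN_TWO")),
                         c3 = g.Select(s => s.Field<string>("COLUMN_THREE")).First()
                     };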

    Read the article

  • MX records not correctly updated by the Google DNS servers

    - by Mac_Cain13
    We are currently losing some e-mail, and we discovered that this is caused by a wrong DNS setting. We used a CNAME for our MX record, and that's not allowed, so about 2 weeks ago we changed it to an A record to fix the problem. It seems all major DNS services (like OpenDNS and ISPs) have synced their records and are returning correct results for our DNS queries. But Google's DNS service (at 8.8.8.8) is still returning the CNAME values, and we still see that some e-mails are not delivered correctly.

    Query on OpenDNS:

        ; <<>> DiG 9.7.3-P3 <<>> mx wrep.nl @208.67.222.222
        ;; global options: +cmd
        ;; Got answer:
        ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 51231
        ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0

        ;; QUESTION SECTION:
        ;wrep.nl.                   IN  MX

        ;; ANSWER SECTION:
        wrep.nl.            3595    IN  MX  10 druif.wrep.nl.

        ;; Query time: 21 msec
        ;; SERVER: 208.67.222.222#53(208.67.222.222)
        ;; WHEN: Fri Nov 25 21:36:58 2011
        ;; MSG SIZE  rcvd: 47

    Query on Google DNS:

        ; <<>> DiG 9.7.3-P3 <<>> mx wrep.nl @8.8.8.8
        ;; global options: +cmd
        ;; Got answer:
        ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 12124
        ;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 1, ADDITIONAL: 0

        ;; QUESTION SECTION:
        ;wrep.nl.                   IN  MX

        ;; ANSWER SECTION:
        wrep.nl.            2372    IN  CNAME   druif.wrep.nl.

        ;; AUTHORITY SECTION:
        wrep.nl.            572     IN  SOA ns0.freshdns.nl. hostmaster.twilightinc.nl. 2011112401 14400 3600 604800 3600

        ;; Query time: 94 msec
        ;; SERVER: 8.8.8.8#53(8.8.8.8)
        ;; WHEN: Fri Nov 25 21:38:10 2011
        ;; MSG SIZE  rcvd: 117

    So is there anyone who can explain why Google is responding with a different (incorrect) result two weeks after the last change? And how can we get Google to update their DNS records correctly? Any help is very much appreciated. (Please note that other domains that are managed by the same DNS servers/tools are working fine.)
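    One hedged way to narrow this down: query the zone's authoritative server directly (ns0.freshdns.nl is taken from the SOA record above) and compare its answer with what the caches return. If the authoritative answer is already the corrected record, the stale CNAME should age out of Google's cache once its remaining TTL expires.

        dig mx wrep.nl @ns0.freshdns.nl +norecurse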

    Read the article

  • Inheritance Mapping Strategies with Entity Framework Code First CTP5 Part 1: Table per Hierarchy (TPH)

    - by mortezam
    A simple strategy for mapping classes to database tables might be "one table for every entity persistent class." This approach sounds simple enough and, indeed, works well until we encounter inheritance. Inheritance is such a visible structural mismatch between the object-oriented and relational worlds because object-oriented systems model both "is a" and "has a" relationships. SQL-based models provide only "has a" relationships between entities; SQL database management systems don't support type inheritance, and even when it's available, it's usually proprietary or incomplete. There are three different approaches to representing an inheritance hierarchy:

    • Table per Hierarchy (TPH): Enable polymorphism by denormalizing the SQL schema, and utilize a type discriminator column that holds type information.
    • Table per Type (TPT): Represent "is a" (inheritance) relationships as "has a" (foreign key) relationships.
    • Table per Concrete class (TPC): Discard polymorphism and inheritance relationships completely from the SQL schema.

    I will explain each of these strategies in a series of posts, and this one is dedicated to TPH. In this series we'll dig deeply into each of these strategies and will learn about "why" to choose them as well as "how" to implement them. Hopefully it will give you a better idea about which strategy to choose in a particular scenario.

    Inheritance Mapping with Entity Framework Code First

    All of the inheritance mapping strategies that we discuss in this series will be implemented with EF Code First CTP5. The CTP5 build of the new EF Code First library was released by the ADO.NET team earlier this month. EF Code First enables a pretty powerful code-centric development workflow for working with data. I'm a big fan of the EF Code First approach, and I'm pretty excited about the productivity and power that it brings. When it comes to inheritance mapping, Code First not only fully supports all the strategies but also gives you ultimate flexibility to work with domain models that involve inheritance. The fluent API for inheritance mapping in CTP5 has been improved a lot and is now more intuitive and concise compared to CTP4.

    A Note For Those Who Follow Other Entity Framework Approaches

    If you are following EF's "Database First" or "Model First" approaches, I still recommend reading this series: although the implementation is Code First specific, the explanations around each of the strategies apply perfectly to all approaches, be it Code First or others.

    A Note For Those Who Are New to Entity Framework and Code First

    If you choose to learn EF you've chosen well. If you choose to learn EF with Code First you've done even better. To get started, you can find a great walkthrough by Scott Guthrie here and another one by the ADO.NET team here. In this post, I assume you have already set up your machine to do Code First development and that you are familiar with Code First fundamentals and basic concepts. You might also want to check out my other posts on EF Code First, like Complex Types and Shared Primary Key Associations.

    A Top-Down Development Scenario

    These posts take a top-down approach; they assume that you're starting with a domain model and trying to derive a new SQL schema. Therefore, we start with an existing domain model, implement it in C#, and then let Code First create the database schema for us. However, the mapping strategies described are just as relevant if you're working bottom up, starting with existing database tables.
    I'll show some tricks along the way that help you deal with imperfect table layouts. Let's start with the mapping of entity inheritance.

    The Domain Model

    In our domain model, we have a BillingDetail base class which is abstract (note the italic font on the UML class diagram below). We do allow various billing types and represent them as subclasses of the BillingDetail class. For now, we support CreditCard and BankAccount.

    Implement the Object Model with Code First

    As always, we start with the POCO classes. Note that in our DbContext, I only define one DbSet for the base class, which is BillingDetail. Code First will find the other classes in the hierarchy based on the Reachability Convention.

        public abstract class BillingDetail
        {
            public int BillingDetailId { get; set; }
            public string Owner { get; set; }
            public string Number { get; set; }
        }

        public class BankAccount : BillingDetail
        {
            public string BankName { get; set; }
            public string Swift { get; set; }
        }

        public class CreditCard : BillingDetail
        {
            public int CardType { get; set; }
            public string ExpiryMonth { get; set; }
            public string ExpiryYear { get; set; }
        }

        public class InheritanceMappingContext : DbContext
        {
            public DbSet<BillingDetail> BillingDetails { get; set; }
        }

    This object model is all that is needed to enable inheritance with Code First. If you put this in your application, you would be able to immediately start working with the database and do CRUD operations. Before going into details about how EF Code First maps this object model to the database, we need to learn about one of the core concepts of inheritance mapping: polymorphic and non-polymorphic queries.

    Polymorphic Queries

    LINQ to Entities and EntitySQL, as object-oriented query languages, both support polymorphic queries, that is, queries for instances of a class and all instances of its subclasses. For example, consider the following query:

        IQueryable<BillingDetail> linqQuery = from b in context.BillingDetails select b;
        List<BillingDetail> billingDetails = linqQuery.ToList();

    Or the same query in EntitySQL:

        string eSqlQuery = @"SELECT VALUE b FROM BillingDetails AS b";
        ObjectQuery<BillingDetail> objectQuery = ((IObjectContextAdapter)context).ObjectContext
                                                                                .CreateQuery<BillingDetail>(eSqlQuery);
        List<BillingDetail> billingDetails = objectQuery.ToList();

    linqQuery and eSqlQuery are both polymorphic and return a list of objects of the type BillingDetail, which is an abstract class, but the actual concrete objects in the list are of the subtypes of BillingDetail: CreditCard and BankAccount.

    Non-polymorphic Queries

    All LINQ to Entities and EntitySQL queries are polymorphic: they return not only instances of the specific entity class to which they refer, but instances of all subclasses of that class as well. Non-polymorphic queries, on the other hand, are queries whose polymorphism is restricted and which return only instances of a particular subclass. In LINQ to Entities, this can be specified by using the OfType<T>() method.
    For example, the following query returns only instances of BankAccount:

        IQueryable<BankAccount> query = from b in context.BillingDetails.OfType<BankAccount>() select b;

    EntitySQL has an OFTYPE operator that does the same thing:

        string eSqlQuery = @"SELECT VALUE b FROM OFTYPE(BillingDetails, Model.BankAccount) AS b";

    In fact, the above query with the OFTYPE operator is a short form of the following query expression that uses the TREAT and IS OF operators:

        string eSqlQuery = @"SELECT VALUE TREAT(b as Model.BankAccount)
                             FROM BillingDetails AS b
                             WHERE b IS OF(Model.BankAccount)";

    (Note that in the above query, Model.BankAccount is the fully qualified name for the BankAccount class. You need to replace "Model" with your own namespace name.)

    Table per Class Hierarchy (TPH)

    An entire class hierarchy can be mapped to a single table. This table includes columns for all properties of all classes in the hierarchy. The concrete subclass represented by a particular row is identified by the value of a type discriminator column. You don't have to do anything special in Code First to enable TPH; it's the default inheritance mapping strategy. This mapping strategy is a winner in terms of both performance and simplicity. It's the best-performing way to represent polymorphism (both polymorphic and non-polymorphic queries perform well) and it's even easy to implement by hand. Ad-hoc reporting is possible without complex joins or unions. Schema evolution is straightforward.

    Discriminator Column

    As you can see in the DB schema above, Code First has to add a special column to distinguish between persistent classes: the discriminator. This isn't a property of the persistent class in our object model; it's used internally by EF Code First. By default, the column name is "Discriminator", and its type is string. The values default to the persistent class names, in this case "BankAccount" or "CreditCard". EF Code First automatically sets and retrieves the discriminator values.

    TPH Requires Properties in Subclasses to Be Nullable in the Database

    TPH has one major problem: columns for properties declared by subclasses will be nullable in the database. For example, Code First created an (INT, NULL) column to map the CardType property in the CreditCard class. However, in a typical mapping scenario, Code First always creates an (INT, NOT NULL) column in the database for an int property in a persistent class. But in this case, since a BankAccount instance won't have a CardType property, the CardType field must be NULL for that row, so Code First creates an (INT, NULL) column instead. If your subclasses each define several non-nullable properties, the loss of NOT NULL constraints may be a serious problem from the point of view of data integrity.

    TPH Violates the Third Normal Form

    Another important issue is normalization. We've created functional dependencies between nonkey columns, violating the third normal form. Basically, the value of the Discriminator column determines the corresponding values of the columns that belong to the subclasses (e.g. BankName), but Discriminator is not part of the primary key for the table. As always, denormalization for performance can be misleading, because it sacrifices long-term stability, maintainability, and the integrity of data for immediate gains that may also be achieved by proper optimization of the SQL execution plans (in other words, ask your DBA).
    Generated SQL Query

    Let's take a look at the SQL statements that EF Code First sends to the database when we write queries in LINQ to Entities or EntitySQL. For example, the polymorphic query for BillingDetails that you saw generates the following SQL statement:

        SELECT
            [Extent1].[Discriminator] AS [Discriminator],
            [Extent1].[BillingDetailId] AS [BillingDetailId],
            [Extent1].[Owner] AS [Owner],
            [Extent1].[Number] AS [Number],
            [Extent1].[BankName] AS [BankName],
            [Extent1].[Swift] AS [Swift],
            [Extent1].[CardType] AS [CardType],
            [Extent1].[ExpiryMonth] AS [ExpiryMonth],
            [Extent1].[ExpiryYear] AS [ExpiryYear]
        FROM [dbo].[BillingDetails] AS [Extent1]
        WHERE [Extent1].[Discriminator] IN ('BankAccount','CreditCard')

    The non-polymorphic query for the BankAccount subclass generates this SQL statement:

        SELECT
            [Extent1].[BillingDetailId] AS [BillingDetailId],
            [Extent1].[Owner] AS [Owner],
            [Extent1].[Number] AS [Number],
            [Extent1].[BankName] AS [BankName],
            [Extent1].[Swift] AS [Swift]
        FROM [dbo].[BillingDetails] AS [Extent1]
        WHERE [Extent1].[Discriminator] = 'BankAccount'

    Note how Code First adds a restriction on the discriminator column and also how it selects only those columns that belong to the BankAccount entity.

    Change the Discriminator Column Data Type and Values With the Fluent API

    Sometimes, especially in legacy schemas, you need to override the conventions for the discriminator column so that Code First can work with the schema. The following fluent API code will change the discriminator column name to "BillingDetailType" and the values to "BA" and "CC" for BankAccount and CreditCard respectively:

        protected override void OnModelCreating(System.Data.Entity.ModelConfiguration.ModelBuilder modelBuilder)
        {
            modelBuilder.Entity<BillingDetail>()
                        .Map<BankAccount>(m => m.Requires("BillingDetailType").HasValue("BA"))
                        .Map<CreditCard>(m => m.Requires("BillingDetailType").HasValue("CC"));
        }

    Changing the data type of the discriminator column is also interesting. In the above code, we passed strings to the HasValue method, but this method has been defined to accept an object:

        public void HasValue(object value);

    Therefore, if, for example, we pass a value of type int to it, then Code First not only uses our desired values (i.e. 1 and 2) in the discriminator column but also changes the column type to (INT, NOT NULL):

        modelBuilder.Entity<BillingDetail>()
                    .Map<BankAccount>(m => m.Requires("BillingDetailType").HasValue(1))
                    .Map<CreditCard>(m => m.Requires("BillingDetailType").HasValue(2));

    Summary

    In this post we learned about Table per Hierarchy as the default mapping strategy in Code First. The disadvantages of the TPH strategy may be too serious for your design; after all, denormalized schemas can become a major burden in the long run. Your DBA may not like it at all. In the next post, we will learn about the Table per Type (TPT) strategy, which doesn't expose you to this problem.

    References
    ADO.NET team blog
    Java Persistence with Hibernate book

    Read the article

  • Using Oracle Proxy Authentication with JPA (eclipselink-Style)

    - by olaf.heimburger
    Security is a very intriguing topic. You will find it everywhere, and you need to implement it everywhere. Yes, you do. Unfortunately, one can easily forget it while implementing the last mile.

    The Last Mile

    In a multi-tier application it is a common practice to use connection pools between the business layer and the database layer. Connection pools are quite useful for speeding up database connection creation and splitting the load. Another very common practice is to use a specific, often called technical, user to connect to the database. This user has authentication and authorization rules that apply to all application users. Imagine you've put every effort into defining roles for the different types of users that use your application. These roles are necessary to differentiate between normal users, premium users, and administrators (I bet you will find, or already have, more roles in your application). While these user roles are used throughout your application, once the flow of execution enters the database everything is gone. Each and every user has just one role and is the same database user.

    Issues? What Issues?

    As long as things go well, this is not a real issue. However, things do not go well all the time. Once your application becomes famous, performance decreases in certain situations, or, more importantly, current and upcoming regulations and laws require that your application be able to apply different security measures on a per-user-role basis at every stage of your application. If you only have a bunch of users with the same name and role, you are not able to find the application usage profile that causes the performance issue, or to tell which user has accessed data that he or she is not allowed to. Another threat to your role concept is that databases tend to be used by different applications and tools. These tools can be developer tools like SQL*Plus, SQL Developer, etc., or end-user applications like BI Publisher, Oracle Forms, and so on. These tools have no idea of your application's role concept and access the database the way they think is appropriate. A big oversight in your perfect role model and a big nightmare for your Chief Security Officer. Speaking of the CSO brings up another issue: password management. Once your technical user account is compromised, every user is able to do things that he or she is not expected to do by the design of your application.

    Counter Measures

    In the Oracle world a common counter measure is to use Virtual Private Database (VPD). This restricts the values a database user can see to the allowed minimum. However, it doesn't help with a connection pool user, because this one is still not the real user.

    Oracle Proxy Authentication

    Another feature of the Oracle database is Proxy Authentication. First introduced with version 9i, it is a quite useful feature for nearly every situation. The main idea behind Proxy Authentication is to create a crippled database user who has only connect rights. Even if this user is compromised, the risks are well understood and fairly limited. This user can be used in every situation in which you need to connect to the database, no matter which tool or application (see above) you use. The proxy user is perfect for multi-tier connection pools.

        CREATE USER app_user IDENTIFIED BY abcd1234;
        GRANT CREATE SESSION TO app_user;

    But what if you need to access real data? Well, this is the primary use case, isn't it? Now is the time to bring the application's role concept into play.
    You define database roles that hold the grants for your identified user groups. Once you have these roles, you grant access through the proxy user, with the application role, to the specific user.

        CREATE ROLE app_role_a;
        GRANT app_role_a TO hr;
        ALTER USER hr GRANT CONNECT THROUGH app_user WITH ROLE app_role_a;

    Now, hr has permission to connect to the database through the proxy user. Through the role you can restrict hr's rights to those needed by the application. If hr connects to the database directly, all assigned roles and permissions apply.

    Testing the Setup

    To test the setup you can use SQL*Plus and connect to your database:

        $ sqlplus app_user[hr]/abcd1234

    Java Persistence API

    The Java Persistence API (JPA) is a fairly easy means to build applications that retrieve data from the database and put it into Java objects. You use plain old Java objects (POJOs) and mix in some Java annotations that define how the attributes of the object are used for storing data from the database into the Java object. Here is a sample for objects from the HR sample schema EMPLOYEES table. When using Java annotations, you only specify what cannot be deduced from the code. If your Java class name is Employee but the table name is EMPLOYEES, you need to specify the table name; otherwise it will fail.

        package demo.proxy.ejb;

        import java.io.Serializable;
        import java.sql.Timestamp;
        import java.util.List;
        import javax.persistence.Column;
        import javax.persistence.Entity;
        import javax.persistence.Id;
        import javax.persistence.JoinColumn;
        import javax.persistence.ManyToOne;
        import javax.persistence.NamedQueries;
        import javax.persistence.NamedQuery;
        import javax.persistence.OneToMany;
        import javax.persistence.Table;

        @Entity
        @NamedQueries({ @NamedQuery(name = "Employee.findAll", query = "select o from Employee o") })
        @Table(name = "EMPLOYEES")
        public class Employee implements Serializable {

            @Column(name = "COMMISSION_PCT")
            private Double commissionPct;

            @Column(name = "DEPARTMENT_ID")
            private Long departmentId;

            @Column(nullable = false, unique = true, length = 25)
            private String email;

            @Id
            @Column(name = "EMPLOYEE_ID", nullable = false)
            private Long employeeId;

            @Column(name = "FIRST_NAME", length = 20)
            private String firstName;

            @Column(name = "HIRE_DATE", nullable = false)
            private Timestamp hireDate;

            @Column(name = "JOB_ID", nullable = false, length = 10)
            private String jobId;

            @Column(name = "LAST_NAME", nullable = false, length = 25)
            private String lastName;

            @Column(name = "PHONE_NUMBER", length = 20)
            private String phoneNumber;

            private Double salary;

            @ManyToOne
            @JoinColumn(name = "MANAGER_ID")
            private Employee employee;

            @OneToMany(mappedBy = "employee")
            private List<Employee> employeeList;

            public Employee() {
            }

            public Employee(Double commissionPct, Long departmentId, String email, Long employeeId,
                            String firstName, Timestamp hireDate, String jobId, String lastName,
                            Employee employee, String phoneNumber, Double salary) {
                this.commissionPct = commissionPct;
                this.departmentId = departmentId;
                this.email = email;
                this.employeeId = employeeId;
                this.firstName = firstName;
                this.hireDate = hireDate;
                this.jobId = jobId;
                this.lastName = lastName;
                this.employee = employee;
                this.phoneNumber = phoneNumber;
                this.salary = salary;
            }

            public Double getCommissionPct() { return commissionPct; }
            public void setCommissionPct(Double commissionPct) { this.commissionPct = commissionPct; }

            public Long getDepartmentId() { return departmentId; }
            public void setDepartmentId(Long departmentId) { this.departmentId = departmentId; }

            public String getEmail() { return email; }
            public void setEmail(String email) { this.email = email; }

            public Long getEmployeeId() { return employeeId; }
            public void setEmployeeId(Long employeeId) { this.employeeId = employeeId; }

            public String getFirstName() { return firstName; }
            public void setFirstName(String firstName) { this.firstName = firstName; }

            public Timestamp getHireDate() { return hireDate; }
            public void setHireDate(Timestamp hireDate) { this.hireDate = hireDate; }

            public String getJobId() { return jobId; }
            public void setJobId(String jobId) { this.jobId = jobId; }

            public String getLastName() { return lastName; }
            public void setLastName(String lastName) { this.lastName = lastName; }

            public String getPhoneNumber() { return phoneNumber; }
            public void setPhoneNumber(String phoneNumber) { this.phoneNumber = phoneNumber; }

            public Double getSalary() { return salary; }
            public void setSalary(Double salary) { this.salary = salary; }

            public Employee getEmployee() { return employee; }
            public void setEmployee(Employee employee) { this.employee = employee; }

            public List<Employee> getEmployeeList() { return employeeList; }
            public void setEmployeeList(List<Employee> employeeList) { this.employeeList = employeeList; }

            public Employee addEmployee(Employee employee) {
                getEmployeeList().add(employee);
                employee.setEmployee(this);
                return employee;
            }

            public Employee removeEmployee(Employee employee) {
                getEmployeeList().remove(employee);
                employee.setEmployee(null);
                return employee;
            }
        }

    JPA can be used in standalone applications and in Java EE containers. In both worlds you normally create a facade to retrieve or store the values of the entities from or to the database. The facade does this via an EntityManager, which will be injected by the Java EE container. Here is a sample facade session bean for a Java EE container.
        package demo.proxy.ejb;

        import java.util.HashMap;
        import java.util.List;
        import javax.ejb.Local;
        import javax.ejb.Remote;
        import javax.ejb.Stateless;
        import javax.persistence.EntityManager;
        import javax.persistence.PersistenceContext;
        import javax.persistence.Query;
        import javax.interceptor.AroundInvoke;
        import javax.interceptor.InvocationContext;
        import oracle.jdbc.driver.OracleConnection;
        import org.eclipse.persistence.config.EntityManagerProperties;
        import org.eclipse.persistence.internal.jpa.EntityManagerImpl;

        @Stateless(name = "DataFacade", mappedName = "ProxyUser-TestEJB-DataFacade")
        @Remote
        @Local
        public class DataFacadeBean implements DataFacade, DataFacadeLocal {

            @PersistenceContext(unitName = "TestEJB")
            private EntityManager em;
            private String username;

            public Object queryByRange(String jpqlStmt, int firstResult, int maxResults) {
                // setSessionUser();
                Query query = em.createQuery(jpqlStmt);
                if (firstResult > 0) {
                    query = query.setFirstResult(firstResult);
                }
                if (maxResults > 0) {
                    query = query.setMaxResults(maxResults);
                }
                return query.getResultList();
            }

            public Employee persistEmployee(Employee employee) {
                // setSessionUser();
                em.persist(employee);
                return employee;
            }

            public Employee mergeEmployee(Employee employee) {
                // setSessionUser();
                return em.merge(employee);
            }

            public void removeEmployee(Employee employee) {
                // setSessionUser();
                employee = em.find(Employee.class, employee.getEmployeeId());
                em.remove(employee);
            }

            /** select o from Employee o */
            public List<Employee> getEmployeeFindAll() {
                Query q = em.createNamedQuery("Employee.findAll");
                return q.getResultList();
            }

    Putting Both Together

    To use Proxy Authentication with JPA and within a Java EE container, you have to take care of two additional requirements: use an OCI JDBC driver, and provide the user name that connects through the proxy user.

    Use an OCI JDBC Driver

    To use the OCI JDBC driver you need to set up your JDBC data source to use the correct JDBC URL. The relevant data source settings are:

        name: hr
        url: jdbc:oracle:oci8:@(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=localhost)(PORT=1521))(CONNECT_DATA=(SID=XE)))
        driver: oracle.jdbc.OracleDriver
        property: user = app_user
        password (encrypted): 62C32F70E98297522AD97E15439FAC0E
        test table name: SQL SELECT 1 FROM DUAL
        JNDI name: jdbc/hrDS
        scope: Application

    Additionally, you need to make sure that the version of the shared libraries of the OCI driver matches the version of the JDBC driver in your Java EE container or Java application, and that they are on your PATH (on Windows) or LD_LIBRARY_PATH (on most Unix-based systems). Installing the Oracle Database Instant Client software works perfectly.

    Provide the User Name That Connects Through the Proxy User

    This part needs some modification of your application software and session facade.

    Session Facade Changes

    In the session facade we must ensure that every call that goes through the EntityManager is prepared correctly and uniquely assigned to this session. The second point is really important, as the EntityManager works with a connection pool and cannot guarantee that we set the proxy user on the connection that will be used for the database activities. To avoid changing every method call of the session facade, we provide a method to set the name of the user that connects through the proxy user. This method needs to be called by the facade client before doing anything else.

        public void setUsername(String name) {
            username = name;
        }

    Next we provide a means to instruct the TopLink EntityManager delegate to use Oracle Proxy Authentication. (I love small helper methods that hide the nitty-gritty details and avoid repeating myself.)
        private void setSessionUser() {
            setSessionUser(username);
        }

        private void setSessionUser(String user) {
            if (user != null && !user.isEmpty()) {
                EntityManagerImpl emDelegate = ((EntityManagerImpl) em.getDelegate());
                emDelegate.setProperty(EntityManagerProperties.ORACLE_PROXY_TYPE, OracleConnection.PROXYTYPE_USER_NAME);
                emDelegate.setProperty(OracleConnection.PROXY_USER_NAME, user);
                emDelegate.setProperty(EntityManagerProperties.EXCLUSIVE_CONNECTION_MODE, "Always");
            }
        }

    The final step is to use the EJB 3.0 AroundInvoke interceptor. This interceptor will be called around every method invocation. We therefore check whether one of the facade methods is being called. If so, we set the user for proxy authentication, and the normal method flow continues.

        @AroundInvoke
        public Object proxyInterceptor(InvocationContext invocationCtx) throws Exception {
            if (invocationCtx.getTarget() instanceof DataFacadeBean) {
                setSessionUser();
            }
            return invocationCtx.proceed();
        }

    Benefits

    Apart from implementing the role model of your application, using Oracle Proxy Authentication has a number of additional benefits:

    • Fine-grained access control for temporary users of the account, without compromising the original password.
    • Enabling database auditing and logging.
    • Better identification of performance bottlenecks.

    References
    Effective Oracle Database 10g Security by Design, David Knox
    TopLink Developer's Guide, Chapter 98

    Read the article

  • How to display a dependent list box disabled if no child data exist

    - by frank.nimphius
    A requirement on OTN was to disable the dependent list box of a model-driven list-of-values configuration whenever the list is empty. To disable the dependent list, the af:selectOneChoice component needs to be refreshed with every value change of the parent list, which is already the case here since the list boxes are dependent. When you create model-driven lists of values as choice lists in an ADF Faces page, two ADF list bindings are implicitly created in the PageDef file of the page that hosts the input form. At runtime, a list binding is an instance of FacesCtrlListBinding, which exposes getItems() as a method to access a list of available child data (java.util.List). Using Expression Language, the list is accessible with

        #{bindings.list_attribute_name.items}

    To dynamically set the disabled property on the dependent af:selectOneChoice component, however, you need a managed bean that exposes the following two methods:

        // empty - but required - setter method
        public void setIsEmpty(boolean isEmpty) {}

        // the method that returns true/false when the list is empty or has values
        public boolean isIsEmpty() {
            FacesContext fctx = FacesContext.getCurrentInstance();
            ELContext elctx = fctx.getELContext();
            ExpressionFactory exprFactory = fctx.getApplication().getExpressionFactory();
            ValueExpression vexpr = exprFactory.createValueExpression(elctx,
                                        "#{bindings.EmployeeId.items}",
                                        Object.class);
            List employeesList = (List) vexpr.getValue(elctx);
            return employeesList.isEmpty() ? true : false;
        }

    If referenced from the dependent choice list, as shown below, the list is disabled whenever it contains no list data:

        <!-- master list -->
        <af:selectOneChoice value="#{bindings.DepartmentId.inputValue}"
                            label="#{bindings.DepartmentId.label}"
                            required="#{bindings.DepartmentId.hints.mandatory}"
                            shortDesc="#{bindings.DepartmentId.hints.tooltip}"
                            id="soc1" autoSubmit="true">
            <f:selectItems value="#{bindings.DepartmentId.items}" id="si1"/>
        </af:selectOneChoice>

        <!-- dependent list -->
        <af:selectOneChoice value="#{bindings.EmployeeId.inputValue}"
                            label="#{bindings.EmployeeId.label}"
                            required="#{bindings.EmployeeId.hints.mandatory}"
                            shortDesc="#{bindings.EmployeeId.hints.tooltip}"
                            id="soc2" disabled="#{lovTestbean.isEmpty}"
                            partialTriggers="soc1">
            <f:selectItems value="#{bindings.EmployeeId.items}" id="si2"/>
        </af:selectOneChoice>

    Read the article

  • Day 3 of Oracle OpenWorld 2012 October 2nd

    - by Maria Colgan
    Hopefully you enjoyed yesterday, the first full day of technical sessions at Oracle OpenWorld, and are ready for more today! Today we give our first technical session, Oracle Optimizer: Harnessing the Power of Optimizer Hints (Session CON8455), at 1:15 pm in Moscone South, room 103. In this session we will discuss in detail how Optimizer hints are interpreted, when they should be used, why they appear to be ignored, and what you can do if you have inherited a hint-ridden application. The Optimizer team will also be at the Oracle Database Demogrounds all day. The Demogrounds open at 9:45 am and run until 6 pm, so stop by and find out what's new with the Optimizer and the statistics that feed it. Don't forget to pick up your Optimizer bumper sticker while you are there! +Maria Colgan

    Read the article

  • Established coding standards for pl/pgsql code

    - by jb01
    I need to standardize coding practices for a project that comprises, among other things, a PL/pgSQL database with a fair amount of nontrivial code. I am looking for:

    • Code formatting guidelines, especially inside procedures.
    • Guidelines on which constructs are considered unsafe (if any).
    • Naming conventions.
    • Code documentation conventions (if this is practised).

    Any pointers to documents that define good practices for PL/pgSQL code? If not, I'm looking for hints about practices that you consider good. There is a related question regarding T-SQL: Can anyone recommend coding standards for TSQL?, which is relevant to PL/pgSQL as well, but I need more information on stored procedures. Other related questions: http://stackoverflow.com/questions/1070275/what-indenting-style-do-you-use-in-sql-server-stored-procedures
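    For illustration only, a small function written in one commonly seen PL/pgSQL style (p_ prefixes for parameters, v_ for locals, uppercase keywords, right-aligned clause keywords); the table and column names are made up:

        CREATE OR REPLACE FUNCTION get_order_total(p_order_id integer)
        RETURNS numeric AS $$
        DECLARE
            v_total numeric := 0;   -- v_ prefix marks a local variable
        BEGIN
            SELECT COALESCE(SUM(line_total), 0)
              INTO v_total
              FROM order_lines
             WHERE order_id = p_order_id;

            RETURN v_total;
        END;
        $$ LANGUAGE plpgsql STABLE;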

    Read the article

  • How much help should I give during technical interviews?

    - by kojiro
    I'm asked to perform or sit in during many technical interviews. We ask logic questions and simple programming problems that the interviewee is expected to be able to solve on paper. (I would rather they have access to a keyboard, but that is a problem for another time.) Sometimes I sense that people do know how to approach a problem, but they are hung up by nervousness or some second-guessing of the question (they aren't intended to be trick questions). I've never heard my boss give any help or hints. He just thanks the interviewee for the response (no matter how good or bad it is) and moves on to the next question or problem. But I know something about the rabbit hole that defeat and nerves can lead you down, and how it disables your mind, and I can't help wondering if providing a little help now and then would ultimately help us end up with more capable programmers instead of more failed interviews. Should I provide hints and assistance for befuddled interviewees (and if so, how far should I go while still being fair to the more prepared candidates)?

    Read the article

  • NetBeans IDE 7.3 Knows Null

    - by Geertjan
    What's the difference between these two methods, "test1" and "test2"?

        public int test1(String str) {
            return str.length();
        }

        public int test2(String str) {
            if (str == null) {
                System.err.println("Passed null!.");
                // forgotten return;
            }
            return str.length();
        }

    The difference, or at least the difference that is relevant for this blog entry, is that whoever wrote "test2" apparently thinks that the variable "str" may be null, but did not provide a null check. In NetBeans IDE 7.3, you see a hint for "test2" but no hint for "test1", since in the latter case we don't know anything about the developer's intention for the variable, and providing a hint there would flood the source code with too many false positives.

    Annotations are supported in understanding how a piece of code is intended to be used. If method return types use @Nullable, @NullAllowed, or @CheckForNull, the value is considered "strongly possible to be null", as is a value that has been tested against null, as shown above. When using @NotNull, @NonNull, or @Nonnull, the value is considered to be non-null. (The exact FQNs of the annotations are ignored; only simple names are checked.)

    Here are examples showing where the hints are displayed for the non-null hints (the "strongly possible to be null" hints are not shown below, though you can see one of them in the screenshot above), together with a comment showing what is shown when you hover over the hint.

    There isn't a "one size fits all" refactoring for these various instances relating to null checks, hence you can't do an automated refactoring across your code base via tools in NetBeans IDE, as shown yesterday for class member reordering across code bases. However, you can instead go to Source | Inspect and then scan throughout a scope (e.g., current file/package/project, combinations of these, or all open projects) for class elements that the IDE identifies as potentially having a problem in this area.

    Thanks to Jan Lahoda for help with the info above; any misunderstandings are my own fault. Jan reports that this currently also works in NetBeans IDE 7.3 dev builds for fields, but that that may need to be disabled since right now too many false positives are returned.
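    A hedged sketch of the annotation behavior described above, assuming the NetBeans annotations-common library is on the classpath (per the post, only the simple names matter, so other annotation libraries with the same names should behave the same way):

        // One of several annotation libraries the IDE recognizes by simple name.
        import org.netbeans.api.annotations.common.NonNull;
        import org.netbeans.api.annotations.common.NullAllowed;

        public class NullnessExamples {

            // @NonNull: the IDE treats str as never null, so no hint here.
            public int lengthOf(@NonNull String str) {
                return str.length();
            }

            // @NullAllowed: the value is "strongly possible to be null", so
            // dereferencing it without a check draws a hint.
            public int lengthOfAllowed(@NullAllowed String str) {
                return str.length(); // hint: str may be null
            }
        }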

    Read the article

  • How does this main domain have a CNAME record?

    - by TRiG
    I was under the impression that only subdomains could have CNAME records: main domains need to define all their own records. However, apt-get.com seems to have only a CNAME record. How can this work?

        $ dig apt-get.com

        ; <<>> DiG 9.8.1-P1 <<>> apt-get.com
        ;; global options: +cmd
        ;; Got answer:
        ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 45743
        ;; flags: qr rd ra; QUERY: 1, ANSWER: 9, AUTHORITY: 0, ADDITIONAL: 0

        ;; QUESTION SECTION:
        ;apt-get.com.               IN  A

        ;; ANSWER SECTION:
        apt-get.com.        86336   IN  CNAME   thie5ku9.dsgeneration.com.
        thie5ku9.dsgeneration.com. 60 IN A      208.73.211.242
        thie5ku9.dsgeneration.com. 60 IN A      208.73.211.246
        thie5ku9.dsgeneration.com. 60 IN A      208.73.211.166
        thie5ku9.dsgeneration.com. 60 IN A      208.73.211.232
        thie5ku9.dsgeneration.com. 60 IN A      208.73.211.161
        thie5ku9.dsgeneration.com. 60 IN A      208.73.210.233
        thie5ku9.dsgeneration.com. 60 IN A      208.73.211.186
        thie5ku9.dsgeneration.com. 60 IN A      208.73.211.188

        ;; Query time: 59 msec
        ;; SERVER: 127.0.0.1#53(127.0.0.1)
        ;; WHEN: Tue Jun 10 15:05:48 2014
        ;; MSG SIZE  rcvd: 193

        $ dig apt-get.com ns

        ; <<>> DiG 9.8.1-P1 <<>> apt-get.com ns
        ;; global options: +cmd
        ;; Got answer:
        ;; ->>HEADER<<- opcode: QUERY, status: SERVFAIL, id: 43831
        ;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 0

        ;; QUESTION SECTION:
        ;apt-get.com.               IN  NS

        ;; Query time: 26 msec
        ;; SERVER: 127.0.0.1#53(127.0.0.1)
        ;; WHEN: Tue Jun 10 15:12:37 2014
        ;; MSG SIZE  rcvd: 29

        $ dig apt-get.com ns @b.gtld-servers.net

        ; <<>> DiG 9.8.1-P1 <<>> apt-get.com ns @b.gtld-servers.net
        ;; global options: +cmd
        ;; Got answer:
        ;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 38228
        ;; flags: qr rd; QUERY: 1, ANSWER: 0, AUTHORITY: 2, ADDITIONAL: 2
        ;; WARNING: recursion requested but not available

        ;; QUESTION SECTION:
        ;apt-get.com.               IN  NS

        ;; AUTHORITY SECTION:
        apt-get.com.        172800  IN  NS  ns1.domainrecover.com.
        apt-get.com.        172800  IN  NS  ns2.domainrecover.com.

        ;; ADDITIONAL SECTION:
        ns1.domainrecover.com. 172800 IN A  66.45.232.66
        ns2.domainrecover.com. 172800 IN A  65.23.159.179

        ;; Query time: 70 msec
        ;; SERVER: 192.33.14.30#53(192.33.14.30)
        ;; WHEN: Tue Jun 10 15:07:05 2014
        ;; MSG SIZE  rcvd: 111

    The domain does resolve.
    I get the following headers:

        GET / HTTP/1.1
        User-Agent: Testing_Sniffer/4.15
        Host: apt-get.com
        Accept: */*

        HTTP/1.0 200 (OK)
        Cache-Control: private, no-cache, must-revalidate
        Connection: Keep-Alive
        Pragma: no-cache
        Server: Oversee Turing v1.0.0
        Content-Length: 1347
        Content-Type: text/html
        Expires: Mon, 26 Jul 1997 05:00:00 GMT
        Keep-Alive: timeout=3, max=96
        P3P: policyref="http://www.dsparking.com/w3c/p3p.xml", CP="NOI DSP COR ADMa OUR NOR STA"
        Set-Cookie: parkinglot=1; domain=.apt-get.com; path=/; expires=Wed, 11-Jun-2014 14:10:37 GMT

        <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Frameset//EN" "http://www.w3.org/TR/html4/frameset.dtd">
        <!-- turing_cluster_prod -->
        <html>
        <head>
          <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
          <title>apt-get.com</title>
          <meta name="keywords" content="apt-get.com" />
          <meta name="description" content="apt-get.com" />
          <meta name="robots" content="index, follow" />
          <meta name="revisit-after" content="10" />
          <meta name="viewport" content="width=device-width, initial-scale=1.0" />
          <script type="text/javascript"> document.cookie = "jsc=1"; </script>
        </head>
        <frameset rows="100%,*" frameborder="no" border="0" framespacing="0">
          <frame src="http://apt-get.com?epl=5PfLSSqWrYDAt-gbwMDK_rA3b1UJCYVTJHfxTzr9FTDQV84b6vAgVhU3FTeCRQNiuRNv79Ni0V3mkEVNRhpqo2gpMjp5iOIR1w2_EISPENaqzoXohVXl2QI3ryXlRCB4FaIIaxynnWXWY6QBgBgNiIZ6agD1NBoNGg0ajXpUCXUAIJDer78AAOB_AwAAQIDbCwAAe_NWlVlTJllBMTZoWkKPAAAA8A" name="apt-get.com">
        </frameset>
        <noframes>
          <body><a href="http://apt-get.com?epl=5PfLSSqWrYDAt-gbwMDK_rA3b1UJCYVTJHfxTzr9FTDQV84b6vAgVhU3FTeCRQNiuRNv79Ni0V3mkEVNRhpqo2gpMjp5iOIR1w2_EISPENaqzoXohVXl2QI3ryXlRCB4FaIIaxynnWXWY6QBgBgNiIZ6agD1NBoNGg0ajXpUCXUAIJDer78AAOB_AwAAQIDbCwAAe_NWlVlTJllBMTZoWkKPAAAA8A">Click here to go to apt-get.com</a>.</body>
        </noframes>
        </html>

    Read the article

  • Software Architecture

    - by Roger
    I have a question about software architecture; can anyone help me or give me some hints? Currently, I have a J2EE project which is deployed on a server. I also have a Java Standard Edition (J2SE) project that should run 24 hours a day, 7 days a week, to monitor something. It cannot run separately, because the Java project shares some of the same classes (such as Java bean classes) with the J2EE project. Maybe my design is not correct; can anyone suggest what I should do? Use SOA? Is that correct? My current solution is to run this Java project from a bash script, but I don't think that is the best idea. I list my class packages:

        com.company.alteck
        com.company.altronics
        com.company.gamming
        com.company.jaycar
        com.company.jup
        com.company.rpg
        com.company.sansai
        com.company.wiretech
        com.company.yatsal
        com.ebay.api
        com.ebay.bean
        com.ebay.credential
        com.ozsstock.finals
        com.ozstock.adapter
        com.ozstock.aspectj
        com.ozstock.model
        com.ozstock.persistence
        com.ozstock.service
        com.ozstock.suppliers

    My structure is like this: all the packages containing "company" should run separately, but they depend on the model bean classes. Can anyone give me some hints on how to redesign this?

    Read the article

  • How can I handle parameterized queries in Drupal?

    - by Anthony Gatlin
    We have a client who is currently using Lotus Notes/Domino as their content management system and web server. For many reasons, we are recommending they sunset their Notes/Domino implementation and transition onto a more modern platform, such as Drupal. The client has several web applications which would be a natural fit for Drupal. However, I am unsure of the best way to implement one of the web applications in Drupal. I am running into a knowledge barrier and wondered if any of you could fill in the gaps.

    Situation

    The client has a Lotus Domino application which serves as a front-end for querying a large DB2 data store and returning a result set (generally in table form) to a user via the web. The web application provides access to approximately 100 pre-defined queries, 50 of which are public and 50 of which are secured. Most of the queries accept some set of user-selected parameters as input. The output of the queries is typically returned to users in a list (table) format. A limited number of result sets allow drill-down through the HTML table into detail records. The query parameters often involve database queries themselves. For example, a single query may pull a list of company divisions into a drop-down. Once a division is selected, a second drop-down with the departments from that division is populated, but perhaps only departments which meet some special criteria, such as those having taken a loss within a specific time frame. Most queries have 2-4 parameters, with the average probably being 3. The application involves no data entry. None of the back-end data is ever modified by the web application. All access is purely based around querying data and viewing results. The queries change relatively infrequently, and the current system has been in place for approximately 10 years. There may be 10-20 query additions, modifications, or other changes in a given year. The client simply desires to change the presentation platform but absolutely does not want to re-do the 100 database queries. Once the project is implemented, the client wants their staff to take over and manage future changes. The client's staff have no background in Drupal or PHP but are somewhat willing to learn as necessary.

    How would you transition this into Drupal? My major knowledge void relates to how we would manage the query parameters and access the queries themselves. Here are a few specific questions, but feel free to chime in on any issue related to this implementation:

    1. Would we have to build 100 forms by hand, with each form containing the parameters for a given query? If so, how would we do this? Approximately how long would it take to build/configure each of these forms?
    2. Is there a better way than manually building 100 forms? (I understand using CCK to enter data into custom content types, but since we aren't adding any nodes, I am a little stuck as to how this might work.)
    3. Would it be possible for the internal staff to learn to create these query parameter forms, even if they are unfamiliar with Drupal today? Would they be required to do any PHP programming?
    4. How would we take the query parameters from a form and execute a query against DB2? Would this require a custom module? If so, would it require one module total or one module per query? (Note: There is apparently a DB2 driver available for Drupal. See http://groups.drupal.org/node/5511.)
Note: I am not looking for CMS recommendations other than Drupal as Drupal nicely fits all of the client's other requirements, and I hope to help them standardize on a single platform. Any assistance you can provide would be helpful. Thank you in advance for your help!
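    As a hedged sketch of the general Drupal pattern in play here (Drupal 7's Form API plus a placeholder-based db_query()); every module, table, field, and helper name below is invented for illustration:

        <?php
        // mymodule.module - hypothetical module name.

        // Register a page that renders one parameterized-report form.
        function mymodule_menu() {
          $items['reports/division-loss'] = array(
            'title' => 'Division loss report',
            'page callback' => 'drupal_get_form',
            'page arguments' => array('mymodule_division_loss_form'),
            'access arguments' => array('access content'),
          );
          return $items;
        }

        // One form = one pre-defined query's parameters.
        function mymodule_division_loss_form($form, &$form_state) {
          $form['division'] = array(
            '#type' => 'select',
            '#title' => t('Division'),
            // Populated by its own query (stub helper, not shown).
            '#options' => mymodule_division_options(),
          );
          $form['submit'] = array('#type' => 'submit', '#value' => t('Run report'));
          return $form;
        }

        function mymodule_division_loss_form_submit($form, &$form_state) {
          // Placeholders keep the query parameterized; no string concatenation.
          $result = db_query(
            'SELECT department, total FROM {loss_summary} WHERE division = :div',
            array(':div' => $form_state['values']['division'])
          );
          // ... render $result as a table ...
        }

    Under this pattern, each of the 100 queries becomes one small form-plus-submit pair, which staff could plausibly learn to copy and adapt, though that does involve some PHP.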

    Read the article

  • SQL - date variable isn't being parsed correctly?

    - by Bill Sambrone
    I am pulling a list of invoices filtered by a starting and ending date, and further filtered by type of invoice, from a SQL table. When I specify a range of 2013-07-01 through 2013-09-30, I receive 2 invoices per company when I expect 3. When I use the built-in "select top 1000" query in SSMS and add my date filters, all the expected invoices appear. Here is the fancy query that I'm using, which takes variables that are fed in:

        DECLARE @ReportStart datetime
        DECLARE @ReportStop datetime
        SET @ReportStart = '2013-07-01'
        SET @ReportStop = '2013-09-30'

        SELECT Entity_Company.CompanyName,
               Reporting_AgreementTypes.Description,
               Reporting_Invoices.InvoiceAmount,
               ISNULL(Reporting_ProductCost.ProductCost, 0),
               (Reporting_Invoices.InvoiceAmount - ISNULL(Reporting_ProductCost.ProductCost, 0)),
               (Reporting_AgreementTypes.Description + Entity_Company.CompanyName),
               Reporting_Invoices.InvoiceDate
        FROM Reporting_Invoices
        JOIN Entity_Company ON Entity_Company.ClientID = Reporting_Invoices.ClientID
        LEFT JOIN Reporting_ProductCost ON Reporting_ProductCost.InvoiceNumber = Reporting_Invoices.InvoiceNumber
        JOIN Reporting_AgreementTypes ON Reporting_AgreementTypes.AgreementTypeID = Reporting_Invoices.AgreementTypeID
        WHERE Reporting_Invoices.AgreementTypeID = (SELECT AgreementTypeID FROM Reporting_AgreementTypes WHERE Description = 'Resold Services')
          AND Reporting_Invoices.InvoiceDate >= @ReportStart
          AND Reporting_Invoices.InvoiceDate <= @ReportStop
        ORDER BY CompanyName, InvoiceDate

    The above only returns 2 invoices per company. When I run a much more basic query through SSMS, I get 3 as expected, which looks like:

        SELECT TOP 1000 [InvoiceID],
               [AgreementID],
               [AgreementTypeID],
               [InvoiceDate],
               [Comment],
               [InvoiceAmount],
               [InvoiceNumber],
               [TicketID],
               Entity_Company.CompanyName
        FROM Reporting_Invoices
        JOIN Entity_Company ON Entity_Company.ClientID = Reporting_Invoices.ClientID
        WHERE Entity_Company.ClientID = '9'
          AND AgreementTypeID = (SELECT AgreementTypeID FROM Reporting_AgreementTypes WHERE Description = 'Resold Services')
          AND Reporting_Invoices.InvoiceDate >= '2013-07-01'
          AND Reporting_Invoices.InvoiceDate <= '2013-09-30'
        ORDER BY InvoiceDate DESC

    I've tried stripping down the first query to include only a client ID on the original invoice table, the invoice date, and nothing else. I still get 2 invoices instead of the expected 3. I've also tried manually entering the dates instead of the @ variables, same result. I confirmed that InvoiceDate is defined as a datetime in the table. I've tried making all JOINs FULL JOINs to see if anything is hiding, but no change.
    Here is how I stripped down the original query to keep all other tables out of the mix, and yet I'm still getting only 2 invoices per client ID instead of 3 (I manually entered the ID for the type filter): --DECLARE @ReportStart datetime --DECLARE @ReportStop datetime --SET @ReportStart = '2013-07-01' --SET @ReportStop = '2013-09-30' SELECT --Entity_Company.CompanyName, --Reporting_AgreementTypes.Description, Reporting_Invoices.ClientID, Reporting_Invoices.InvoiceAmount, --ISNULL(Reporting_ProductCost.ProductCost,0), --(Reporting_Invoices.InvoiceAmount - ISNULL(Reporting_ProductCost.ProductCost,0)), --(Reporting_AgreementTypes.Description + Entity_Company.CompanyName), Reporting_Invoices.InvoiceDate FROM Reporting_Invoices --JOIN Entity_Company ON Entity_Company.ClientID = Reporting_Invoices.ClientID --LEFT JOIN Reporting_ProductCost ON Reporting_ProductCost.InvoiceNumber = Reporting_Invoices.InvoiceNumber --JOIN Reporting_AgreementTypes ON Reporting_AgreementTypes.AgreementTypeID = Reporting_Invoices.AgreementTypeID WHERE Reporting_Invoices.AgreementTypeID = '22'-- (SELECT AgreementTypeID FROM Reporting_AgreementTypes WHERE Description = 'Resold Services') AND Reporting_Invoices.InvoiceDate >= '2013-07-01' AND Reporting_Invoices.InvoiceDate <= '2013-09-30' ORDER BY ClientID,InvoiceDate This strikes me as really weird, as it is pretty much the same query as the SSMS-generated one that returns correct results. What am I overlooking? UPDATE I've further refined my "test query" that is returning only 2 invoices per company to help troubleshoot this. Below is the query and a relevant subset of data for 1 company from the appropriate tables: SELECT Reporting_Invoices.ClientID, Reporting_AgreementTypes.Description, Reporting_Invoices.InvoiceAmount, Reporting_Invoices.InvoiceDate FROM Reporting_Invoices JOIN Reporting_AgreementTypes ON Reporting_AgreementTypes.AgreementTypeID = Reporting_Invoices.AgreementTypeID WHERE Reporting_Invoices.AgreementTypeID = (SELECT AgreementTypeID FROM Reporting_AgreementTypes WHERE Description = 'Resold Services') AND Reporting_Invoices.InvoiceDate >= '2013-07-01T00:00:00' AND Reporting_Invoices.InvoiceDate <= '2013-09-30T00:00:00' ORDER BY Reporting_Invoices.ClientID,InvoiceDate The above only returns 2 invoices. Here is the relevant table data:

    Relevant data from Reporting_AgreementTypes:
    AgreementTypeID  Description
    22               Resold Services

    Relevant data from Reporting_Invoices:
    InvoiceID  ClientID  AgreementID  AgreementTypeID  InvoiceDate
    16111      9         757          22               2013-09-30 00:00:00.000
    15790      9         757          22               2013-08-30 00:00:00.000
    15517      9         757          22               2013-07-31 00:00:00.000

    Actual results from my new modified query:
    ClientID  Description      InvoiceAmount  InvoiceDate
    9         Resold Services  3513.79        7/31/13 00:00:00
    9         Resold Services  3570.49        8/30/13 00:00:00
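    One thing worth ruling out (offered as a hedged suggestion, not a confirmed diagnosis): with a datetime column, an inclusive upper bound of '2013-09-30' cuts off at midnight at the start of the 30th, so any invoice stamped later that day is silently excluded. The sample rows above all show midnight timestamps, but if production data ever carries a time of day, the usual half-open range pattern avoids the problem entirely:

    DECLARE @ReportStart datetime
    DECLARE @ReportStop datetime
    SET @ReportStart = '2013-07-01'
    SET @ReportStop = '2013-09-30'

    SELECT Reporting_Invoices.ClientID,
           Reporting_Invoices.InvoiceAmount,
           Reporting_Invoices.InvoiceDate
    FROM Reporting_Invoices
    WHERE Reporting_Invoices.AgreementTypeID = 22 -- 'Resold Services' per the data above
      AND Reporting_Invoices.InvoiceDate >= @ReportStart
      -- strictly less than the day AFTER the stop date, so all of
      -- 2013-09-30 is included regardless of any time component:
      AND Reporting_Invoices.InvoiceDate < DATEADD(DAY, 1, @ReportStop)
    ORDER BY Reporting_Invoices.ClientID, InvoiceDate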

    Read the article

  • PHP setting cookies in a child class

    - by steve
    I am writing a custom session handler and for the life of me I cannot get a cookie to set in it. I'm not outputting anything to the browser before I set the cookie but it still doesn't work. It's killing me. The cookie will set if I set it in the script where I define and call the session handler. If necessary I will post more code. Any ideas, people?

    <?php
    /* require the needed classes, comment out what is not needed */
    require_once("classes/sessionmanager.php");
    require_once("classes/template.php");
    require_once("classes/database.php");
    $title=" "; //titlebar of the web browser
    $description=" ";
    $keywords=" "; //meta keywords
    $menutype="default"; //default or customer, customer is elevated
    $pagetitle="dflsfsf "; //title of the webpage
    $pagebody=" "; //body of the webpage
    $template=template::def_instance();
    $database=database::def_instance();
    $session=sessionmanager::def_instance();
    $session->sessions();
    session_start();
    ?>

    and this is the one that actually sets the cookie for the session:

    function write($session_id,$session_data)
    {
        $session_id = mysql_real_escape_string($session_id);
        $session_data = mysql_real_escape_string(serialize($session_data));
        $expires = time() + 3600;
        $user_ip = $_SERVER['REMOTE_ADDR'];
        $bol = FALSE;
        $time = time();
        $newsession = FALSE;
        $auth = FALSE;
        $query = "SELECT * FROM `sessions` WHERE `expires` > '$time'";
        $sessions_result = $this->query($query);
        $newsession = $this->newsession_check($session_id,$sessions_result);
        while(($sessions_array = mysql_fetch_array($sessions_result)) && $auth == FALSE)
        {
            $sessions_array = $this->strip($sessions_array);
            $auth = $this->auth_check($sessions_array,$session_id);
        }
        /* this is an authentic session. build queries and update it */
        if($auth == TRUE && $newsession == FALSE)
        {
            $update_query1 = "UPDATE `sessions` SET `user_ip` = '$user_ip' WHERE `session_id` = '$session_id'";
            $update_query2 = "UPDATE `sessions` SET `data` = '$session_data' WHERE `session_id` = '$session_id'";
            $update_query3 = "UPDATE `sessions` SET `expires` = '$expires' WHERE `session_id` = '$session_id'";
            $this->query($update_query1);
            $this->query($update_query2);
            $this->query($update_query3);
            $bol = TRUE;
        }
        elseif($newsession == TRUE)
        {
            /* this is a new session, build and create it */
            $random_number = $this->obtain_random();
            $cookieval = hash("sha512",$random_number);
            setcookie("rndn",$cookieval);
            $query = "INSERT INTO sessions VALUES('$session_id','0','$user_ip','$random_number','$session_data','$expires')";
            $this->query($query);
            //echo $cookieval."this is the cookie <<";
            $bol = TRUE;
        }
        return $bol;
    }
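    A likely culprit, for what it's worth: PHP does not execute a custom session handler's write callback until after the output stream is closed, so by the time write() calls setcookie() the headers have already gone out and the cookie is silently dropped. A minimal sketch of one workaround, reusing the class names from the script above: flush the session explicitly before producing any output, so that write() runs while headers can still be sent.

    <?php
    // Sketch: force the write handler to run early, while headers can
    // still be sent, by closing the session before any output.
    require_once("classes/sessionmanager.php");
    require_once("classes/database.php");

    $session = sessionmanager::def_instance();
    $session->sessions();
    session_start();

    // ... modify $_SESSION as needed, but emit no output yet ...

    session_write_close(); // runs write() now; setcookie() inside it can succeed
    // from this point on it is safe to start rendering the page

    The alternative is to keep header-dependent calls like setcookie() out of the handler altogether and set the cookie in the page script before any output.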

    Read the article

  • Best way to program a call to php

    - by hairdresser-101
    I've recently posted here http://stackoverflow.com/questions/2627645/accessing-session-when-using-file-get-contents-in-php about a problem I was having, and the general consensus is that I'm not doing it right... while I generally think "as long as it works...", I thought I'd get some feedback on how I could do it better... I need to send the exact same email in the exact same format from multiple different areas: when a job is entered (automatically, as part of the POST), and manually when reviewing jobs to re-assign to another installer. The original script is a PHP page which is called using AJAX to send the work order request - this worked by simply calling a standard PHP page, returning the success or error message and then displaying it within the calling page. Now I have tried to use the same page within the automated job entry so it accepts the job via a form, logs it and mails it. My problem is (as you can see from the original post) that the function file_get_contents() is not a good fit for the automated script... My problem is that from an AJAX call I need to do things like include the database connection initialiser, start the session and do whatever else needs to be done in a standalone page... Some or all of these are not required if it is an include, so the file is only good for one purpose... How do I make the file good for both purposes? I guess I'm looking for recommendations for the best file layout and structure to cater for both scenarios... The current file looks like:

    <?php
    session_start();
    $order_id = $_GET['order_id'];
    include('include/database.php');
    function getLineItems($order_id) {
        $query = mysql_query("SELECT ...lineItems...");
        //Print rows with data
        while($row = mysql_fetch_object($query)) {
            $lineItems .= '...Build Line Item String...';
        }
        return $lineItems;
    }
    function send_email($order_id) {
        //Get data for current job to display
        $query = mysql_query("SELECT ...Job Details...");
        $row = mysql_fetch_object($query);
        $subject = 'Work Order Request';
        $email_message = '...Build Email...
        ...Include Job Details...
        '.getLineItems($order_id).'
        ...Finish Email...';
        $headers = '...Create Email Headers...';
        if (mail($row->primary_email, $subject, $email_message, $headers)) {
            $query = mysql_query("...log successful send...");
            if (mysql_error()!="") {
                $message .= '...display mysqlerror()..';
            }
            $message .= '...create success message...';
        } else {
            $query = mysql_query("...log failed send...");
            if (mysql_error()!="") {
                $message .= '...display mysqlerror()..';
            }
            $message .= '...create failed message...';
        }
        return $message;
    } // END send_email() function
    //Check supplier info
    $query = mysql_query("...get suppliers info attached to order_id...");
    if (mysql_num_rows($query) > 0) {
        while($row = mysql_fetch_object($query)) {
            if ($row->primary_email=="") {
                $message .= '...no email message...';
            } else if ($row->notification_email=="") {
                $message .= '...no notifications message...';
            } else {
                $message .= send_email($order_id);
            }
        }
    } else {
        $message .= '...no supplier matched message...';
    }
    print $message;
    ?>
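    A common pattern for this situation, sketched below with hypothetical file and function names: isolate the mail-building logic in an include that produces no output and performs no bootstrap, then give each caller its own thin entry point that does whatever setup it needs.

    <?php
    // lib/workorder.php -- pure library, safe to require from anywhere.
    // Assumes a database connection has already been established.
    function send_work_order_email($order_id) {
        $message = '';
        $order_id = (int) $order_id;
        // ... build line items, send the mail, append status text to $message ...
        return $message; // success or error text for the caller to display
    }

    // ajax/send_workorder.php -- standalone endpoint for the AJAX call.
    session_start();
    require_once('../include/database.php');
    require_once('../lib/workorder.php');
    echo send_work_order_email($_GET['order_id']);

    // In the automated job-entry script, skip file_get_contents() entirely
    // and call the same function in-process:
    //   require_once('lib/workorder.php');
    //   $message = send_work_order_email($new_order_id);

    The point of the split is that neither caller has to fake an HTTP request to reuse the logic: the AJAX page stays a thin wrapper, and the POST handler simply calls the function.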

    Read the article

  • SQL Server 2008 R2 Reporting Services - The Word is But a Stage (T-SQL Tuesday #006)

    - by smisner
    Host Michael Coles (blog|twitter) has selected LOB data as the topic for this month's T-SQL Tuesday, so I'll take this opportunity to post an overview of reporting with spatial data types. As part of my work with SQL Server 2008 R2 Reporting Services, I've been exploring the use of spatial data types in the new map data region. You can create a map using any of the following data sources: Map Gallery - a set of Shapefiles for the United States only that ships with Reporting Services ESRI Shapefile - a .shp file conforming to the Environmental Systems Research Institute, Inc. (ESRI) shapefile spatial data format SQL Server spatial data - a query that includes SQLGeography or SQLGeometry data types Rob Farley (blog|twitter) points out today in his T-SQL Tuesday post that using the SQL geography field is a preferable alternative to ESRI shapefiles for storing spatial data in SQL Server. So how do you get spatial data? If you don't already have a GIS application in-house, you can find a variety of sources. Here are a few to get you started: US Census Bureau Website, http://www.census.gov/geo/www/tiger/ Global Administrative Areas Spatial Database, http://biogeo.berkeley.edu/gadm/ Digital Chart of the World Data Server, http://www.maproom.psu.edu/dcw/ In a recent post by Pinal Dave (blog|twitter), you can find a link to free shapefiles for download and a tutorial for using Shape2SQL, a free tool to convert shapefiles into SQL Server data. In my post today, I'll show you how to use combine spatial data that describes boundaries with spatial data in AdventureWorks2008R2 that identifies stores locations to embed a map in a report. Preparing the spatial data First, I downloaded Shapefile data for the administrative boundaries in France and unzipped the data to a local folder. Then I used Shape2SQL to upload the data into a SQL Server database called Spatial. I'm not sure of the reason why, but I had to uncheck the option to create a spatial index to upload the data. Otherwise, the upload appeared to run successfully, but no table appeared in my database. The zip file that I downloaded contained three files, but I didn't know what was in them until I used Shape2SQL to upload the data into tables. Then I found that FRA_adm0 contains spatial data for the country of France, FRA_adm1 contains spatial data for each region, and FRA_adm2 contains spatial data for each department (a subdivision of region). Next I prepared my SQL query containing sales data for fictional stores selling Adventure Works products in France. The Person.Address table in the AdventureWorks2008R2 database (which you can download from Codeplex) contains a SpatialLocation column which I joined - along with several other tables - to the Sales.Customer and Sales.Store tables. I'll be able to superimpose this data on a map to see where these stores are located. I included the SQL script for this query (as well as the spatial data for France) in the downloadable project that I created for this post. Step 1: Using the Map Wizard to Create a Map of France You can build a map without using the wizard, but I find it's rather useful in this case. Whether you use Business Intelligence Development Studio (BIDS) or Report Builder 3.0, the map wizard is the same. I used BIDS so that I could create a project that includes all the files related to this post. To get started, I added an empty report template to the project and named it France Stores. 
    Then I opened the Toolbox window and dragged the Map item to the report body, which starts the wizard. Here are the steps to perform to create a map of France: On the Choose a source of spatial data page of the wizard, select SQL Server spatial query, and click Next. On the Choose a dataset with SQL Server spatial data page, select Add a new dataset with SQL Server spatial data. On the Choose a connection to a SQL Server spatial data source page, select New. In the Data Source Properties dialog box, on the General page, add a connection string like this (changing your server name if necessary): Data Source=(local);Initial Catalog=Spatial Click OK and then click Next. On the Design a query page, add a query for the country shape, like this: select * from fra_adm1 Click Next. The map wizard reads the spatial data and renders it for you on the Choose spatial data and map view options page, as shown below. You have the option to add a Bing Maps layer which shows surrounding countries. Depending on the type of Bing Maps layer that you choose to add (Road, Aerial, or Hybrid) and the zoom percentage you select, you can view city names, roads, and various boundaries. To keep from cluttering my map, I'm going to omit the Bing Maps layer in this example, but I do recommend that you experiment with this feature. It's a nice integration feature. Use the + or - button to resize the map as needed. I used the + button to increase the size of the map until its edges were just inside the boundaries of the visible map area, which is called the viewport. You can eliminate the color scale and distance scale boxes that appear in the map area later. Select Embed map data in this report for faster rendering. The spatial data won't be changing, so there's no need to leave it in the database. However, it does increase the size of the RDL. Click Next. On the Choose map visualization page, select Basic Map. We'll add data for visualization later. For now, we have just the outline of France to serve as the foundation layer for our map. Click Next, and then click Finish. Now click the color scale box in the lower left corner of the map, and press the Delete key to remove it. Then repeat to remove the distance scale box in the lower right corner of the map. Step 2: Add a Map Layer to an Existing Map The map data region allows you to add multiple layers. Each layer is associated with a different dataset. Thus far, we have the spatial data that defines the regional boundaries in the first map layer. Now I'll add another layer for the store locations by following these steps: If the Map Layers window is not visible, click the report body, and then click twice anywhere on the map data region to display it. Click the New Layer Wizard button in the Map Layers window. And then we start over again with the process of choosing a spatial data source. Select SQL Server spatial query, and click Next. Select Add a new dataset with SQL Server spatial data, and click Next. Click New, add a connection string to the AdventureWorks2008R2 database, and click Next. Add a query with spatial data (like the one I included in the downloadable project), and click Next. The location data now appears as another layer on top of the regional map created earlier. Use the + button to resize the map again to fill as much of the viewport as possible without cutting off edges of the map. You might need to drag the map within the viewport to center it properly. Select Embed map data in this report, and click Next.
    On the Choose map visualization page, select Basic Marker Map, and click Next. On the Choose color theme and data visualization page, in the Marker drop-down list, change the marker to diamond. There's no particular reason for a diamond; I think it stands out a little better than a circle on this map. Clear the Single color map checkbox as another way to distinguish the markers from the map. You can of course create an analytical map instead, which would change the size and/or color of the markers according to criteria that you specify, such as sales volume of each store, but I'll save that exploration for another post on another day. Click Finish and then click Preview to see the rendered report. Et voilà...c'est fini. Yes, it's a very simple map at this point, but there are many other things you can do to enhance the map. I'll create a series of posts to explore the possibilities.
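    The store-location query itself ships only in the downloadable project; as a rough sketch (my assumption, not the project's actual script) of the kind of query that returns the SpatialLocation geography column feeding the marker layer from AdventureWorks2008R2:

    SELECT s.Name AS StoreName,
           a.City,
           a.SpatialLocation   -- geography column consumed by the map layer
    FROM Sales.Store AS s
    JOIN Person.BusinessEntityAddress AS bea
      ON bea.BusinessEntityID = s.BusinessEntityID
    JOIN Person.Address AS a
      ON a.AddressID = bea.AddressID
    JOIN Person.StateProvince AS sp
      ON sp.StateProvinceID = a.StateProvinceID
    WHERE sp.CountryRegionCode = 'FR';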

    Read the article

  • Microsoft TechEd 2010 - Day 3 @ Bangalore

    - by sathya
    Microsoft TechEd 2010 - Day 3 @ Bangalore Sorry for my delayed post on day 3: I had to travel from Bangalore to Chennai, so I couldn't write for the past two days. On day 3, as usual, we had a lot of simultaneous tracks with various sessions. This day I chose the Your Data, Our Platform track. It had sessions on the following 5 topics: Developing Data-tier Applications in Visual Studio 2010 - by Sanjay Nagamangalam; SQL Server Query Optimization, Execution and Debugging Query Performance - by Vinod Kumar M; SQL Server Utility - It's about more than 1 SQL Server - by Vinod Kumar Jagannathan; Data Recovery / Consistency with CheckDB - by Vinod Kumar M; Developing with SQL Server Spatial and Deep Dive into Spatial Indexing - by Pinal Dave. Developing Data-tier Applications in Visual Studio 2010 - by Sanjay Nagamangalam This was one of the most superb sessions I have attended. He explained all the concepts in detail with a demo. The key point is the Data-tier Application project, newly introduced in VS2010, with which we can manage our data alongside our application inside Visual Studio itself. We can create the database, tables, procs, views, etc. right there, and when we deploy, it creates a compressed file called a .dacpac which stores all the changes to table schemas, created procs, and so on in that single file, reducing our (developers') effort in preparing deployment scripts and handing them to the DBA. It also has policy configurations which can be managed easily by checking rules, much like in Outlook. For example: if the SQL Server version > 10 then deploy, else don't. This rule means that even if we try to deploy to a SQL Server database with a version less than 10, it will not do it. And if we deploy a .dacpac to a SQL Server production database with the option to upgrade the DB with that dacpac, once everything completes successfully it reports success; otherwise it rolls back to the prior version. Even if it deploys successfully and at a later point you wish to revert to the prior version, you can delete the existing dacpac version so that the database changes revert to the older version. And T-shirts were given out for the good questions asked in the session. SQL Server Query Optimization, Execution and Debugging Query Performance - by Vinod Kumar M This too was one of the best sessions at TechEd. The speaker, Vinod, explained everything very clearly. It was a really useful session, and believe it or not, as far as I know this was the only session in all 3 days of TechEd, apart from the keynote, where the seats were completely full (house full!). People were even standing to attend it. Such a great one it was. The speaker did a deep dive into the query plan and showed what actually causes the problems. It all comes down to understanding how SQL Server executes queries: we think in one way, and SQL Server rarely executes in that way, so we need to understand that first. He also mentioned that two plans might be generated for a single query at a point in time because of parallel processors in the system. The key lies in every query: the query plan contains an Estimated Row Count and an Actual Row Count, and if SQL Server's estimated row count tallies with the actual row count, your performance will be awesome. He shared some tweaks to achieve exactly that. After this, as usual, we had lunch. SQL Server Utility - It's about more than 1 SQL Server - by Vinod Kumar Jagannathan This was more of a DBA's session.
    I'm really sorry, I was totally blank and not interested in attending this session, so I walked out to attend Migrating to the Cloud by Harish Ranganathan (my favorite speaker), but unfortunately that was some other person's session. There the speaker explained how to configure connection strings so that we can connect to the SQL Azure platform from Visual Studio, and he also showed us how to deploy the same into Windows Azure. In between there were a lot of technical problems: the laptop hung, the user account got locked, and he kept switching between systems; also I came in halfway through, so I wasn't able to follow it fully. Meanwhile, since I have an MCTS certification, they gave me a T-shirt with the line 'I am Certified. Are you?' and asked me to wear it. If we wore it we might get spotted and be given some goodies, so on the 3rd day I was wearing that T-shirt. I got spotted by Tarun, who was coordinating the certification activities; he was accompanied by a cameraman, and they interviewed me about the certification. I was shown live at TechEd and seen by 60,000 live viewers. I was really happy about that. Data Recovery / Consistency with CheckDB - by Vinod Kumar M This was also one of the best sessions at TechEd. This guy is really amazing. In front of us he crashed a database and showed how to recover it in 6 different ways for different kinds of failures. He covered the different types of error messages (823, 824, 825), the msdb..suspect_pages table, and DBCC CHECKDB with its different parameters. I am really waiting for his session to be uploaded to the TechEd website. Here is his contact info if you wish to connect with him: Twitter: @vinodk_sql Website: www.ExtremeExperts.com Blog: http://blogs.sqlxml.org/vinodkumar Developing with SQL Server Spatial and Deep Dive into Spatial Indexing - by Pinal Dave Pinal Dave is a king of SQL; he is a SQL MVP and the owner of SQLAuthority.com. He taught spatial databases from the ground up, covering the two spatial types, geometry and geography. Geometry: x and y axes on a planar surface. Geography: a spherical surface measured in degrees (up to 360°), used to represent geographic points on the earth, which makes it easy to draw maps of different kinds. He had a lot of obstacles during his session: rain coming inside the hall, mic wires shorting out because of the rain, and video going off on the display screens. In spite of that, he asked the audience to come to the front rows and managed to deliver a good session without PPTs, and when the displays finally came back on he showed demos of what he had explained orally. It was a really fun-filled, informative session. He gave books to the people who asked good questions or answered his well, and I got one too (a book on Data Mining from Wrox). And finally, after all these things, there was a keynote session to close TechEd, and we all assembled in a big hall where Mr. Ashok Soota, co-founder of Mindtree and around 70 years of age, was invited to give a talk on his successes. He spoke about his past, the companies he moved between and why, his successes, his failures, and what he learned from those failures, as well as the successes and failures of his partnerships. There were some questions for him, like: What is your advice for young entrepreneurs? How did you learn from past failures? What keeps your success going?
    What is your advice on partnerships? How do you choose a partner? And so on. They said that at 7.30 PM there would be a party night, but unfortunately I was not able to attend because I had to catch my train and pack my things before that, so I left at 7 itself. That's it about TechEd! Stay tuned for further technology updates.

    Read the article

  • CDN on Hosted Service in Windows Azure

    - by Shaun
    Yesterday I told Wang Tao, an annoying colleague sitting beside me, how to CDN-enable the static content of his website, which had just been published on Windows Azure. The approach would be: move the static content (images, CSS files, etc.) into blob storage; enable the CDN on his storage account; change the URLs of those static files to the CDN URL. I think these are the common steps when using the CDN. But this morning I found that the new Windows Azure SDK 1.4 and the new Windows Azure Developer Portal had just been announced on the Windows Azure Blog. One of the new features in this release concerns the CDN: we can now enable the CDN not only for a storage account, but for a hosted service as well. With this new feature, the steps I mentioned above become much simpler.   Enable CDN for Hosted Service To enable the CDN for a hosted service we just need to log on to the Windows Azure Developer Portal. Under the "Hosted Services, Storage Accounts & CDN" item we will find a new menu on the left-hand side labeled "CDN", where we can manage the CDN for storage accounts and hosted services. As we can see, the hosted services and storage accounts are all listed under my subscriptions. Enabling a CDN for a hosted service is very simple: just select a hosted service and click the New Endpoint button at the top. In this dialog we can select the subscription and the storage account or hosted service for which we want the CDN enabled. If we selected a hosted service, like I did in the image above, the "Source URL for the CDN endpoint" will be shown automatically. This means the Windows Azure platform will treat all content under the "/cdn" folder as CDN-enabled. We cannot change this value at the moment. The 3 checkboxes next to the URL are: Enable CDN: enable or disable the CDN. HTTPS: check it if we need to use HTTPS connections. Query String: check it if we are caching content from a hosted service and using query strings to specify the content to be retrieved. Just click the "Create" button to let Windows Azure create the CDN for our hosted service. The CDN would be available within 60 minutes, as Microsoft mentioned; in my experience it could be used after about 15 minutes, and we can find the CDN URL in the portal as well.   Put the Content in CDN in Hosted Service Let's create a simple Windows Azure project in Visual Studio with an MVC 2 Web Role. When we created the CDN above, the source URL of the CDN endpoint pointed at the "/cdn" folder. So in Visual Studio we create a folder under the website named "cdn" and put some static files there. All these files will then be cached by the CDN when we use the CDN endpoint. The CDN of the hosted service can also cache "dynamic" results when the Query String feature is enabled. We create a controller named CdnController with a GetNumber action in it. The routed URL of this controller would be /Cdn/GetNumber, which can be CDN-ed as well since the URL places it under the "/cdn" folder. In the GetNumber action we just put a number value, specified by a parameter, into the view model, so the URL could be like /Cdn/GetNumber?number=2.
    using System;
    using System.Collections.Generic;
    using System.Linq;
    using System.Web;
    using System.Web.Mvc;

    namespace MvcWebRole1.Controllers
    {
        public class CdnController : Controller
        {
            //
            // GET: /Cdn/

            public ActionResult GetNumber(int number)
            {
                return View(number);
            }
        }
    }

    And we add a view to display the number, which is super simple.

    <%@ Page Title="" Language="C#" MasterPageFile="~/Views/Shared/Site.Master" Inherits="System.Web.Mvc.ViewPage<int>" %>

    <asp:Content ID="Content1" ContentPlaceHolderID="TitleContent" runat="server">
    GetNumber
    </asp:Content>

    <asp:Content ID="Content2" ContentPlaceHolderID="MainContent" runat="server">

    <h2>The number is: <%: Model.ToString() %></h2>

    </asp:Content>

    Since this action is under the CdnController, the URL is under the "/cdn" folder, which means it can be CDN-ed. And since we checked "Query String", the content of this dynamic page will be cached keyed by its query string. So if I use the CDN URL, http://az25311.vo.msecnd.net/GetNumber?number=2, the CDN will first check whether there is any content cached under the key "GetNumber?number=2". If yes, the CDN returns the content directly; otherwise it connects to the hosted service, http://aurora-sys.cloudapp.net/Cdn/GetNumber?number=2, sends the result back to the browser, and caches it in the CDN. But note that the query string is treated as a plain string when used as the key of a CDN element. This means the URLs below would be cached as 2 separate elements in the CDN: http://az25311.vo.msecnd.net/GetNumber?number=2&page=1 http://az25311.vo.msecnd.net/GetNumber?page=1&number=2 The final step is to upload the project to Azure. Test the Hosted Service CDN After publishing the project on Azure, we can use the CDN in the website. The CDN endpoint we created is az25311.vo.msecnd.net, so all files under the "/cdn" folder can be requested with it. Let's try the sample.htm and c_great_wall.jpg static files. We can also request the dynamic page GetNumber, with its query string, through the CDN endpoint. And if we refresh this page it loads very quickly, since the content comes from the CDN without any MVC server-side processing. The style of this page is missing, though, because the CSS file was not included in the "/cdn" folder, so the page cannot retrieve it from the CDN URL.   Summary In this post I introduced the new CDN feature that arrived with Windows Azure SDK 1.4 and the new Developer Portal. With the CDN for a hosted service we can just put the static resources under a "/cdn" folder so that the CDN caches them automatically; there is no need to put them into blob storage. It also supports caching dynamic content with the Query String feature, so we can cache parts of a web page by combining user controls with the CDN; for example, we can cache the log-on user control in the master page so that the log-on part loads super-fast. There are some other new features in this release, which you can find here. And for more detailed information about the Windows Azure CDN please have a look here as well.   Hope this helps, Shaun All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

< Previous Page | 239 240 241 242 243 244 245 246 247 248 249 250  | Next Page >