Search Results

Search found 14448 results on 578 pages for 'schema org'.

Page 165/578 | < Previous Page | 161 162 163 164 165 166 167 168 169 170 171 172  | Next Page >

  • Auditing database source code changes

    - by John Paul Cook
    Auditing changes to database source code can be easily implemented with a database trigger. Here’s a simple implementation of stored procedure auditing using an audit table and a database trigger. It assumes that a schema named Audit already exists.

        CREATE TABLE Audit.AuditStoredProcedures (
            DatabaseName sysname,
            ObjectName sysname,
            LoginName sysname,
            ChangeDate datetime,
            EventType sysname,
            EventDataXml xml
        );

    Notice the EventDataXml column. Using an nvarchar column to store the source text...(read more)
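
    The trigger itself falls outside this excerpt. Purely as a sketch of the kind of DDL trigger the post describes – the trigger name is hypothetical, and the XQuery paths assume the standard EVENTDATA() layout – it could look something like this:

        -- Sketch: capture CREATE/ALTER/DROP PROCEDURE events into the audit table
        CREATE TRIGGER AuditStoredProceduresTrigger
        ON DATABASE
        FOR CREATE_PROCEDURE, ALTER_PROCEDURE, DROP_PROCEDURE
        AS
        BEGIN
            DECLARE @event xml = EVENTDATA();  -- DDL event details as XML
            INSERT INTO Audit.AuditStoredProcedures
                (DatabaseName, ObjectName, LoginName, ChangeDate, EventType, EventDataXml)
            VALUES
                (@event.value('(/EVENT_INSTANCE/DatabaseName)[1]', 'nvarchar(128)'),
                 @event.value('(/EVENT_INSTANCE/ObjectName)[1]',   'nvarchar(128)'),
                 @event.value('(/EVENT_INSTANCE/LoginName)[1]',    'nvarchar(128)'),
                 GETDATE(),
                 @event.value('(/EVENT_INSTANCE/EventType)[1]',    'nvarchar(128)'),
                 @event);
        END;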

    Read the article

  • Oracle GoldenGate: Knowledge Document Series Post #1

    - by Doug Reid
    Over the course of the next several weeks, the Oracle GoldenGate team will be posting highlights from the knowledge base on My Oracle Support. The intent is to bring greater awareness to our user community of some fantastic best practices documents that are available to anyone with a valid CSI. These documents were originally created for our internal teams, but the content was so good and useful that we have made them available externally for our user community. The first in our series is the Oracle GoldenGate database Schema Profile check script (Doc ID 1296168.1). “This script is intended to query the database by schema to identify current configuration and identify any unsupported data types or types that may need special considerations for Oracle GoldenGate in an Oracle environment. This is the Oracle database profile script. Added check for deferred constraints. Deferred constraints may cause ADD TRANDATA to select the wrong column for logging. Use KEYCOLS for tables with deferred constraints.” Click here to view the document. Check back weekly for additional new postings.

    Read the article

  • Hibernate error: cannot resolve table

    - by Roman
    I'm trying to make the example from the Hibernate reference work. I've got a simple table Pupil with id, name and age fields. I've created a correct (as I think) java class for it according to all the java-beans rules. I've created the configuration file - hibernate.cfg.xml - just like in the example from the reference. I've created a hibernate mapping for the one class, Pupil, and here is where the error occurred:

        <hibernate-mapping>
            <class name="Pupil" table="pupils">
            ...
            </class>
        </hibernate-mapping>

    table="pupils" is red in my IDE and I see the message "cannot resolve table pupils". I've also found a very strange note in the reference which says that most users fail with the same problem when trying to run the example. Ah... I'm very angry with this example. IMHO, if the authors know that there is such a problem, they should add some information about it. But how should I fix it? I don't want to deal with Ant here or with the other instruments used in the example. I'm using MySql 5.0, but I think it doesn't matter.

    UPDATE: source code

    Pupil.java - my persistent class:

        package domain;

        public class Pupil {
            private Integer id;
            private String name;
            private Integer age;

            protected Pupil () {
            }

            public Pupil (String name, int age) {
                this.age = age;
                this.name = name;
            }

            public Integer getId () { return id; }
            public void setId (Integer id) { this.id = id; }
            public String getName () { return name; }
            public void setName (String name) { this.name = name; }
            public Integer getAge () { return age; }
            public void setAge (Integer age) { this.age = age; }

            public String toString () {
                return "Pupil [ name = " + name + ", age = " + age + " ]";
            }
        }

    Pupil.hbm.xml is the mapping for this class:

        <?xml version="1.0"?>
        <!DOCTYPE hibernate-mapping PUBLIC
            "-//Hibernate/Hibernate Mapping DTD 3.0//EN"
            "http://hibernate.sourceforge.net/hibernate-mapping-3.0.dtd">
        <hibernate-mapping package="domain">
            <class name="Pupil" table="pupils">
                <id name="id">
                    <generator class="native" />
                </id>
                <property name="name" not-null="true"/>
                <property name="age"/>
            </class>
        </hibernate-mapping>

    hibernate.cfg.xml - configuration for hibernate:

        <hibernate-configuration>
            <session-factory>
                <!-- Database connection settings -->
                <property name="connection.driver_class">com.mysql.jdbc.Driver</property>
                <property name="connection.url">jdbc:mysql://localhost/hbm_test</property>
                <property name="connection.username">root</property>
                <property name="connection.password">root</property>
                <property name="connection.pool_size">1</property>
                <property name="dialect">org.hibernate.dialect.MySQL5Dialect</property>
                <property name="current_session_context_class">thread</property>
                <property name="show_sql">true</property>
                <mapping resource="domain/Pupil.hbm.xml"/>
            </session-factory>
        </hibernate-configuration>

    HibernateUtils.java:

        package utils;

        import org.hibernate.SessionFactory;
        import org.hibernate.HibernateException;
        import org.hibernate.cfg.Configuration;

        public class HibernateUtils {
            private static final SessionFactory sessionFactory;

            static {
                try {
                    sessionFactory = new Configuration ().configure ().buildSessionFactory ();
                } catch (HibernateException he) {
                    System.err.println (he);
                    throw new ExceptionInInitializerError (he);
                }
            }

            public static SessionFactory getSessionFactory () {
                return sessionFactory;
            }
        }

    Runner.java - class for testing hibernate:

        import org.hibernate.Session;
        import java.util.*;
        import utils.HibernateUtils;
        import domain.Pupil;

        public class Runner {
            public static void main (String[] args) {
                Session s = HibernateUtils.getSessionFactory ().getCurrentSession ();
                s.beginTransaction ();
                List pups = s.createQuery ("from Pupil").list ();
                for (Object obj : pups) {
                    System.out.println (obj);
                }
                s.getTransaction ().commit ();
                HibernateUtils.getSessionFactory ().close ();
            }
        }

    My libs: antlr-2.7.6.jar, asm.jar, asm-attrs.jar, cglib-2.1.3.jar, commons-collections-2.1.1.jar, commons-logging-1.0.4.jar, dom4j-1.6.1.jar, hibernate3.jar, jta.jar, log4j-1.2.11.jar, mysql-connector-java-5.1.7-bin.jar

    Compile error: cannot resolve table pupils

    Read the article

  • Northwind now available on SQL Azure

    - by jamiet
    Two weeks ago I made available a copy of [AdventureWorks2012] on SQL Azure and published credentials so that anyone from the SQL community could connect up and experience SQL Azure, probably for the first time. One of the (somewhat) popular requests thereafter was to make the venerable Northwind database available too, so I am pleased to say that as of right now, Northwind is up there too. You will notice immediately that all of the Northwind tables (and the stored procedures and views too) have been moved into a schema called [Northwind] – this was so that they could be easily differentiated from the existing [AdventureWorks2012] objects. I used an SQL Server Data Tools (SSDT) project to publish the schema and data up to this SQL Azure database; if you are at all interested in poking around that SSDT project then I have made it available on Codeplex for your convenience under the MS-PL license – go and get it from https://northwindssdt.codeplex.com/.

    Using SSDT proved particularly useful as it alerted me to some aspects of Northwind that were not compatible with SQL Azure, namely that five of the tables did not have clustered indexes. The beauty of using SSDT is that I am alerted to these issues before I even attempt a connection to SQL Azure. Pretty cool, no? Fixing this situation was of course very easy; I simply changed the following primary keys from being nonclustered to clustered:

    [PK_Region]
    [PK_CustomerDemographics]
    [PK_EmployeeTerritories]
    [PK_Territories]
    [PK_CustomerCustomerDemo]

    If you want to connect up then here are the credentials that you will need:

    Server: mhknbn2kdz.database.windows.net
    Database: AdventureWorks2012
    User: sqlfamily
    Password: sqlf@m1ly

    You will need SQL Server Management Studio (SSMS) 2008R2 installed in order to connect, or alternatively simply use this handy website: https://mhknbn2kdz.database.windows.net which provides a web interface to a SQL Azure server.

    Do remember that hosting this database is not free, so if you find that you are making use of it please help to keep it available by visiting PayPal and donating any amount at all to [email protected]. To make this easy you can simply hit this link and the details will be completed for you – all you have to do is log in and hit the “Send” button. If you are already a PayPal member then it should take you all of about 20 seconds!

    I hope this is useful to some of you folks out there. Don’t forget that we also have more data up there than in the conventional [AdventureWorks2012]; read more at Big AdventureWorks2012.

    @Jamiet

    AdventureWorks on Azure - Provided by the SQL Server community, for the SQL Server community!
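
    For reference, changing a nonclustered primary key to clustered is a drop-and-recreate of the constraint. A minimal sketch for one of the constraints listed above, with the key column assumed from the standard Northwind schema:

        -- Sketch: rebuild PK_Region as a clustered primary key
        -- (RegionID as the key column is an assumption from standard Northwind)
        ALTER TABLE [Northwind].[Region] DROP CONSTRAINT [PK_Region];
        ALTER TABLE [Northwind].[Region]
            ADD CONSTRAINT [PK_Region] PRIMARY KEY CLUSTERED ([RegionID]);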

    Read the article

  • SQL Server v.Next (Denali) : Metadata enhancements

    - by AaronBertrand
    In my previous job, we had several cases where schema changes or incorrect developer assumptions in the middle tier or application logic would lead to type mismatches. We would have a stored procedure that returns a BIT column, but then change the procedure to have something like CASE WHEN <condition> THEN 1 ELSE 0 END. In this case SQL Server would return an INT as a catch-all, and if .NET was expecting a boolean, BOOM. Wouldn't it be nice if the application could check the result set of the...(read more)
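
    SQL Server 2012 shipped sp_describe_first_result_set and a corresponding DMF for exactly this scenario. A minimal sketch of checking a result set’s shape up front – the procedure name here is hypothetical:

        -- Sketch: inspect the column names and types a batch or procedure will return
        SELECT name, system_type_name, is_nullable
        FROM sys.dm_exec_describe_first_result_set(N'EXEC dbo.MyProc', NULL, 0);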

    Read the article

  • Execution plan warnings–The final chapter

    - by Dave Ballantyne
    In my previous posts (here and here), I showed examples of some of the execution plan warnings that have been added to SQL Server 2012. There is one other warning that is of interest to me: “Unmatched Indexes”. Firstly, how do I know this is the final one? The plan is an XML document, right? So that means that it can have an accompanying XSD. As an XSD is a schema definition, we can poke around inside it to find interesting things that *could* be in the final XML file. The showplan schema is stored in the folder Microsoft SQL Server\110\Tools\Binn\schemas\sqlserver\2004\07\showplan and by comparing schemas over releases you can get a really good idea of any new functionality that has been added. Here is the section of the SQL Server 2012 showplan schema that has interested me so far:

        <xsd:complexType name="AffectingConvertWarningType">
          <xsd:annotation>
            <xsd:documentation>Warning information for plan-affecting type conversion</xsd:documentation>
          </xsd:annotation>
          <xsd:sequence>
            <!-- Additional information may go here when available -->
          </xsd:sequence>
          <xsd:attribute name="ConvertIssue" use="required">
            <xsd:simpleType>
              <xsd:restriction base="xsd:string">
                <xsd:enumeration value="Cardinality Estimate" />
                <xsd:enumeration value="Seek Plan" />
                <!-- to be extended here -->
              </xsd:restriction>
            </xsd:simpleType>
          </xsd:attribute>
          <xsd:attribute name="Expression" type="xsd:string" use="required" />
        </xsd:complexType>

        <xsd:complexType name="WarningsType">
          <xsd:annotation>
            <xsd:documentation>List of all possible iterator or query specific warnings (e.g. hash spilling, no join predicate)</xsd:documentation>
          </xsd:annotation>
          <xsd:choice minOccurs="1" maxOccurs="unbounded">
            <xsd:element name="ColumnsWithNoStatistics" type="shp:ColumnReferenceListType" minOccurs="0" maxOccurs="1" />
            <xsd:element name="SpillToTempDb" type="shp:SpillToTempDbType" minOccurs="0" maxOccurs="unbounded" />
            <xsd:element name="Wait" type="shp:WaitWarningType" minOccurs="0" maxOccurs="unbounded" />
            <xsd:element name="PlanAffectingConvert" type="shp:AffectingConvertWarningType" minOccurs="0" maxOccurs="unbounded" />
          </xsd:choice>
          <xsd:attribute name="NoJoinPredicate" type="xsd:boolean" use="optional" />
          <xsd:attribute name="SpatialGuess" type="xsd:boolean" use="optional" />
          <xsd:attribute name="UnmatchedIndexes" type="xsd:boolean" use="optional" />
          <xsd:attribute name="FullUpdateForOnlineIndexBuild" type="xsd:boolean" use="optional" />
        </xsd:complexType>

    I especially like the “to be extended here” comment; high hopes that we will see more of these in the future.

    So “Unmatched Indexes” was a warning that I couldn’t get, and many thanks must go to Fabiano Amorim (b|t) for showing me the way.

    Filtered indexes were introduced in SQL Server 2008 and are really useful if you only need to index a portion of the data within a table. However, if your SQL code uses a variable as a predicate on the filtered data that matches the filtered condition, then the filtered index cannot be used as, naturally, the value in the variable may (and probably will) change, and therefore the query will need to read data outside the index. As an aside, you could use OPTION (RECOMPILE) here, in which case the optimizer will build a plan specific to the variable values and use the filtered index, but that can bring about other problems.
    To demonstrate this warning, we need to generate some test data:

        DROP TABLE #TestTab1
        GO
        CREATE TABLE #TestTab1 (
            Col1 Int not null,
            Col2 Char(7500) not null,
            Quantity Int not null
        )
        GO
        INSERT INTO #TestTab1 VALUES
            (1,1,1),(1,2,5),(1,2,10),(1,3,20),
            (2,1,101),(2,2,105),(2,2,110),(2,3,120)
        GO

    and then add a filtered index:

        CREATE INDEX ixFilter ON #TestTab1 (Col1)
        WHERE Quantity = 122

    Now if we execute

        SELECT COUNT(*) FROM #TestTab1 WHERE Quantity = 122

    we will see the filtered index being scanned. But if we parameterize the query

        DECLARE @i INT = 122
        SELECT COUNT(*) FROM #TestTab1 WHERE Quantity = @i

    the plan is very different – a table scan – as the value of the variable used in the predicate can change at run time, and we also see the familiar warning triangle. If we now look at the properties pane, we will see two pieces of information, “Warnings” and “UnmatchedIndexes”. So, handily, we are being told which filtered index is not being used due to parameterization.
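
    For completeness, the OPTION (RECOMPILE) workaround mentioned above lets the optimizer embed the runtime value of the variable into the plan, so the filtered index qualifies again:

        DECLARE @i INT = 122
        -- With RECOMPILE the plan is compiled for @i = 122,
        -- so the filtered index ixFilter can be matched again
        SELECT COUNT(*) FROM #TestTab1 WHERE Quantity = @i
        OPTION (RECOMPILE)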

    Read the article

  • Breadcrumb dilemma - SEO impact

    - by HaCos
    I am updating the breadcrumb module of an e-commerce website, implementing microdata (schema.org). My dilemma is about showing the last page: (a) show the product name in the breadcrumb or not? (b) should it be an active link to the current page or not? E.g.: http://www.google.com/webmasters/tools/richsnippets?url=http%3A%2F%2Fwww.urbanspoon.com%2Fr%2F23%2F1600592%2Frestaurant%2FPoint-Breeze%2FAlma-Pan-Latin-Kitchen-Pittsburgh – the urbanspoon example doesn't link the last page, but is this right?

    Read the article

  • Rounding functions in DAX

    - by Marco Russo (SQLBI)
    Today I prepared a table of the many rounding functions available in DAX (yes, it’s part of the book we’re writing), so that I have a complete schema of the best function to use, depending on the rounding operation I need to do. Here is the list of functions used and then the results shown for a relevant set of values. FLOOR = FLOOR( Tests[Value], 0.01 ) TRUNC = TRUNC( Tests[Value], 2 ) ROUNDDOWN = ROUNDDOWN( Tests[Value], 2 ) MROUND = MROUND( Tests[Value], 0.01 ) ROUND = ROUND( Tests[Value], 2 )...(read more)

    Read the article

  • Function Folding in #PowerQuery

    - by Darren Gosbell
    Originally posted on: http://geekswithblogs.net/darrengosbell/archive/2014/05/16/function-folding-in-powerquery.aspx

    Looking at a typical Power Query query you will notice that it's made up of a number of small steps. As an example take a look at the query I did in my previous post about joining a fact table to a slowly changing dimension. It was roughly built up of the following steps:

    1. Get all records from the fact table
    2. Get all records from the dimension table
    3. Do an outer join between these two tables on the business key (resulting in an increase in the row count as there are multiple records in the dimension table for each business key)
    4. Filter out the excess rows introduced in step 3
    5. Remove extra columns that are not required in the final result set.

    If Power Query was to execute a query like this literally, following the same steps in the same order, it would not be overly efficient – particularly if your two source tables were quite large. However Power Query has a feature called function folding where it can take a number of these small steps and push them down to the data source. The degree of function folding that can be performed depends on the data source. As you might expect, relational data sources like SQL Server, Oracle and Teradata support folding, but so do some of the other sources like OData, Exchange and Active Directory.

    To explore how this works I took the data from my previous post and loaded it into a SQL database. Then I converted my Power Query expression to source its data from that database. Below is the resulting Power Query, which I edited by hand so that the whole thing can be shown in a single expression:

        let
            SqlSource = Sql.Database("localhost", "PowerQueryTest"),
            BU = SqlSource{[Schema="dbo",Item="BU"]}[Data],
            Fact = SqlSource{[Schema="dbo",Item="fact"]}[Data],
            Source = Table.NestedJoin(Fact,{"BU_Code"},BU,{"BU_Code"},"NewColumn"),
            LeftJoin = Table.ExpandTableColumn(Source, "NewColumn",
                           {"BU_Key", "StartDate", "EndDate"},
                           {"BU_Key", "StartDate", "EndDate"}),
            BetweenFilter = Table.SelectRows(LeftJoin, each (([Date] >= [StartDate]) and ([Date] <= [EndDate]))),
            RemovedColumns = Table.RemoveColumns(BetweenFilter,{"StartDate", "EndDate"})
        in
            RemovedColumns

    If the above query was run step by step in a literal fashion you would expect it to run two queries against the SQL database doing "SELECT * …" from both tables. However a profiler trace shows just the following single SQL query:

        select [_].[BU_Code],
            [_].[Date],
            [_].[Amount],
            [_].[BU_Key]
        from (
            select [$Outer].[BU_Code],
                [$Outer].[Date],
                [$Outer].[Amount],
                [$Inner].[BU_Key],
                [$Inner].[StartDate],
                [$Inner].[EndDate]
            from [dbo].[fact] as [$Outer]
            left outer join
            (
                select [_].[BU_Key] as [BU_Key],
                    [_].[BU_Code] as [BU_Code2],
                    [_].[BU_Name] as [BU_Name],
                    [_].[StartDate] as [StartDate],
                    [_].[EndDate] as [EndDate]
                from [dbo].[BU] as [_]
            ) as [$Inner] on ([$Outer].[BU_Code] = [$Inner].[BU_Code2]
                or [$Outer].[BU_Code] is null and [$Inner].[BU_Code2] is null)
        ) as [_]
        where [_].[Date] >= [_].[StartDate] and [_].[Date] <= [_].[EndDate]

    The resulting query is a little strange; you can probably tell that it was generated programmatically. But if you look closely you'll notice that every single part of the Power Query formula has been pushed down to SQL Server.
    Power Query itself ends up just constructing the query and passing the results back to Excel; it does not do any of the data transformation steps itself. So now you can feel a bit more comfortable showing Power Query to your less technical colleagues, knowing that the tool will do its best to fold all the small steps in Power Query down to the most efficient query that it can run against the source systems.

    Read the article

  • Effectiveness and Efficiency

    - by Daniel Moth
    In the professional environment, i.e. at work, I am always seeking personal growth and to be challenged. The result is that my assignments, my work list, my tasks, my goals, my commitments, my [insert whatever word resonates with you] keep growing (in scope and desired impact). Which in turn means I have to keep finding new ways to deliver more value, while not falling into the trap of working more hours. To do that I continuously evaluate both my effectiveness and my efficiency. EFFECTIVENESS The first thing I check is my effectiveness: Am I doing the right things? Am I focusing too much on unimportant things? Am I spending more time doing stuff that is important to my team/org/division/business/company, or am I spending it on stuff that is important to me and that I enjoy doing? Am I valuing activities that maybe I have outgrown and should be delegated to others who are at a stage I have surpassed (in Microsoft speak: is the work I am doing level appropriate or am I still operating at the previous level)? Notice how the answers to those questions change over time and due to certain events, so I have to remind myself to revisit them frequently. Events that force me to re-examine them are: change of role, change of team/org/etc, change of direction of team/org/etc, re-org, new hires on the team that take on some of the work I did, personal promotion, change of manager... and if none of those events has occurred since the last annual review, I ask myself those at each annual review anyway. If you think you are not being effective at work, make a list of the stuff that you do and start tracking where your time goes. In parallel, have a discussion with your manager about where they think your time should go. Ultimately your time is finite and hence it is your most precious investment, don't waste it. If your management doesn't value as highly what you spend your time on, then either convince your management, or stop spending your time on it, or find different management: Lead, Follow, or get out of the way! That's my view on effectiveness. You have to fix that before moving to being efficient, or you may end up being very efficient at stuff that nobody wants you to be doing in the first place. For example, you may be spending your time writing blog posts and becoming better and faster at it all the time. If your manager thinks that is not even part of your job description, you are wasting your time to satisfy your inner desires. Nobody can help you with your effectiveness other than your management chain and your management peers - they are the judges of it. EFFICIENCY The second thing I check is my efficiency: Am I doing things right? For me, doing things right means that I deliver the same quality of work faster [than what I used to, and than my peers, and than expected of me]. The result is that I can achieve more [than what I used to, and than my peers, and than expected of me]. Notice how the efficiency goal is a more portable one. If, by whatever criteria, you think you are the best at [insert your own skill here], this can change at two events: because you have new colleagues (who are potentially better than your older ones), and it can change with a change of manager (who has potentially higher expectations). That's about it. Once you are efficient at something, you carry that with you... 
All you need to really be doing here is, when taking on new kinds of work that you haven't done before, try a few approaches and devise a system so that you can become efficient at this new activity too... Just keep "collecting" stuff that you are efficient at. If you think you are not being efficient at something, break it down: What are the steps you take to complete that task? How long do you spend on each step? Talk to others about what steps they take, to see if you can optimize some steps away or trade them for better steps, or just learn how to complete a step faster. Have a system for every task you take so that you can have repeatable success. That's my view on efficiency. You have to fix it so that you can free up time to do more. When you plan a route from A to B - all else being equal - you try to get there as fast as possible so why would you not want to do that with your everyday work? For example, imagine you are inefficient at processing email: You spend more time than necessary dealing with email, and you still end up with dropped email threads and with slower response times than others. How can you improve? Talk to someone that you think is good at this, understand their system (e.g. here is my email processing system) and come up with one that works for you. Parting Thoughts Are you considered, by your colleagues and manager, an effective and efficient person at your workplace? If you are, what would you change if you were asked by your management to do the job of two people? Seriously, think about that! Your immediate reaction may be "that is not possible", but it actually is. You just have to re-assess what things that were previously important will now stop being important, by discussing them with your management and reaching agreement on relative priorities. For example, stuff that was previously on your plate may now have to be delegated or dropped. Where you thought you were efficient, maybe now you have to find an even faster path to completion, perhaps keeping in mind that Perfect is the Enemy of “Good Enough”. My personal experience (from both observing others and from my own reflection) is that when folks are struggling to keep up at work it is because of two reasons: They are investing energy in stuff that they enjoy doing which the business regards as having a lower priority than a lot of other things on their plate. They are completing tasks to a level of higher quality than what is required (due to personal pride) missing the big picture which almost always mandates completing three tasks at good enough quality than knocking only one of them out of the park while the other two come in late or not at all. There is a lot of content on the web, so I strongly encourage you to use your favorite search engine to read other views on effectiveness and efficiency (Bing, Google). Comments about this post by Daniel Moth welcome at the original blog.

    Read the article

  • Why is ????? displayed instead of non-English characters?

    - by smhnaji
    I first created a simple HTML page that uses UTF-8 as its character encoding. Then I moved the HTML content into a view in CodeIgniter and it was still OK (I had used non-English characters and they were OK as always). I added a simple piece of dynamic functionality (there is a contact-us form in it that emails user feedback to the site admin). Please note that the characters were OK at localhost (which is a LAMP server running on Ubuntu 12.04 LTS). The strange thing is that when I uploaded the app to the server, all Persian characters are shown as ???? (for example ??? (which means Name) is shown as ??? and so on...). I have not even connected to MySQL or any other DBMS. It's the only page in the website (it's more an under-construction page) and nothing else has been used in it. Maybe I should state that I have also used the session library to thank the user after his feedback was sent to the admins, nothing else. I really have no idea about the problem.

    UPDATE: Now I can see that the problem occurs only with cPanel. On DirectAdmin I can see that everything is normal. Both Chromium and Firefox DO use UTF-8 as the page's character encoding. The URL is http://WEBSITE.COM/dmf/dynamic/ (dmf is the abbreviation of the project name!). There is nothing non-English in the URL. The page's code is as follows:

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml">
        <head>
            <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
            <title>??? ???????</title>
            <link rel="stylesheet" type="text/css" href="<?php echo base_url('template/css/style.css'); ?>" />
            <!-- 1. jquery library -->
            <script src="http://ajax.googleapis.com/ajax/libs/jquery/1/jquery.min.js"></script>
            <!-- 2. flowplayer -->
            <script src="http://releases.flowplayer.org/5.1.1/flowplayer.min.js"></script>
            <!-- 3. skin -->
            <link rel="stylesheet" type="text/css" href="http://releases.flowplayer.org/5.1.1/skin/minimalist.css" />
        </head>
        <body>
            <div id="wrapper">
                <header>
                    <h1>??? ???????</h1>
                </header>
                <section id="box-container">
                    <?php
                    echo form_open('contact', "id='contact-us'");
                    echo form_fieldset('???? ?? ??');
                    if ($this->session->userdata('mailsent')) {
                        echo '<div>??????? ???? ??? ????? ??</div>';
                        $this->session->sess_destroy();
                    }
                    echo '<input tabindex="1" id="name-in" value="???" type="text" name="name"/>
                          <input tabindex="2" id="mail-in" value="?????" type="email" name="email"/>
                          <textarea tabindex="3" id="content-in" name="message">???????</textarea>
                          <input tabindex="4" id="submit" type="submit" value="?????" />';
                    echo '<div class="clear"></div>';
                    echo form_fieldset_close();
                    echo form_close();
                    ?>
                    <div id="sms-comp">
                        <h2>?????? ??????</h2>
                        <p>
                            <span id="comp-title">?? ??? ????</span>
                            ???? ??????? ???? ???
                        </p>
                    </div>
                    <div id="last-program">
                        <h2>?????? ????? ??????</h2>
                        <div class="flowplayer">
                            <video id="my_video_1" width="212" height="126" poster="<?php echo base_url('template/images/img.jpg'); ?>" controls="controls" src="http://archive.org/download/Pbtestfilemp4videotestmp4/video_test.ogv" type='video/mp4'>
                            </video>
                        </div>
                    </div>
                    <div class="clear"></div>
                </section>
            </div>
            <footer>
                ????? ? ????? : <a href="http://powered-by.com/" target="_blank">????? ???</a>
            </footer>
        </body>
        </html>

    Read the article

  • XML over HTTP with JMS and Spring

    - by Will Sumekar
    I have a legacy HTTP server where I need to send an XML file over an HTTP request (POST) using Java (not a browser) and the server will respond with another XML in its HTTP response. It is similar to a Web Service but there's no WSDL and I have to follow the existing XML structure to construct the XML to be sent. I have done some research and found an example that matches my requirement here. The example uses HttpClient from Apache Commons. (There are also other examples I found, but they use the java.net networking package (like URLConnection), which is tedious, so I don't want to use them.) But it's also my requirement to use Spring and JMS. I know from Spring's reference that it's possible to combine HttpClient, JMS and Spring. My question is, how? Note that it's NOT in my requirement to use HttpClient. If you have a better suggestion, I'm open to it. Appreciate it. For your reference, here's the XML-over-HTTP example I've been talking about:

        /*
         * $Header:
         * $Revision$
         * $Date$
         * ====================================================================
         *
         * Copyright 2002-2004 The Apache Software Foundation
         *
         * Licensed under the Apache License, Version 2.0 (the "License");
         * you may not use this file except in compliance with the License.
         * You may obtain a copy of the License at
         *
         * http://www.apache.org/licenses/LICENSE-2.0
         *
         * Unless required by applicable law or agreed to in writing, software
         * distributed under the License is distributed on an "AS IS" BASIS,
         * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
         * See the License for the specific language governing permissions and
         * limitations under the License.
         * ====================================================================
         *
         * This software consists of voluntary contributions made by many
         * individuals on behalf of the Apache Software Foundation. For more
         * information on the Apache Software Foundation, please see
         * <http://www.apache.org/>.
         *
         * [Additional notices, if required by prior licensing conditions]
         */

        import java.io.File;
        import java.io.FileInputStream;

        import org.apache.commons.httpclient.HttpClient;
        import org.apache.commons.httpclient.methods.InputStreamRequestEntity;
        import org.apache.commons.httpclient.methods.PostMethod;

        /**
         * This is a sample application that demonstrates
         * how to use the Jakarta HttpClient API.
         *
         * This application sends an XML document
         * to a remote web server using HTTP POST
         *
         * @author Sean C. Sullivan
         * @author Ortwin Glück
         * @author Oleg Kalnichevski
         */
        public class PostXML {

            /**
             * Usage:
             *   java PostXML http://mywebserver:80/ c:\foo.xml
             *
             * @param args command line arguments
             *   Argument 0 is a URL to a web server
             *   Argument 1 is a local filename
             */
            public static void main(String[] args) throws Exception {
                if (args.length != 2) {
                    System.out.println(
                        "Usage: java -classpath <classpath> [-Dorg.apache.commons." +
                        "logging.simplelog.defaultlog=<loglevel>]" +
                        " PostXML <url> <filename>]");
                    System.out.println("<classpath> - must contain the " +
                        "commons-httpclient.jar and commons-logging.jar");
                    System.out.println("<loglevel> - one of error, " +
                        "warn, info, debug, trace");
                    System.out.println("<url> - the URL to post the file to");
                    System.out.println("<filename> - file to post to the URL");
                    System.out.println();
                    System.exit(1);
                }
                // Get target URL
                String strURL = args[0];
                // Get file to be posted
                String strXMLFilename = args[1];
                File input = new File(strXMLFilename);
                // Prepare HTTP post
                PostMethod post = new PostMethod(strURL);
                // Request content will be retrieved directly
                // from the input stream.
                // Per default, the request content needs to be buffered
                // in order to determine its length.
                // Request body buffering can be avoided when
                // content length is explicitly specified
                post.setRequestEntity(new InputStreamRequestEntity(
                    new FileInputStream(input), input.length()));
                // Specify content type and encoding
                // If content encoding is not explicitly specified
                // ISO-8859-1 is assumed
                post.setRequestHeader(
                    "Content-type", "text/xml; charset=ISO-8859-1");
                // Get HTTP client
                HttpClient httpclient = new HttpClient();
                // Execute request
                try {
                    int result = httpclient.executeMethod(post);
                    // Display status code
                    System.out.println("Response status code: " + result);
                    // Display response
                    System.out.println("Response body: ");
                    System.out.println(post.getResponseBodyAsString());
                } finally {
                    // Release current connection to the connection pool
                    // once you are done
                    post.releaseConnection();
                }
            }
        }

    Read the article

  • Hadoop, NOSQL, and the Relational Model

    - by Phil Factor
    (Guest Editorial for the IT Pro/SysAdmin Newsletter)

    Whereas Relational Databases fit the world of commerce like a glove, it is useless to pretend that they are a perfect fit for all human endeavours. Although, with SQL Server, we’ve made great strides with indexing text, in processing spatial data and processing markup, there is still a problem in dealing efficiently with large volumes of ephemeral semi-structured data. Key-value stores such as Cassandra, Project Voldemort, and Riak are of great value for ephemeral data, and seem of equal value as a data-feed that provides aggregations to an RDBMS. However, the Document databases such as MongoDB and CouchDB are ideal for semi-structured data for which no fixed schema exists; analytics and logging are obvious examples. NoSQL products, such as MongoDB, tackle the semi-structured data problem with panache. MongoDB is designed with a simple document-oriented data model that scales horizontally across multiple servers. It doesn’t impose a schema, and relies on the application to enforce the data structure. This is another take on the old ‘EAV’ problem (where you don’t know in advance all the attributes of a particular entity). It uses a clever replica set design that allows automatic failover, and uses journaling for data durability. It allows indexing and ad-hoc querying.

    However, for SQL Server users, the obvious choice for handling semi-structured data is Apache Hadoop. There will soon be an ODBC Driver for Apache Hive and an Add-in for Excel. Additionally, there are now two Hadoop-based connectors for SQL Server: the Apache Hadoop connector for SQL Server 2008 R2, and the SQL Server Parallel Data Warehouse (PDW) connector. We can connect to Hadoop, process the semi-structured data and then store it in SQL Server.

    For one steeped in the culture of Relational SQL Databases, I might be expected to throw up my hands in the air in a gesture of contempt for a technology that was, judging by the overblown journalism on the subject, about to make my own profession as archaic as the saggar maker’s bottom knocker (a potter’s assistant who helped the saggar maker to make the bottom of the saggar by placing clay in a metal hoop and bashing it). However, on the contrary, I find that I'm delighted with the advances made by the NoSQL databases in the past few years. Having the flow of ideas from the NoSQL providers will knock any trace of complacency out of the providers of Relational Databases and inspire them into back-fitting some features, such as horizontal scaling with sharding and automatic failover, into SQL-based RDBMSs. It will do the breed a power of good to benefit from all this lateral thinking.

    Read the article

  • Juju bootstrap fails for Windows Azure

    - by tomconte
    I am trying to play with Juju on Windows Azure. I did the configuration part; however, the bootstrap fails with a network configuration error:

        2013-10-29 10:08:49 ERROR juju supercommand.go:282 PUT request failed: BadRequest - XML Schema validation error in network configuration at line 75,18. (http code 400: Bad Request)

    How can I troubleshoot this? Can I find the XML file somewhere so that I can examine it? Thanks! Thomas.

    Read the article

  • JDK bug migration milestone: JIRA now the system of record

    - by darcy
    I'm pleased to announce the OpenJDK bug database migration project has reached a significant milestone: the JDK has switched from the legacy Sun "bugtraq" system to a new internal JIRA instance as the system of record for our bug tracking. This completes the initial phase of the previously described plan of getting OpenJDK onto an externally visible and writable bug tracker. The identities contained in the current system include recognized OpenJDK contributors.

    The bug migration effort to date has been sizable in multiple dimensions. There are around 140,000 distinct issues imported into the JDK project of the JIRA instance, nearly 165,000 if backport issues to track multiple-release information are included. Separately, the Code Tools OpenJDK project has its own JIRA project populated with several thousand existing bugs. Once the OpenJDK JIRA instance is externalized, approved OpenJDK projects will be able to request the creation of a JIRA project for issue tracking.

    There are many differences in the schema used to model bugs between the legacy bug system and the schema for the new JIRA projects. We've favored simplifications to the existing system where possible and, after much discussion, we've settled on five main states for the OpenJDK JIRA projects: New, Open, In progress, Resolved, Closed. The Open and In-progress states can have a substate Understanding field set to track whether the issue has its "Cause Known" or "Fix understood". In the closed state, a Verification field can indicate whether a fix has been verified, unverified, or if the fix has failed.

    At the moment, there will be very little externally visible difference between JIRA for OpenJDK and the legacy system it replaces. One difference is that bug numbers for newly filed issues in the JIRA JDK project will be 8000000 and above. If you are working with JDK Hg repositories, update any local copies of jcheck to the latest version which recognizes this expanded bug range. (The bug numbers of existing issues have been preserved on the import into JIRA.) Relatively soon, we plan for the pages published on bugs.sun.com to be generated from information in JIRA rather than in the legacy system. When this occurs, there will be some differences in the page display and the terminology used will be revised to reflect JIRA usage, such as referring to the "component/subcomponent" of an issue rather than its "category". The exact timing of this transition will be announced when it is known. We don't currently have a firm timeline for externalization of the JIRA system. Updates will be provided as they become available. However, that is unlikely to happen before JavaOne next week!

    Read the article

  • Getting started with tuning your SOA/BPM database using AWR

    - by Mark Nelson
    In order to continue to get good performance from your SOA or BPM 11g server, you will want to periodically check your database – the one you are storing your SOAINFRA schema in – to see if there are any performance issues there. This article provides a very brief introduction to the use of the Automatic Workload Repository (AWR) in the Oracle Database and what to look for in the reports for your SOA/BPM environment. READ MORE >>
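
    The article walks through the details; as a generic starting point (standard Oracle-supplied tooling, not specific to the article, and it requires the Diagnostics Pack license), you can take snapshots around a workload and then generate a report between them:

        -- Sketch: take a manual AWR snapshot before and after your SOA/BPM workload
        EXEC DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT();

        -- then generate the report interactively from SQL*Plus,
        -- picking the two snapshot IDs when prompted:
        -- @?/rdbms/admin/awrrpt.sql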

    Read the article

  • Re-generating SQL Server Logins

    SQL Server stores all login information in security catalog system tables. By querying the system tables, SQL statements can be re-generated to recover logins, including password, default schema/database, server/database role assignments, and object-level permissions. A comprehensive permission report can also be produced by combining information from the system metadata.
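
    A minimal sketch of the idea for SQL logins only – the full technique also covers Windows logins, role memberships and permissions, and the binary-to-string CONVERT style used here requires SQL Server 2008 or later:

        -- Sketch: generate CREATE LOGIN statements preserving password hash and SID
        SELECT 'CREATE LOGIN ' + QUOTENAME(name)
             + ' WITH PASSWORD = ' + CONVERT(varchar(512), password_hash, 1) + ' HASHED'
             + ', SID = ' + CONVERT(varchar(100), sid, 1)
             + ', DEFAULT_DATABASE = ' + QUOTENAME(default_database_name) + ';'
        FROM sys.sql_logins
        WHERE name NOT LIKE '##%';  -- skip internal certificate-based logins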

    Read the article

  • What strategy to use when starting in a new project with no documentations?

    - by Amir Rezaei
    What is the best way to proceed when there is no documentation? For example, how do you learn the business rules? I have taken the following steps: Since we are using an ORM tool, I have printed a copy of the database schema where I can see the relations between objects. I have made a list of short names/table names that I will get explained. The project is a client/server enterprise application using the MVVM pattern.
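
    For the first step – mapping relations between objects when nothing is documented – the database catalog itself is often the most reliable source. A sketch against SQL Server's catalog views (the question doesn't name the RDBMS, so adapt to yours):

        -- Sketch: list parent/child table pairs from declared foreign keys
        SELECT fk.name AS constraint_name,
               OBJECT_SCHEMA_NAME(fk.parent_object_id) + '.'
                   + OBJECT_NAME(fk.parent_object_id)     AS child_table,
               OBJECT_SCHEMA_NAME(fk.referenced_object_id) + '.'
                   + OBJECT_NAME(fk.referenced_object_id) AS parent_table
        FROM sys.foreign_keys AS fk
        ORDER BY parent_table, child_table;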

    Read the article

  • Google I/O 2012 - Knowledge-Based Application Design Patterns

    Google I/O 2012 - Knowledge-Based Application Design Patterns, presented by Shawn Simister. In this talk we'll look at emerging design patterns for building web applications that take advantage of large-scale, structured data. We'll look at open datasets like Wikipedia and Freebase as well as structured markup like Schema.org and RDFa to see what new types of applications these technologies open up for developers. For all I/O 2012 sessions, go to developers.google.com

    Read the article

  • Oracle BPM and Open Data integration development

    - by drrwebber
    Rapidly developing Oracle BPM application solutions with data source integration previously required significant Java and JDeveloper skills. Now, using open source tools for open data development significantly reduces the coding needed. Key tasks can be performed with visual drag-and-drop design combined with menu-driven entry and automatic form generation directly from XSD schema definitions. The architecture used is extremely lightweight, portable, open and scalable, allowing integration with a variety of Oracle and non-Oracle data sources and systems. Two videos available on YouTube walk through the process, at both an introductory conceptual level and then a deep dive into the programming needed, using JDeveloper, Oracle BPM composer and Oracle WLS (WebLogic Server) along with the CAM editor and Open-XDX open source tools. Also available are coding samples and resources from the GitHub project page, along with working online demonstration resources on the VerifyXML site. Combining Oracle BPM with these open source tools provides a comprehensive, simple and elegant solution set. Development times are slashed and rapid prototyping is enabled. Existing data sources can also be integrated using open data formats with either XML or JSON, along with CRUD access via the Open-XDX Java component. The Open-XDX tool is a code-free approach where data mapping is configured as templates using visual drag and drop in the CAM Editor open source tool. XML or JSON is then automatically generated or processed (output or input) and appropriate SQL statements created to support the data access. Also included is the ability to integrate with fillable PDF forms via the XML templates and the Java PDF form-filling library. Again, minimal Java coding is needed to associate the XML source content with the PDF named fields. The Oracle BPM forms can be automatically generated from XSD schema definitions that are built from the data mapping templates. This dramatically simplifies development work as all the integration artifacts needed are created by the open source editor toolset. The developer-level video is designed as a tutorial with segments, hands-on demonstrations and reviews. This allows developers to learn the techniques and approaches used in incremental steps. The intended audience ranges from data analysts to developers and assumes only entry-level Java skills and knowledge. Most actions are menu driven while Java coding is limited to simply configuring values and parameters along with performing builds and deployments from JDeveloper and Oracle WLS. Additional existing Oracle online training resources can be referenced on Oracle BPM and WLS that cover other normal delivery aspects such as user management and application deployment.

    Read the article

  • Oracle Database I/O Performance Tuning Using Benchmark Factory

    The real test of how heavily an Oracle database will tax its underlying I/O subsystem and related infrastructure is to actually tax that infrastructure using representative database application workloads. Jim Czuprynski tells you how to choose appropriate database schema(s) for realistic testing, how to create example TPC-E and TPC-H database schemas and how to perform initial loading of these schemas using Quest Benchmark Factory.

    Read the article

  • Updated sp_indexinfo

    - by TiborKaraszi
    It was time to give sp_indexinfo some love. The procedure is meant to be the "ultimate" index information procedure, providing lots of information about all indexes in a database or all indexes for a certain table. Here is what I did in this update: Changed the second query that retrieves missing index information so it generates the index name (based on schema name, table name and column names - limited to 128 characters). Re-arranged and shortened column names to make output more compact and more...(read more)
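
    The name-generation idea itself is simple. Purely as an illustration – not Tibor's actual code – a simplified sketch that derives a candidate index name from the missing-index DMVs, assuming the objects live in the current database:

        -- Sketch: build a candidate index name from schema, table and key columns,
        -- capped at 128 characters (the sysname limit)
        SELECT LEFT('ix_' + OBJECT_SCHEMA_NAME(d.object_id)
                  + '_' + OBJECT_NAME(d.object_id)
                  + '_' + REPLACE(REPLACE(REPLACE(
                            ISNULL(d.equality_columns, ''), '[', ''), ']', ''), ', ', '_'),
                    128) AS suggested_index_name,
               d.statement AS table_name
        FROM sys.dm_db_missing_index_details AS d;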

    Read the article

  • Microsoft releases a "Windows Azure for Eclipse" plug-in to make it easier to deploy Java applications to its cloud

    Microsoft releases a Windows Azure plug-in for Eclipse, to make it easier to deploy Java applications to its cloud. Java developers can now use the Eclipse development environment to package and deploy Java applications on Microsoft's Windows Azure cloud platform. Microsoft has just unveiled the CTP (Community Technology Preview) version of the "Windows Azure for Eclipse" plugin. This plugin gives users a graphical interface for configuring applications and accessing them remotely for maintenance, along with features for schema validation and auto-completion for Azure configuration files...

    Read the article

< Previous Page | 161 162 163 164 165 166 167 168 169 170 171 172  | Next Page >