Search Results

Search found 3875 results on 155 pages for 'hibernate criteria'.


  • Linq query challenge - can this be done?

    - by vdh_ant
    My table structure is as follows:

        Person 1-M PersonAddress
        Person 1-M PersonPhone
        Person 1-M PersonEmail
        Person 1-M Contract
        Contract M-M Program
        Contract M-1 Organization

    At the end of this query I need a populated object graph where each person has their PersonAddresses, PersonPhones, PersonEmails and Contracts - and each Contract has its respective Programs. Now, I had the following query and I thought that it was working great, but it has a couple of problems:

        from people in ctx.People.Include("PersonAddress")
                                 .Include("PersonLandline")
                                 .Include("PersonMobile")
                                 .Include("PersonEmail")
                                 .Include("Contract")
                                 .Include("Contract.Program")
        where people.Contract.Any(
            contract => (param.OrganizationId == contract.OrganizationId)
                && contract.Program.Any(
                    contractProgram => (param.ProgramId == contractProgram.ProgramId)))
        select people;

    The problem is that it filters the people to the criteria, but not the Contracts or the Contracts' Programs. It brings back all Contracts that each person has, not just the ones that have an OrganizationId of x, and the same goes for each of those Contracts' Programs respectively. What I want is only the people that have at least one contract with an OrgId of x, where that contract has a Program with the Id of y - and for the object graph that is returned to contain only the contracts that match, and only the programs within those contracts that match. I kind of understand why it's not working, but I don't know how to change it so it does work. This is my attempt thus far:

        from people in ctx.People.Include("PersonAddress")
                                 .Include("PersonLandline")
                                 .Include("PersonMobile")
                                 .Include("PersonEmail")
                                 .Include("Contract")
                                 .Include("Contract.Program")
        let currentContracts =
            from contract in people.Contract
            where (param.OrganizationId == contract.OrganizationId)
            select contract
        let currentContractPrograms =
            from contractProgram in currentContracts
            let temp = from x in contractProgram.Program
                       where (param.ProgramId == contractProgram.ProgramId)
                       select x
            where temp.Any()
            select temp
        where currentContracts.Any() && currentContractPrograms.Any()
        select new Person
        {
            PersonId = people.PersonId,
            FirstName = people.FirstName,
            ...,
            MiddleName = people.MiddleName,
            Surname = people.Surname,
            ...,
            Gender = people.Gender,
            DateOfBirth = people.DateOfBirth,
            ...,
            Contract = currentContracts,
            ...
        }; // This doesn't work

    But this has several problems (where the Person type is an EF object):

        - I am left to do the mapping myself, and in this case there is quite a lot to map.
        - Whenever I try to map a list to a property (i.e. Scholarship = currentScholarships) it says it can't, because an IEnumerable is being cast to an EntityCollection.
        - Include doesn't work.

    Hence, how do I get this to work? Keep in mind that I am trying to do this as a compiled query, so I think that means anonymous types are out.
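    For reference, a commonly cited workaround for shaping filtered child collections is to project the entity together with its filtered children and let Entity Framework's relationship fix-up attach them. This is only a sketch under assumptions: it requires a tracked ObjectQuery (no NoTracking), it gives up the compiled-query requirement, and the names are simply reused from the question:

        // Project the person plus only the matching children; materializing both into
        // the same tracked ObjectContext lets relationship fix-up populate
        // people.Contract / contract.Program with just these filtered rows.
        var query =
            from people in ctx.People
            where people.Contract.Any(c => c.OrganizationId == param.OrganizationId
                && c.Program.Any(p => p.ProgramId == param.ProgramId))
            select new
            {
                Person = people,
                Contracts = from c in people.Contract
                            where c.OrganizationId == param.OrganizationId
                            select new
                            {
                                Contract = c,
                                Programs = c.Program.Where(p => p.ProgramId == param.ProgramId)
                            }
            };

        // Force execution, then hand back the fixed-up entities.
        List<Person> result = query.AsEnumerable().Select(x => x.Person).ToList();

    If the compiled query is non-negotiable, the same projection can target a small named DTO instead of an anonymous type, at the cost of the manual mapping the question is trying to avoid.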


  • SEO Help with Pages Indexed by Google

    - by Joe Majewski
    I'm working on optimizing my site for Google's search engine, and lately I've noticed that when doing a "site:www.joemajewski.com" query, I get results for pages that shouldn't be indexed at all.

    Let's take a look at this page, for example: http://www.joemajewski.com/wow/profile.php?id=3

    I created my own CMS, and this is simply a breakdown of user id #3's statistics, which I noticed is indexed by Google although it shouldn't be. I understand that it takes some time before Google's results accurately reflect my site's content, but this has been improperly indexed for nearly six months now.

    Here are the precautions that I have taken:

        - My robots.txt file has a line like this:

              Disallow: /wow/profile.php*

        - When running the URL through Google Webmaster Tools, it indicates that I did, indeed, correctly create the disallow rule. It did state, however, that a page that doesn't get crawled may still be displayed in the search results if it's being linked to. Thus, I took one more precaution: in the source code I included the following meta data:

              <meta name="robots" content="noindex,follow" />

    I am assuming that follow means to use the page when calculating PageRank, etc., and the noindex tells Google not to display the page in the search results.

    This page, profile.php, is used to take the $_GET['id'] and find the corresponding registered user. It displays a bit of information about that user, but is in no way relevant enough to warrant a display in the search results, so that is why I am trying to stop Google from indexing it.

    This is not the only page Google is indexing that I would like removed. I also have a WordPress blog, and there are many category pages, tag pages, and archive pages that I would like removed, and I am following the same procedures to attempt to remove them.

    Can someone explain how to get pages removed from Google's search results, and possibly some criteria that should help determine what types of pages I don't want indexed? In terms of my WordPress blog, the only pages that I truly want indexed are my articles; everything else I have tried to block, with little luck from Google. Can someone also explain why it's bad to have pages indexed that don't provide any new or relevant content, such as pages for WordPress tags or categories, which are clearly never going to receive traffic from Google? Thanks!


  • How can I provide values for non-grouped columns in NHibernate?

    - by ddc0660
    I have a criteria query:

        Session.CreateCriteria<Sell043Report>()
            .SetProjection(Projections.ProjectionList()
                .Add(LambdaProjection.GroupProperty<Sell043Report>(r => r.location))
                .Add(LambdaProjection.GroupProperty<Sell043Report>(r => r.agent))
                .Add(LambdaProjection.GroupProperty<Sell043Report>(r => r.cusip))
                .Add(LambdaProjection.GroupProperty<Sell043Report>(r => r.SettlementDate))
                .Add(LambdaProjection.GroupProperty<Sell043Report>(r => r.salePrice))
                .Add(LambdaProjection.GroupProperty<Sell043Report>(r => r.foreignFx))
                .Add(LambdaProjection.GroupProperty<Sell043Report>(r => r.batchNumber))
                .Add(LambdaProjection.GroupProperty<Sell043Report>(r => r.origSaleDate))
                .Add(LambdaProjection.GroupProperty<Sell043Report>(r => r.planName))
                .Add(LambdaProjection.GroupProperty<Sell043Report>(r => r.dateTimeAdded))
                .Add(LambdaProjection.Sum<Sell043Report>(r => r.shares))
                .Add(LambdaProjection.Sum<Sell043Report>(r => r.netMoney))
                .Add(LambdaProjection.Sum<Sell043Report>(r => r.grossMoney))
                .Add(LambdaProjection.Sum<Sell043Report>(r => r.taxWithheld))
                .Add(LambdaProjection.Sum<Sell043Report>(r => r.fees)))
            .List<Sell043Report>();

    that generates the following SQL:

        SELECT this_.location as y0_, this_.agent as y1_, this_.cusip as y2_,
               this_.SettlementDate as y3_, this_.salePrice as y4_, this_.foreignFx as y5_,
               this_.batchNumber as y6_, this_.origSaleDate as y7_, this_.planName as y8_,
               this_.dateTimeAdded as y9_, sum(this_.shares) as y10_, sum(this_.netMoney) as y11_,
               sum(this_.grossMoney) as y12_, sum(this_.taxWithheld) as y13_, sum(this_.fees) as y14_
        FROM MIS_IPS_Sell043Report this_
        GROUP BY this_.location, this_.agent, this_.cusip, this_.SettlementDate, this_.salePrice,
                 this_.foreignFx, this_.batchNumber, this_.origSaleDate, this_.planName, this_.dateTimeAdded

    However, the Sell043Report table has more columns than those listed in the SELECT statement, so I'm receiving this error when attempting to get a list of Sell043Reports:

        System.ArgumentException: The value "System.Object[]" is not of type "xyz.Sell043Report" and cannot be used in this generic collection.

    I suspect the problem is that I'm not selecting all of the columns of a Sell043Report, so it doesn't know how to map the result set to the object. I'm trying to achieve something like this:

        SELECT this_.location as y0_, this_.agent as y1_, this_.cusip as y2_,
               this_.SettlementDate as y3_, this_.salePrice as y4_, this_.foreignFx as y5_,
               this_.batchNumber as y6_, this_.origSaleDate as y7_, this_.planName as y8_,
               this_.dateTimeAdded as y9_, sum(this_.shares) as y10_, sum(this_.netMoney) as y11_,
               sum(this_.grossMoney) as y12_, sum(this_.taxWithheld) as y13_, sum(this_.fees) as y14_,
               '' as Address1, '' as Address2 -- etc
        FROM MIS_IPS_Sell043Report this_
        GROUP BY this_.location, this_.agent, this_.cusip, this_.SettlementDate, this_.salePrice,
                 this_.foreignFx, this_.batchNumber, this_.origSaleDate, this_.planName, this_.dateTimeAdded

    How can I do this using NHibernate?
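    One conventional way to hydrate a grouped projection back into the entity type is to alias each projection to the matching property name and attach an AliasToBean result transformer, leaving the unselected columns at their defaults. A minimal sketch using the stock NHibernate API (NHibernate.Criterion / NHibernate.Transform) rather than the LambdaProjection helpers above, showing only a few of the columns:

        var report = Session.CreateCriteria<Sell043Report>()
            .SetProjection(Projections.ProjectionList()
                // alias each grouped/aggregated column to the property it should fill
                .Add(Projections.Alias(Projections.GroupProperty("location"), "location"))
                .Add(Projections.Alias(Projections.GroupProperty("agent"), "agent"))
                // ... remaining GroupProperty columns aliased the same way ...
                .Add(Projections.Alias(Projections.Sum("shares"), "shares"))
                .Add(Projections.Alias(Projections.Sum("netMoney"), "netMoney")))
            // map the aliased values onto Sell043Report properties; Address1, Address2, etc.
            // simply stay at their default values instead of being selected as ''
            .SetResultTransformer(Transformers.AliasToBean<Sell043Report>())
            .List<Sell043Report>();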


  • [Reloaded] Error while sorting filtered data from a GridView

    - by Bogdan M
    Hello guys, I have an error I cannot solve on an ASP.NET website. One of its pages, Countries.aspx, has the following controls:

    a CheckBox called "CheckBoxNAME":

        <asp:CheckBox ID="CheckBoxNAME" runat="server" Text="Name" />

    a TextBox called "TextBoxNAME":

        <asp:TextBox ID="TextBoxNAME" runat="server" Width="100%" Wrap="False"></asp:TextBox>

    a SqlDataSource called "SqlDataSourceCOUNTRIES" that selects all records from a table called COUNTRIES with 3 columns - ID (Number, PK), NAME (Varchar2(1000)), and POPULATION (Number):

        <asp:SqlDataSource ID="SqlDataSourceCOUNTRIES" runat="server"
            ConnectionString="<%$ ConnectionStrings:myDB %>"
            ProviderName="<%$ ConnectionStrings:myDB.ProviderName %>"
            SelectCommand="SELECT COUNTRIES.ID, COUNTRIES.NAME, COUNTRIES.POPULATION FROM COUNTRIES ORDER BY COUNTRIES.NAME, COUNTRIES.ID">
        </asp:SqlDataSource>

    a GridView called "GridViewCOUNTRIES":

        <asp:GridView ID="GridViewCOUNTRIES" runat="server" AllowPaging="True" AllowSorting="True"
            AutoGenerateColumns="False" DataSourceID="SqlDataSourceCOUNTRIES" DataKeyNames="ID" DataMember="DefaultView">
            <Columns>
                <asp:CommandField ShowSelectButton="True" />
                <asp:BoundField DataField="ID" HeaderText="Id" SortExpression="ID" />
                <asp:BoundField DataField="NAME" HeaderText="Name" SortExpression="NAME" />
                <asp:BoundField DataField="POPULATION" HeaderText="Population" SortExpression="POPULATION" />
            </Columns>
        </asp:GridView>

    a Button called "ButtonFilter":

        <asp:Button ID="ButtonFilter" runat="server" Text="Filter" onclick="ButtonFilter_Click"/>

    This is the onclick event:

        protected void ButtonFilter_Click(object sender, EventArgs e)
        {
            Response.Redirect("Countries.aspx?" + (this.CheckBoxNAME.Checked
                ? string.Format("NAME={0}", this.TextBoxNAME.Text)
                : string.Empty));
        }

    Also, this is the main onload event of the page:

        protected void Page_Load(object sender, EventArgs e)
        {
            if (Page.IsPostBack == false)
            {
                if (Request.QueryString.Count != 0)
                {
                    Dictionary<string, string> parameters = new Dictionary<string, string>();
                    string commandTextFormat = string.Empty;
                    if (Request.QueryString["NAME"] != null)
                    {
                        if (commandTextFormat != string.Empty && commandTextFormat.EndsWith("AND") == false)
                        {
                            commandTextFormat += "AND";
                        }
                        commandTextFormat += " (UPPER(COUNTRIES.NAME) LIKE '%' || :NAME || '%') ";
                        parameters.Add("NAME", Request.QueryString["NAME"].ToString());
                    }
                    this.SqlDataSourceCOUNTRIES.SelectCommand = string.Format(
                        "SELECT COUNTRIES.ID, COUNTRIES.NAME, COUNTRIES.POPULATION FROM COUNTRIES WHERE {0} ORDER BY COUNTRIES.NAME, COUNTRIES.ID",
                        commandTextFormat);
                    foreach (KeyValuePair<string, string> parameter in parameters)
                    {
                        this.SqlDataSourceCOUNTRIES.SelectParameters.Add(parameter.Key, parameter.Value.ToUpper());
                    }
                }
            }
        }

    Basically, the page displays all the records of the COUNTRIES table in GridViewCOUNTRIES. The scenario is the following:

        - the user checks the CheckBox;
        - the user types a value in the TextBox (let's say "ch");
        - the user presses the Button;
        - the page loads, displaying only the records that match the filter criteria (in this case, all the countries whose names contain "Ch");
        - the user clicks on the header of the "Name" column in order to sort the data in the GridView.

    Then I get the following error: ORA-01036: illegal variable name/number.

        Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code.
        Exception Details: System.Data.OracleClient.OracleException: ORA-01036: illegal variable name/number
        Source Error: An unhandled exception was generated during the execution of the current web request. Information regarding the origin and location of the exception can be identified using the exception stack trace below.

    Any help is greatly appreciated, thanks.

    PS: I'm using ASP.NET 3.5, under Visual Studio 2008, with an OracleXE database.
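    A plausible explanation (an assumption, not verified against this exact page): the SqlDataSource carries the rewritten SelectCommand containing the :NAME bind variable into the sort postback, but the programmatically added SelectParameters are not re-applied, so Oracle sees a bind variable with no value and raises ORA-01036. If that is the cause, rebuilding both the command and its parameters on every request, instead of only when Page.IsPostBack is false, would be one simple fix - roughly:

        protected void Page_Load(object sender, EventArgs e)
        {
            // Re-apply the filter on every request (not just the first GET), so the
            // :NAME bind variable always has a matching parameter when the GridView
            // re-queries the data source during a sort postback.
            if (Request.QueryString["NAME"] != null)
            {
                this.SqlDataSourceCOUNTRIES.SelectCommand =
                    "SELECT COUNTRIES.ID, COUNTRIES.NAME, COUNTRIES.POPULATION FROM COUNTRIES " +
                    "WHERE (UPPER(COUNTRIES.NAME) LIKE '%' || :NAME || '%') " +
                    "ORDER BY COUNTRIES.NAME, COUNTRIES.ID";
                this.SqlDataSourceCOUNTRIES.SelectParameters.Clear();
                this.SqlDataSourceCOUNTRIES.SelectParameters.Add("NAME", Request.QueryString["NAME"].ToUpper());
            }
        }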


  • Is a many-to-many relationship with extra fields the right tool for my job?

    - by whichhand
    I previously had a go at asking a more specific version of this question, but had trouble articulating what my question was. On reflection that made me doubt whether my chosen solution was correct for the problem, so this time I will explain the problem and ask a) if I am on the right track and b) if there is a way around my current brick wall.

    I am currently building a web interface to enable an existing database to be interrogated by (a small number of) users. Sticking with the analogy from the docs, I have models that look something like this:

        class Musician(models.Model):
            first_name = models.CharField(max_length=50)
            last_name = models.CharField(max_length=50)
            dob = models.DateField()

        class Album(models.Model):
            artist = models.ForeignKey(Musician)
            name = models.CharField(max_length=100)

        class Instrument(models.Model):
            artist = models.ForeignKey(Musician)
            name = models.CharField(max_length=100)

    So I have one central table (Musician) and several tables of associated data that are related by either ForeignKey or OneToOneFields. Users interact with the database by creating filtering criteria to select a subset of Musicians based on the data on the main or related tables. Likewise, the users can then select which piece of data is used to rank the results that are presented to them. The results are then viewed initially as a two-dimensional table with a single row per Musician, with selected data fields (or aggregates) in each column.

    To give you some idea of scale, the database has ~5,000 Musicians with around 20 fields of related data. Up to here is fine and I have a working implementation.

    However, it is important that a given user can upload their own annotation data sets (more than one) and then filter and order on these in the same way they can with the existing data. The way I tried to do this was to add the models:

        class UserDataSets(models.Model):
            user = models.ForeignKey(User)
            name = models.CharField(max_length=100)
            description = models.CharField(max_length=64)
            results = models.ManyToManyField(Musician, through='UserData')

        class UserData(models.Model):
            artist = models.ForeignKey(Musician)
            dataset = models.ForeignKey(UserDataSets)
            score = models.IntegerField()

            class Meta:
                unique_together = (("artist", "dataset"),)

    I have a simple upload mechanism enabling users to upload a data set file that consists of a 1 to 1 relationship between a Musician and their "score". Within a given user dataset each artist will be unique, but different datasets are independent from each other and will often contain entries for the same musician.

    This worked fine for displaying the data; starting from a given artist I can do something like this:

        artist = Musician.objects.get(pk=1)
        dataset = UserDataSets.objects.get(pk=5)
        print artist.userdata_set.get(dataset=dataset.pk)

    However, this approach fell over when I came to implement the filtering and ordering of a query set of musicians based on the data contained in a single user data set. For example, I could easily order the query set based on all of the data in the UserData table like this:

        artists = Musician.objects.all().order_by('userdata__score')

    But that does not help me order by the results of a single given user dataset. Likewise, I need to be able to filter the query set based on the "scores" from different user data sets (e.g. find all musicians with a score > 5 in dataset1 and < 2 in dataset2). Is there a way of doing this, or am I going about the whole thing wrong?


  • Accessing parent-level controls from inside a ComboBox's child controls

    - by eponymous23
    I have XAML similar to this:

        <ListBox ItemsSource="{Binding SearchCriteria, Source={StaticResource model}}"
                 SelectionChanged="cboSearchCriterionType_SelectionChanged">
            <ListBox.ItemTemplate>
                <DataTemplate>
                    <StackPanel Name="spCriterion" Orientation="Horizontal" Height="20">
                        <ComboBox Name="cboSearchCriterionType" Width="120"
                                  SelectionChanged="cboSearchCriterionType_SelectionChanged">
                            <ComboBox.Items>
                                <ComboBoxItem IsSelected="True" Content="Anagram Match" />
                                <ComboBoxItem Content="Pattern Match" />
                                <ComboBoxItem Content="Subanagram Match" />
                                <ComboBoxItem Content="Length" />
                                <ComboBoxItem Content="Number of Vowels" />
                                <ComboBoxItem Content="Number of Anagrams" />
                                <ComboBoxItem Content="Number of Unique Letters" />
                            </ComboBox.Items>
                        </ComboBox>
                        <TextBox x:Name="SearchSpec" Text="{Binding SearchSpec}" />
                        <TextBox x:Name="MinValue" Text="{Binding MinValue}" Visibility="Collapsed" />
                        <TextBox x:Name="MaxValue" Text="{Binding MaxValue}" Visibility="Collapsed" />
                    </StackPanel>
                </DataTemplate>
            </ListBox.ItemTemplate>

    As you can tell from the markup, I have a listbox that is bound to a collection of SearchCriterion objects (collectively contained in a SearchCriteria object). The idea is that the user can add/remove criterion items from the criteria; each criterion is represented by a listbox item. Inside the listbox item I have a combobox and three textboxes.

    What I'm trying to do is change the visibility of the TextBox controls depending on the item that is selected in the ComboBox. For example, if the user selects "Pattern Match" then I want to show only the first textbox and hide the latter two; conversely, if the user selects "Length" or any of the "Number of..." items, then I want to hide the first TextBox and show the latter two.

    What is the best way to achieve this? I was hoping to do something simple in the SelectionChanged event handler for the listbox, but the textbox controls are presumably out of the SelectionChanged event's scope. Do I have to programmatically traverse the control hierarchy and find the controls?
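    One way to do this from code-behind (a sketch only, assuming the control names above; DataTriggers keyed on the selected item would be the more declarative WPF alternative) is to handle SelectionChanged on the ComboBox itself and walk its parent StackPanel, which is the same template instance that contains the three TextBoxes:

        // Assumes: using System.Linq; using System.Windows; using System.Windows.Controls;
        private void cboSearchCriterionType_SelectionChanged(object sender, SelectionChangedEventArgs e)
        {
            ComboBox combo = sender as ComboBox;
            StackPanel panel = combo == null ? null : combo.Parent as StackPanel; // "spCriterion"
            if (panel == null) return; // the same handler is also wired to the ListBox; ignore that case

            ComboBoxItem item = combo.SelectedItem as ComboBoxItem;
            string selected = item == null ? null : item.Content as string;
            bool isTextCriterion = selected == "Anagram Match"
                                || selected == "Pattern Match"
                                || selected == "Subanagram Match";

            // The sibling TextBoxes live in the same DataTemplate instance, so they are
            // reachable through the panel rather than the page-level namescope.
            foreach (TextBox box in panel.Children.OfType<TextBox>())
            {
                if (box.Name == "SearchSpec")
                    box.Visibility = isTextCriterion ? Visibility.Visible : Visibility.Collapsed;
                else // MinValue / MaxValue
                    box.Visibility = isTextCriterion ? Visibility.Collapsed : Visibility.Visible;
            }
        }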


  • Between/Timerange LINQ

    - by dezza
    My intention here is to select all entries (Bookings) between "begin" (begin_prefix) and "end" (end_prefix). BUT! The important thing is: if I have a booking at 07:25-10:00 and you query for 09:00-10:00, it should still show the booking, because it reserves the room until 10:00 no matter what. So a 07:25-10:00 booking means a query for 09:00-10:00 still returns a list of bookings within 09:00-10:00 (which means the 07:25-10:00 booking is included).

        public static List<booking> Today(DateTime begin, DateTime end)
        {
            try
            {
                IFormatProvider Culturez = new CultureInfo(ConfigurationManager.AppSettings["locale"].ToString(), true);
                DateTime begin_prefix = DateTime.ParseExact(begin.ToString(), "dd-MM-yyyy HH:mm:ss", Culturez);
                DateTime end_prefix = DateTime.ParseExact(end.ToString(), "dd-MM-yyyy HH:mm:ss", Culturez);
                dbDataContext db = new dbDataContext();

                // gives bookings BEFORE begin_prefix (why?)
                IQueryable<booking> bQ = from b in db.bookings
                                         where begin_prefix >= b.Starts
                                            && b.Ends <= end_prefix
                                            && b.Ends > b.Starts
                                            && b.pointsbookings.Count > 0
                                         select b;

                List<booking> bL = bQ.ToList();
                return bL;
            }
            catch (Exception)
            {
                throw;
            }
        }

    I've been trying to get this right for some time now. It seems that every time I correct it, a new overlap or a selection outside the two begin/end dates appears. :(

    UPDATE - CRITERIA and SOURCE: Bookings have to be WITHIN "begin_prefix" and "end_prefix", or on the exact same times. Currently the above code gives me bookings BEFORE the begin_prefix date, which is not intended! We're in 2011, and I get bookings from 2010 as well!

    NEW UPDATE: This is what I have:

        SEARCH.START >= BOOKING.START
        BOOKING.END  <= SEARCH.END

    The problem comes up with a BOOKING entry of 10:00 (Start) - 14:00 (End). According to the above, a search starting at 08:59 gives 08:59 >= 10:00 (SEARCH.START >= BOOKING.START), which is false, so it will never include it. But it should, since this is the same room and the seats are booked individually!
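    For "any booking that overlaps the requested window", the usual predicate is start-before-the-window-ends combined with end-after-the-window-starts, rather than requiring the booking to lie entirely inside the window. A sketch against the same (hypothetical) dbDataContext/booking names used above:

        // A booking intersects [begin_prefix, end_prefix] when it starts before the
        // window ends and ends after the window starts, so a 07:25-10:00 booking is
        // returned for a 09:00-10:00 query, while anything that finished before
        // begin_prefix (e.g. last year's bookings) is excluded.
        IQueryable<booking> overlapping = from b in db.bookings
                                          where b.Starts < end_prefix
                                             && b.Ends > begin_prefix
                                             && b.Ends > b.Starts
                                             && b.pointsbookings.Count > 0
                                          select b;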


  • Can XPath concatenate two nodeset values? (for use in XForms)

    - by iHeartGreek
    Hi! I am wanting to concatenate two nodeset values using XPath in XForms. I know that XPath has a concat(string, string) function, but how would I go about concatenating two nodeset values?

    BEGIN EDIT: I tried the concat function. I tried this, and variations of it, to make it work, but it doesn't:

        <xf:value ref="concat(instance('param_choices')/choice/root, .)"/>

    END EDIT

    Below is a simplified code example of what I am trying to achieve.

    XForms model:

        <xf:instance id="param_choices" xmlns="">
            <choices>
                <root label="Param Choices">/param</root>
                <choice label="Name">/@AAA</choice>
                <choice label="Value">/@BBB</choice>
            </choices>
        </xf:instance>

    XForms UI code that I currently have:

        <xf:select ref="instance('criteria_data')/criteria/criterion" appearance="full">
            <xf:label>Param choices:</xf:label>
            <br/>
            <xf:itemset nodeset="instance('param_choices')/choice">
                <xf:label ref="@label"></xf:label>
                <xf:value ref="."></xf:value>
            </xf:itemset>
        </xf:select>

    (If the user selects the "Name" checkbox...) the XML output is:

        <criterion>/@BBB</criterion>

    However! I want to combine the root nodeset value with the current choice nodeset value. Essentially:

        <xf:value ref="(instance('definition_choices')/choice/root) + ."/>

    to achieve the following XML output:

        <criterion>/param/@BBB</criterion>

    Any suggestions on how to do this? (I am fairly new to XPath and XForms.)

    P.S. What I am asking made sense to me when I typed it out, but if you have trouble figuring out what I'm asking, just let me know. Thanks!


  • Select latest group by in nhibernate

    - by Kendrick
    I have Canine and CanineHandler objects in my application. The CanineHandler object has a PersonID (which references a completely different database), an EffectiveDate (which specifies when a handler started with the canine), and a FK reference to the Canine (CanineID).

    Given a specific PersonID, I want to find all canines they're currently responsible for. The (simplified) query I'd use in SQL would be:

        SELECT Canine.*
        FROM Canine
        INNER JOIN CanineHandler ON (CanineHandler.CanineID = Canine.CanineID)
        INNER JOIN (SELECT CanineID, MAX(EffectiveDate) MaxEffectiveDate
                    FROM CanineHandler
                    GROUP BY CanineID) AS CurrentHandler
            ON (CurrentHandler.CanineID = CanineHandler.CanineID
                AND CurrentHandler.MaxEffectiveDate = CanineHandler.EffectiveDate)
        WHERE CanineHandler.HandlerPersonID = @PersonID

    Edit: added mapping files below:

        <class name="CanineHandler" table="CanineHandler" schema="dbo">
            <id name="CanineHandlerID" type="Int32">
                <generator class="identity" />
            </id>
            <property name="EffectiveDate" type="DateTime" precision="16" not-null="true" />
            <property name="HandlerPersonID" type="Int64" precision="19" not-null="true" />
            <many-to-one name="Canine" class="Canine" column="CanineID" not-null="true" access="field.camelcase-underscore" />
        </class>

        <class name="Canine" table="Canine">
            <id name="CanineID" type="Int32">
                <generator class="identity" />
            </id>
            <property name="Name" type="String" length="64" not-null="true" />
            ...
            <set name="CanineHandlers" table="CanineHandler" inverse="true" order-by="EffectiveDate desc"
                 cascade="save-update" access="field.camelcase-underscore">
                <key column="CanineID" />
                <one-to-many class="CanineHandler" />
            </set>
            <property name="IsDeleted" type="Boolean" not-null="true" />
        </class>

    I haven't tried yet, but I'm guessing I could do this in HQL. I haven't had to write anything in HQL yet, so I'll have to tackle that eventually anyway, but my question is whether/how I can do this sub-query with the criterion/subqueries objects. I got as far as creating the following detached criteria:

        DetachedCriteria effectiveHandlers = DetachedCriteria.For<Canine>()
            .SetProjection(Projections.ProjectionList()
                .Add(Projections.Max("EffectiveDate"), "MaxEffectiveDate")
                .Add(Projections.GroupProperty("CanineID"), "handledCanineID")
            );

    but I can't figure out how to do the inner join. If I do this:

        Session.CreateCriteria<Canine>()
            .CreateCriteria("CanineHandler", "handler", NHibernate.SqlCommand.JoinType.InnerJoin)
            .List<Canine>();

    I get an error "could not resolve property: CanineHandler of: OPS.CanineApp.Model.Canine". Obviously I'm missing something(s), but from the documentation I got the impression that this should return a list of Canines that have handlers (possibly with duplicates). Until I can make this work, adding the subquery isn't going to work...

    I've found similar questions, such as http://stackoverflow.com/questions/747382/only-get-latest-results-using-nhibernate, but none of the answers really seem to apply to the kind of direct result I'm looking for. Any help or suggestion is greatly appreciated.
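    Since the criteria version is proving awkward, one alternative (a rough sketch only, written from the mappings above and not tested) is to express the "latest handler per canine" rule as a correlated HQL subquery and let the outer query return the canines directly:

        // personId is assumed to be the long HandlerPersonID value being searched for.
        IList<Canine> canines = Session.CreateQuery(
                @"select ch.Canine
                  from CanineHandler ch
                  where ch.HandlerPersonID = :personId
                    and ch.EffectiveDate = (select max(ch2.EffectiveDate)
                                            from CanineHandler ch2
                                            where ch2.Canine = ch.Canine)")
            .SetParameter("personId", personId)
            .List<Canine>();

    The same shape can also be built with DetachedCriteria plus Subqueries, but the correlation between the inner and outer CanineID is fiddlier to express that way.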


  • Complex multiple join query across 3 tables

    - by Keir Simmons
    I have 3 tables:

        shops          PRIMARY KEY cid, zbid
        shop_items     PRIMARY KEY id
        shop_inventory PRIMARY KEY id

    shops a is related to shop_items b by the following: a.cid=b.cid AND a.zbid=b.szbid. shops is not directly related to shop_inventory. shop_items b is related to shop_inventory c by the following: b.cid=c.cid AND b.id=c.iid.

    Now, I would like to run a query which returns a.* (all columns from shops). That would be:

        SELECT a.*
        FROM shops a
        WHERE a.cid=1 AND a.zbid!=0

    Note that the WHERE clause is necessary. Next, I want to return the number of items in each shop:

        SELECT a.*, COUNT(b.id) items
        FROM shops a
        LEFT JOIN shop_items b ON b.cid=a.cid AND b.szbid=a.zbid
        WHERE a.cid=1
        GROUP BY b.szbid, b.cid

    As you can see, I have added a GROUP BY clause for this to work. Next, I want to return the average price of each item in the shop. This isn't too hard:

        SELECT a.*, COUNT(b.id) items, AVG(COALESCE(b.price,0)) average_price
        FROM shops a
        LEFT JOIN shop_items b ON b.cid=a.cid AND b.szbid=a.zbid
        WHERE a.cid=1
        GROUP BY b.szbid, b.cid

    My next criterion is where it gets complicated. I also want to return the unique buyers for each shop. This can be done by querying shop_inventory c and getting COUNT(DISTINCT c.zbid). Now remember how these tables are related: this should only be done for the rows in c which relate to an item in b which is owned by the respective shop, a. I tried the following:

        SELECT a.*, COUNT(b.id) items, AVG(COALESCE(b.price,0)) average_price, COUNT(DISTINCT c.zbid)
        FROM shops a
        LEFT JOIN shop_items b ON b.cid=a.cid AND b.szbid=a.zbid
        LEFT JOIN shop_inventory c ON c.cid=b.cid AND c.iid=b.id
        WHERE a.cid=1
        GROUP BY b.szbid, b.cid

    However, this did not work, as it messed up the items value. What is the proper way to achieve this result?

    I also want to be able to return the total number of purchases made in each shop. This would be done by looking at shop_inventory c and adding up the c.quantity value for each shop. How would I add that in as well?


  • Asp.Net Scraping Grid Pages

    - by SH
    I need code in C#. Look, I am trying to post to the search.aspx page, which contains an ASP.NET grid. When the grid is rendered it loads the very first page of results on the screen, and then there are a number of pages listed in the grid header. I scrape the first page, and now I want to move on to the next page. All this is being done using the following code:

        HttpWebRequest myRequest = (HttpWebRequest)WebRequest.Create("http://pubrec3.hillsclerk.com/oncore/search.aspx?" + param);
        myRequest.Method = "GET";
        myRequest.KeepAlive = false;
        HttpWebResponse webresponse;
        try
        {
            webresponse = (HttpWebResponse)myRequest.GetResponse();
            Encoding enc = System.Text.Encoding.GetEncoding(1252);
            StreamReader loResponseStream = new StreamReader(webresponse.GetResponseStream(), enc);
            string r = loResponseStream.ReadToEnd();
            loResponseStream.Close();
            webresponse.Close();
            //if (GetRecordCount(r))
            ExtractResultTable(bd, ed, r);
        }
        catch (Exception ex)
        {
        }

    The above code grabs the first page, and now I have to move on to the next page. This is the link which produces a grid with 3 pages:

        http://pubrec3.hillsclerk.com/oncore/search.aspx?bd=01/01/2008&ed=12/31/2008&bt=O&lb=1000000&ub=1000000000&d=5/6/2010&pt=-1&dt=D,%20MTG&st=consideration

    Using the above code I need to load the 2nd page with the same search criteria, but with the records found on the 2nd page, and so on. I know there is a trick to navigate through the grid pages: you can pass __EVENTTARGET and __EVENTARGUMENT in the query string to navigate through the grid, but it did not work on this page, and it does not work on this website. I am desperately searching for a way to cope with this website; I really want this done. I do not want any code, but a way to navigate through the grid using the query string, if that works. Otherwise please be specific to the problem.
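    For what it's worth, GridView paging is normally driven by a form POST rather than the query string: the hidden __VIEWSTATE/__EVENTVALIDATION fields have to be parsed out of the page just downloaded and posted back together with __EVENTTARGET (the grid's client ID) and __EVENTARGUMENT (e.g. "Page$2"). A rough, untested sketch of that pattern; the field names follow the usual WebForms conventions, while the grid's client ID and the naive regex are assumptions, not values taken from this site:

        // Assumes: using System.IO; using System.Net; using System.Text;
        //          using System.Text.RegularExpressions; using System.Web;
        static string GetHiddenField(string html, string name)
        {
            // Naive extraction of <input ... id="name" value="..."> from the previous response.
            Match m = Regex.Match(html, "id=\"" + name + "\" value=\"(?<v>[^\"]*)\"");
            return m.Success ? m.Groups["v"].Value : string.Empty;
        }

        static string PostGridPage(string url, string previousHtml, string gridClientId, int page)
        {
            string body =
                "__EVENTTARGET=" + HttpUtility.UrlEncode(gridClientId) +
                "&__EVENTARGUMENT=" + HttpUtility.UrlEncode("Page$" + page) +
                "&__VIEWSTATE=" + HttpUtility.UrlEncode(GetHiddenField(previousHtml, "__VIEWSTATE")) +
                "&__EVENTVALIDATION=" + HttpUtility.UrlEncode(GetHiddenField(previousHtml, "__EVENTVALIDATION"));

            HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
            request.Method = "POST";
            request.ContentType = "application/x-www-form-urlencoded";
            using (StreamWriter writer = new StreamWriter(request.GetRequestStream()))
                writer.Write(body);

            using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
            using (StreamReader reader = new StreamReader(response.GetResponseStream(), Encoding.GetEncoding(1252)))
                return reader.ReadToEnd();
        }

    If the site also ties the search results to a server-side session, sharing a CookieContainer between the initial GET and the subsequent POSTs is usually needed as well.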


  • How to build this encryption system that allows multiple users/objects.

    - by Patrick
    Hello! I am trying to figure out how to create an optimal solution for my project. I made this simple picture in Photoshop to try to illustrate the problem and how I want it to work (if possible).

    Illustrative image

    I'll also try to explain it based on the picture. First off, we have a couple of objects on the left. These objects all get encrypted with their own encryption key (EKey in the picture) and are then stored in the database. On the other side we have different users placed into roles (one user can be in many roles), and the roles are associated with different objects, so one person only has access to the objects that their roles provide. For instance, Role A might have access to Objects A and B, Role B only to Object C, and Role C to all objects. Nothing strange in that, right? Different roles have access to different objects.

    Now to the problem part. Each user has to log in with his/her username/password and then gets access to the objects that his/her roles provide. All the objects are encrypted, so the user needs to get a decryption key somehow. I don't want to store the encryption key as a text string on the server. It should be, if possible, decrypted using the user's password (along with the role) or similar. That way you have to be a user on the server in order to decrypt an object and work with it.

    I was thinking about making a public/private key encryption system, but I am kind of stuck on how to give the different users the decryption key to the objects, since I need to be able to move users to and from roles, add new users, add new roles, and create/delete objects. There will be one administrator who then adds some data to allow the users in a role to get the decryption key to decrypt an object.

    Nothing is static, and I am trying to get a picture of how this can be built, or whether there is a far better solution. The only criteria are:

        - Encrypted objects.
        - Decryption keys should not be stored as text.
        - Different users have access to different objects.
        - Does NOT have to have roles.
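    One conventional shape for this kind of requirement is envelope (key-wrapping) encryption: each object keeps its own random EKey; that EKey is stored once per role, encrypted with a role key; and each member's copy of the role key is encrypted with a key derived from that user's password, so no usable key sits on the server as plain text. Granting a role access to an object then means wrapping the object's EKey with the role key, and adding a user to a role means wrapping the role key with that user's derived key. Below is a compressed sketch of just the derive/wrap primitives, using standard .NET crypto and illustrative names - an assumption about one possible design, not a vetted scheme:

        using System.IO;
        using System.Security.Cryptography;

        static class KeyWrapping
        {
            // Derive a per-user key from the login password; the random salt is stored
            // alongside the user record (the salt is not secret).
            public static byte[] DeriveUserKey(string password, byte[] salt)
            {
                using (var kdf = new Rfc2898DeriveBytes(password, salt, 10000))
                    return kdf.GetBytes(32); // 256-bit AES key
            }

            // Wrap (encrypt) one key with another: a role key with a user key, or an
            // object's EKey with a role key. The random IV is prepended to the output.
            public static byte[] Wrap(byte[] keyToWrap, byte[] wrappingKey)
            {
                using (Aes aes = Aes.Create())
                {
                    aes.Key = wrappingKey;
                    aes.GenerateIV();
                    using (var ms = new MemoryStream())
                    {
                        ms.Write(aes.IV, 0, aes.IV.Length);
                        using (var cs = new CryptoStream(ms, aes.CreateEncryptor(), CryptoStreamMode.Write))
                            cs.Write(keyToWrap, 0, keyToWrap.Length);
                        return ms.ToArray(); // IV + ciphertext
                    }
                }
            }

            // Unwrap is the mirror image: read the IV from the front, then decrypt the rest.
        }

    Moving a user out of a role then only means deleting that user's wrapped copy of the role key (and, for strictness, rotating the role key and re-wrapping the affected EKeys); the objects themselves never have to be re-encrypted when membership changes.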


  • Improved way to build nested array of unique values in javascript

    - by dualmon
    The setup: I have a nested HTML table structure that displays hierarchical data, and the individual rows can be hidden or shown by the user. Each row has a DOM id that is comprised of the level number plus the primary key for the record type on that level. I have to have both, because each level is from a different database table, so the primary key alone is not unique in the DOM.

    Example: id="level-1-row-216"

    I am storing the levels and rows of the visible elements in a cookie, so that when the page reloads the same rows the user had open can be shown automatically. I don't store the full map of DOM ids, because I'm concerned about it getting too verbose, and I want to keep my cookie under 4Kb. So I convert the DOM ids to a compact JSON object like this, with one property for each level, and a unique array of primary keys under each level:

        { 1:[231,432,7656], 2:[234,121], 3:[234,2], 4:[222,423], 5:[222] }

    With this structure stored in a cookie, I feed it to my show function and restore the user's previous disclosure state when the page loads.

    The area for improvement: I'm looking for a better option for reducing my map of id selectors down to this compact format. Here is my function:

        function getVisibleIds(){
            // example dom id: level-1-row-216-sub
            var ids = $("tr#[id^=level]:visible").map(function() {
                return this.id;
            });
            var levels = {};
            for(var i in ids ) {
                var id = ids[i];
                if (typeof id == 'string'){
                    if (id.match(/^level/)){
                        // here we extract the number for level and row
                        var level = id.replace(/.*(level-)(\d*)(.*)/, '$2');
                        var row = id.replace(/.*(row-)(\d*)(.*)/, '$2');

                        // *** Improvement here? ***
                        // This works, but it seems klugy. In PHP it's one line (see below):
                        if(levels.hasOwnProperty(level)){
                            if($.inArray(parseInt(row, 10), levels[level]) == -1){
                                levels[level].push(parseInt(row, 10));
                            }
                        } else {
                            levels[level] = [parseInt(row, 10)];
                        }
                    }
                }
            }
            return levels;
        }

    If I were doing it in PHP, I'd build the compact array like this, but I can't figure it out in JavaScript:

        foreach($ids as $id) {
            if (/* the criteria */){
                $level = /* extract it from $id */;
                $row = /* extract it from $id */;
                $levels[$level][$row];
            }
        }


  • while running mvn jetty:run showing the following error ..

    - by munna
    C:\source\myproject> mvn jetty:run

        [INFO] Scanning for projects...
        [INFO] ------------------------------------------------------------------------
        [INFO] Building AppFuse Spring MVC Application
        [INFO]    task-segment: [jetty:run]
        [INFO] ------------------------------------------------------------------------
        [INFO] Preparing jetty:run
        [WARNING] POM for 'xfire:xfire-jsr181-api:pom:1.0-M1:compile' is invalid. Its dependencies (if any) will NOT be available to the current build.
        [INFO] [warpath:add-classes {execution: default}]
        [INFO] [aspectj:compile {execution: default}]
        [INFO] [native2ascii:native2ascii {execution: native2ascii-utf8}]
        [INFO] [native2ascii:native2ascii {execution: native2ascii-8859_1}]
        [INFO] [resources:resources {execution: default-resources}]
        [WARNING] File encoding has not been set, using platform encoding Cp1252, i.e. build is platform dependent!
        [WARNING] Using platform encoding (Cp1252 actually) to copy filtered resources, i.e. build is platform dependent!
        [INFO] Copying 12 resources
        [INFO] Copying 1 resource
        [INFO] Copying 26 resources
        [INFO] Copying 26 resources
        [INFO] [compiler:compile {execution: default-compile}]
        [INFO] Nothing to compile - all classes are up to date
        [INFO] [resources:testResources {execution: default-testResources}]
        [WARNING] File encoding has not been set, using platform encoding Cp1252, i.e. build is platform dependent!
        [WARNING] Using platform encoding (Cp1252 actually) to copy filtered resources, i.e. build is platform dependent!
        [INFO] Copying 4 resources
        [INFO] Copying 9 resources
        [INFO] Preparing hibernate3:hbm2ddl
        [WARNING] Removing: hbm2ddl from forked lifecycle, to prevent recursive invocation.
        [INFO] [warpath:add-classes {execution: default}]
        [INFO] [aspectj:compile {execution: default}]
        [INFO] [native2ascii:native2ascii {execution: native2ascii-utf8}]
        [INFO] [native2ascii:native2ascii {execution: native2ascii-8859_1}]
        [INFO] [resources:resources {execution: default-resources}]
        [WARNING] File encoding has not been set, using platform encoding Cp1252, i.e. build is platform dependent!
        [WARNING] Using platform encoding (Cp1252 actually) to copy filtered resources, i.e. build is platform dependent!
        [INFO] Copying 12 resources
        [INFO] Copying 1 resource
        [INFO] Copying 26 resources
        [INFO] Copying 26 resources
        [INFO] Copying 26 resources
        [INFO] Copying 26 resources
        [INFO] [hibernate3:hbm2ddl {execution: default}]
        [INFO] Configuration XML file loaded: file:/C:/source/myproject/src/main/resources/hibernate.cfg.xml
        [INFO] Configuration XML file loaded: file:/C:/source/myproject/src/main/resources/hibernate.cfg.xml
        [INFO] Configuration Properties file loaded: C:\source\myproject\target\classes\jdbc.properties

        alter table user_role drop foreign key FK143BF46A4FD90D75;
        alter table user_role drop foreign key FK143BF46AF503D155;
        drop table if exists app_user;
        drop table if exists role;
        drop table if exists user_role;
        create table app_user (id bigint not null auto_increment, account_expired bit not null, account_locked bit not null, address varchar(150), city varchar(50) not null, country varchar(100), postal_code varchar(15) not null, province varchar(100), credentials_expired bit not null, email varchar(255) not null unique, account_enabled bit, first_name varchar(50) not null, last_name varchar(50) not null, password varchar(255) not null, password_hint varchar(255), phone_number varchar(255), username varchar(50) not null unique, version integer, website varchar(255), primary key (id)) ENGINE=InnoDB;
        create table role (id bigint not null auto_increment, description varchar(64), name varchar(20), primary key (id)) ENGINE=InnoDB;
        create table user_role (user_id bigint not null, role_id bigint not null, primary key (user_id, role_id)) ENGINE=InnoDB;
        alter table user_role add index FK143BF46A4FD90D75 (role_id), add constraint FK143BF46A4FD90D75 foreign key (role_id) references role (id);
        alter table user_role add index FK143BF46AF503D155 (user_id), add constraint FK143BF46AF503D155 foreign key (user_id) references app_user (id);

        [INFO] [compiler:testCompile {execution: default-testCompile}]
        [INFO] Nothing to compile - all classes are up to date
        [INFO] [dbunit:operation {execution: test-compile}]
        [INFO] [jetty:run {execution: default-cli}]
        [INFO] Configuring Jetty for project: AppFuse Spring MVC Application
        [INFO] Webapp source directory = C:\source\myproject\src\main\webapp
        [INFO] web.xml file = C:\source\myproject\src\main\webapp\WEB-INF\web.xml
        [INFO] Classes = C:\source\myproject\target\classes
        [INFO] Adding extra scan target from pattern: C:\source\myproject\src\main\webapp\WEB-INF\applicationContext-validation.xml
        [INFO] Adding extra scan target from pattern: C:\source\myproject\src\main\webapp\WEB-INF\applicationContext.xml
        [INFO] Adding extra scan target from pattern: C:\source\myproject\src\main\webapp\WEB-INF\dispatcher-servlet.xml
        [INFO] Adding extra scan target from pattern: C:\source\myproject\src\main\webapp\WEB-INF\menu-config.xml
        [INFO] Adding extra scan target from pattern: C:\source\myproject\src\main\webapp\WEB-INF\urlrewrite.xml
        [INFO] Adding extra scan target from pattern: C:\source\myproject\src\main\webapp\WEB-INF\validation.xml
        [INFO] Adding extra scan target from pattern: C:\source\myproject\src\main\webapp\WEB-INF\validator-rules-custom.xml
        [INFO] Adding extra scan target from pattern: C:\source\myproject\src\main\webapp\WEB-INF\validator-rules.xml
        [INFO] Adding extra scan target from pattern: C:\source\myproject\src\main\webapp\WEB-INF\web.xml
        2010-06-02 15:13:28.921::INFO:  Logging to STDERR via org.mortbay.log.StdErrLog
        [INFO] Context path = /
        [INFO] Tmp directory = determined at runtime
        [INFO] Web defaults = org/mortbay/jetty/webapp/webdefault.xml
        [INFO] Web overrides = none
        [INFO] Webapp directory = C:\source\myproject\src\main\webapp
        [INFO] Starting jetty 6.1.9 ...
        2010-06-02 15:13:28.983::INFO:  jetty-6.1.9
        2010-06-02 15:13:28.248::INFO:  No Transaction manager found - if your webapp requires one, please configure one.
        2010-06-02 15:13:28.482:/:INFO:  Initializing Spring root WebApplicationContext
        [myproject] ERROR [main] ContextLoader.initWebApplicationContext(215) | Context initialization failed
        org.springframework.beans.factory.BeanDefinitionStoreException: IOException parsing XML document from ServletContext resource [/WEB-INF/xfire-servlet.xml]; nested exception is java.io.FileNotFoundException: Could not open ServletContext resource [/WEB-INF/xfire-servlet.xml]
            at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.loadBeanDefinitions(XmlBeanDefinitionReader.java:349)
            at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.loadBeanDefinitions(XmlBeanDefinitionReader.java:310)
            at org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:143)
            at org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:178)
            at org.springframework.beans.factory.support.AbstractBeanDefinitionReader.loadBeanDefinitions(AbstractBeanDefinitionReader.java:149)
            at org.springframework.web.context.support.XmlWebApplicationContext.loadBeanDefinitions(XmlWebApplicationContext.java:124)
            at org.springframework.web.context.support.XmlWebApplicationContext.loadBeanDefinitions(XmlWebApplicationContext.java:92)
            at org.springframework.context.support.AbstractRefreshableApplicationContext.refreshBeanFactory(AbstractRefreshableApplicationContext.java:123)
            at org.springframework.context.support.AbstractApplicationContext.obtainFreshBeanFactory(AbstractApplicationContext.java:423)
            at org.springframework.context.support.AbstractApplicationContext.refresh(AbstractApplicationContext.java:353)
            at org.springframework.web.context.ContextLoader.createWebApplicationContext(ContextLoader.java:255)
            at org.springframework.web.context.ContextLoader.initWebApplicationContext(ContextLoader.java:199)
            at org.springframework.web.context.ContextLoaderListener.contextInitialized(ContextLoaderListener.java:45)
            at org.mortbay.jetty.handler.ContextHandler.startContext(ContextHandler.java:540)
            at org.mortbay.jetty.servlet.Context.startContext(Context.java:135)
            at org.mortbay.jetty.webapp.WebAppContext.startContext(WebAppContext.java:1220)
            at org.mortbay.jetty.handler.ContextHandler.doStart(ContextHandler.java:510)
            at org.mortbay.jetty.webapp.WebAppContext.doStart(WebAppContext.java:448)
            at org.mortbay.jetty.plugin.Jetty6PluginWebAppContext.doStart(Jetty6PluginWebAppContext.java:110)
            at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:39)
            at org.mortbay.jetty.handler.HandlerCollection.doStart(HandlerCollection.java:152)
            at org.mortbay.jetty.handler.ContextHandlerCollection.doStart(ContextHandlerCollection.java:156)
            at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:39)
            at org.mortbay.jetty.handler.HandlerCollection.doStart(HandlerCollection.java:152)
            at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:39)
            at org.mortbay.jetty.handler.HandlerWrapper.doStart(HandlerWrapper.java:130)
            at org.mortbay.jetty.Server.doStart(Server.java:222)
            at org.mortbay.component.AbstractLifeCycle.start(AbstractLifeCycle.java:39)
            at org.mortbay.jetty.plugin.Jetty6PluginServer.start(Jetty6PluginServer.java:132)
            at org.mortbay.jetty.plugin.AbstractJettyMojo.startJetty(AbstractJettyMojo.java:357)
            at org.mortbay.jetty.plugin.AbstractJettyMojo.execute(AbstractJettyMojo.java:293)
            at org.mortbay.jetty.plugin.AbstractJettyRunMojo.execute(AbstractJettyRunMojo.java:203)
            at org.mortbay.jetty.plugin.Jetty6RunMojo.execute(Jetty6RunMojo.java:184)
            at org.apache.maven.plugin.DefaultPluginManager.executeMojo(DefaultPluginManager.java:490)
            at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoals(DefaultLifecycleExecutor.java:694)
            at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeStandaloneGoal(DefaultLifecycleExecutor.java:569)
            at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoal(DefaultLifecycleExecutor.java:539)
            at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeGoalAndHandleFailures(DefaultLifecycleExecutor.java:387)
            at org.apache.maven.lifecycle.DefaultLifecycleExecutor.executeTaskSegments(DefaultLifecycleExecutor.java:348)
            at org.apache.maven.lifecycle.DefaultLifecycleExecutor.execute(DefaultLifecycleExecutor.java:180)
            at org.apache.maven.DefaultMaven.doExecute(DefaultMaven.java:328)
            at org.apache.maven.DefaultMaven.execute(DefaultMaven.java:138)
            at org.apache.maven.cli.MavenCli.main(MavenCli.java:362)
            at org.apache.maven.cli.compat.CompatibleMain.main(CompatibleMain.java:60)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
            at java.lang.reflect.Method.invoke(Method.java:597)
            at org.codehaus.classworlds.Launcher.launchEnhanced(Launcher.java:315)
            at org.codehaus.classworlds.Launcher.launch(Launcher.java:255)
            at org.codehaus.classworlds.Launcher.mainWithExitCode(Launcher.java:430)
            at org.codehaus.classworlds.Launcher.main(Launcher.java:375)
        Caused by: java.io.FileNotFoundException: Could not open ServletContext resource [/WEB-INF/xfire-servlet.xml]
            at org.springframework.web.context.support.ServletContextResource.getInputStream(ServletContextResource.java:116)
            at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.loadBeanDefinitions(XmlBeanDefinitionReader.java:336)
            ... 51 more
        2010-06-02 15:13:29.919::WARN:  Failed startup of context org.mortbay.jetty.plugin.Jetty6PluginWebAppContext@1ba4806{/,C:\source\myproject\src\main\webapp}
        org.springframework.beans.factory.BeanDefinitionStoreException: IOException parsing XML document from ServletContext resource [/WEB-INF/xfire-servlet.xml]; nested exception is java.io.FileNotFoundException: Could not open ServletContext resource [/WEB-INF/xfire-servlet.xml]
            ... (same stack trace as above)
        Caused by: java.io.FileNotFoundException: Could not open ServletContext resource [/WEB-INF/xfire-servlet.xml]
            at org.springframework.web.context.support.ServletContextResource.getInputStream(ServletContextResource.java:116)
            at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.loadBeanDefinitions(XmlBeanDefinitionReader.java:336)
            ... 51 more
        2010-06-02 15:13:29.152::WARN:  Nested in org.springframework.beans.factory.BeanDefinitionStoreException: IOException parsing XML document from ServletContext resource [/WEB-INF/xfire-servlet.xml]; nested exception is java.io.FileNotFoundException: Could not open ServletContext resource [/WEB-INF/xfire-servlet.xml]:
        java.io.FileNotFoundException: Could not open ServletContext resource [/WEB-INF/xfire-servlet.xml]
            at org.springframework.web.context.support.ServletContextResource.getInputStream(ServletContextResource.java:116)
            at org.springframework.beans.factory.xml.XmlBeanDefinitionReader.loadBeanDefinitions(XmlBeanDefinitionReader.java:336)
            ... (same stack trace as above)
        2010-06-02 15:13:29.417::INFO:  Started [email protected]:8080
        [INFO] Started Jetty Server
        [INFO] Starting scanner at interval of 3 seconds.


  • Using jQuery and SPServices to Display List Items

    - by Bil Simser
    I had an interesting challenge recently that I turned to Marc Anderson's wonderful SPServices project for. If you haven't already seen or used SPServices, please do. It's a jQuery library that does primarily two things. First, it wraps up all of the SharePoint web services in a nice little AJAX wrapper for use in JavaScript. Second, it enhances the form editing of items in SharePoint so you're not hacking up your List Form pages.

    My challenge was simple but interesting. The user wanted to display a SharePoint item page (DispForm.aspx, which already had some customization on it to display related items via this blog post from Codeless Solutions for SharePoint) but launch it from an external application using the value of one of the fields in the SharePoint list. For simplicity let's say my list is a list of customers and the related list is a list of orders for that customer. It would look something like this (click on the item to see the full image):

    Your first thought might be, that's easy! Display the customer information using a DataView Web Part and filter the item using a query string to match the customer number. However, there are a few problems with this idea:

        - You'll need to build a custom page and then attach that related orders view to it. This is a bit of a problem because the solution from Codeless Solutions relies on the Title field on the page being displayed. On a custom page you would have to recreate all of the elements found on the DispForm.aspx page for the related view to work.
        - The DataView Web Part doesn't look *exactly* like the out-of-the-box display form page. Not a huge problem, and it can be overcome with some CSS style overrides, but still, more work.
        - A DVWP showing a single record doesn't have the same toolbar that you would get using DispForm.aspx. Not a show-stopper, and you can rebuild the toolbar, but it's going to potentially require code, and then there's the security trimming, etc. that you have to get right.
        - DVWPs are not automatically updated if you add a column to the list, like DispForm.aspx is. Work, work, work.

    For these reasons I thought it would be easier to take the already existing (modified) DispForm.aspx page and just add some jQuery magic to the page to find the item. Why do we need to find it? DispForm.aspx relies on a querystring parameter called "ID" and displays whatever item has that ID number in the list. Trouble is, when you're coming in from an external app via a link, you don't know what that internal ID is (and frankly shouldn't). I don't like exposing internal SharePoint IDs to the outside world for the same reason I don't do it with database IDs. They're internal, and while it's fine to use them on the site itself, you don't want external links using them. They're volatile and can change (delete one item then re-add it back with the same data and watch any ID references break).

    The next thought might be to call a SharePoint web service with a CAML query to get the item ID number using some criteria (in this case, the customer number). That's great if you have that ability, but again we had an existing application we were just adding a link to. The last thing I wanted to do was to crack open the code on that sucker and start calling web services (primarily because it's Java, but really I'm a lazy geek). However, if you're doing this and have access to call a web service, that would be an option.

    Back to this problem: how do I a) find a SharePoint list item based on some field value other than ID and b) make it low impact so I can just construct a URL to it? That's where jQuery and SPServices came to the rescue. After spending a few hours of emails back and forth with Marc and a couple of phone calls (and updating jQuery to the latest version, duh!) it was a simple answer.

    First we need a reference to a) jQuery, b) SPServices and c) our script. I just dropped a Content Editor Web Part, the Swiss Army Knife of Web Parts, onto the DispForm.aspx page and added these lines:

        <script type="text/javascript" src="http://intranet/JavaScript/jquery-1.4.2.min.js"></script>
        <script type="text/javascript" src="http://intranet/JavaScript/jquery.SPServices-0.5.3.min.js"></script>
        <script type="text/javascript" src="http://intranet/JavaScript/RedirectToID.js"></script>

    Update these to point to where you keep your scripts located. I prefer to keep them all in Document Libraries, as I can make changes to them without having to remote into the server (and on a multiple web front end, that's just a PITA), it provides me with version control of sorts, and it's quick to add new plugins and scripts.

    Now we can look at our RedirectToID.js script. This invokes the SPServices library to call the GetListItems method of the Lists web service and then rewrites the URL to DispForm.aspx to use the correct SharePoint ID (the internal one).

        $(document).ready(function(){

            var queryStringValues = $().SPServices.SPGetQueryString();
            var id = queryStringValues["ID"];

            if(id == "0")
            {
                var customer = queryStringValues["CustomerNumber"];
                var query = "<Query><Where><Eq><FieldRef Name='CustomerNumber'/><Value Type='Text'>" + customer + "</Value></Eq></Where></Query>";
                var url = window.location;
                $().SPServices({
                    operation: "GetListItems",
                    listName: "Customers",
                    async: false,
                    CAMLQuery: query,
                    completefunc: function (xData, Status) {
                        $(xData.responseXML).find("[nodeName=z:row]").each(function(){
                            id = $(this).attr("ows_ID");
                            url = $().SPServices.SPGetCurrentSite() + "/Lists/Customers/DispForm.aspx?ID=" + id;
                            window.location = url;
                        });
                    }
                });
            }
        });

    What's happening here?

        Line 3: We call SPServices.SPGetQueryString to get an array of query string values (a handy function in the library; I had 15 lines of code to do this, which are now gone).
        Line 4: Extract the ID value from the query string.
        Line 6: If we pass in "0" it means we're looking up a field value. This allows DispForm.aspx to work like normal with SharePoint lists but look up our values when invoked. Why ID at all? DispForm.aspx doesn't work unless you pass in something, and "0" is a *magic* number that will invoke the page but not look up a value in the database.
        Lines 8-15: Extract the CustomerNumber query string value, build a CAML query to find it, then call the GetListItems method using SPServices.
        Line 16: Process the results in our completefunc to iterate over all the rows (there should only be one) and extract the real ID of the item.
        Lines 17-20: Build a new URL based on the site (using a call to SPGetCurrentSite) and append our real ID to redirect to the DispForm.aspx page.

    As you can see, it dynamically creates a CAML query for the call to the web service using the passed-in value. You could even make this generic, taking in two different query strings: one for the field name to search for and the other for the value to find. That way it could be used for any field you want.

    For example, you could bring up the correct item on the DispForm.aspx page based on customer name with something like this:

        http://myserver/Lists/Customers/DispForm.aspx?ID=0&FilterId=CustomerName&FilterValue=Sony

    Use your imagination. Some people would opt for building a custom page with a DVWP, but if you want to leverage all the functionality of DispForm.aspx, this might come in handy if you don't want to rely on internal SharePoint IDs.

    Read the article

  • C# 4: The Curious ConcurrentDictionary

    - by James Michael Hare
    In my previous post (here) I did a comparison of the new ConcurrentQueue versus the old standard of a System.Collections.Generic Queue with simple locking.  The results were exactly what I would have hoped, that the ConcurrentQueue was faster with multi-threading for most all situations.  In addition, concurrent collections have the added benefit that you can enumerate them even if they're being modified. So I set out to see what the improvements would be for the ConcurrentDictionary, would it have the same performance benefits as the ConcurrentQueue did?  Well, after running some tests and multiple tweaks and tunes, I have good and bad news. But first, let's look at the tests.  Obviously there's many things we can do with a dictionary.  One of the most notable uses, of course, in a multi-threaded environment is for a small, local in-memory cache.  So I set about to do a very simple simulation of a cache where I would create a test class that I'll just call an Accessor.  This accessor will attempt to look up a key in the dictionary, and if the key exists, it stops (i.e. a cache "hit").  However, if the lookup fails, it will then try to add the key and value to the dictionary (i.e. a cache "miss").  So here's the Accessor that will run the tests: 1: internal class Accessor 2: { 3: public int Hits { get; set; } 4: public int Misses { get; set; } 5: public Func<int, string> GetDelegate { get; set; } 6: public Action<int, string> AddDelegate { get; set; } 7: public int Iterations { get; set; } 8: public int MaxRange { get; set; } 9: public int Seed { get; set; } 10:  11: public void Access() 12: { 13: var randomGenerator = new Random(Seed); 14:  15: for (int i=0; i<Iterations; i++) 16: { 17: // give a wide spread so will have some duplicates and some unique 18: var target = randomGenerator.Next(1, MaxRange); 19:  20: // attempt to grab the item from the cache 21: var result = GetDelegate(target); 22:  23: // if the item doesn't exist, add it 24: if(result == null) 25: { 26: AddDelegate(target, target.ToString()); 27: Misses++; 28: } 29: else 30: { 31: Hits++; 32: } 33: } 34: } 35: } Note that so I could test different implementations, I defined a GetDelegate and AddDelegate that will call the appropriate dictionary methods to add or retrieve items in the cache using various techniques. So let's examine the three techniques I decided to test: Dictionary with mutex - Just your standard generic Dictionary with a simple lock construct on an internal object. Dictionary with ReaderWriterLockSlim - Same Dictionary, but now using a lock designed to let multiple readers access simultaneously and then locked when a writer needs access. ConcurrentDictionary - The new ConcurrentDictionary from System.Collections.Concurrent that is supposed to be optimized to allow multiple threads to access safely. So the approach to each of these is also fairly straight-forward.  Let's look at the GetDelegate and AddDelegate implementations for the Dictionary with mutex lock: 1: var addDelegate = (key,val) => 2: { 3: lock (_mutex) 4: { 5: _dictionary[key] = val; 6: } 7: }; 8: var getDelegate = (key) => 9: { 10: lock (_mutex) 11: { 12: string val; 13: return _dictionary.TryGetValue(key, out val) ? val : null; 14: } 15: }; Nothing new or fancy here, just your basic lock on a private object and then query/insert into the Dictionary. 
Now, for the Dictionary with ReadWriteLockSlim it's a little more complex: 1: var addDelegate = (key,val) => 2: { 3: _readerWriterLock.EnterWriteLock(); 4: _dictionary[key] = val; 5: _readerWriterLock.ExitWriteLock(); 6: }; 7: var getDelegate = (key) => 8: { 9: string val; 10: _readerWriterLock.EnterReadLock(); 11: if(!_dictionary.TryGetValue(key, out val)) 12: { 13: val = null; 14: } 15: _readerWriterLock.ExitReadLock(); 16: return val; 17: }; And finally, the ConcurrentDictionary, which since it does all it's own concurrency control, is remarkably elegant and simple: 1: var addDelegate = (key,val) => 2: { 3: _concurrentDictionary[key] = val; 4: }; 5: var getDelegate = (key) => 6: { 7: string s; 8: return _concurrentDictionary.TryGetValue(key, out s) ? s : null; 9: };                    Then, I set up a test harness that would simply ask the user for the number of concurrent Accessors to attempt to Access the cache (as specified in Accessor.Access() above) and then let them fly and see how long it took them all to complete.  Each of these tests was run with 10,000,000 cache accesses divided among the available Accessor instances.  All times are in milliseconds. 1: Dictionary with Mutex Locking 2: --------------------------------------------------- 3: Accessors Mostly Misses Mostly Hits 4: 1 7916 3285 5: 10 8293 3481 6: 100 8799 3532 7: 1000 8815 3584 8:  9:  10: Dictionary with ReaderWriterLockSlim Locking 11: --------------------------------------------------- 12: Accessors Mostly Misses Mostly Hits 13: 1 8445 3624 14: 10 11002 4119 15: 100 11076 3992 16: 1000 14794 4861 17:  18:  19: Concurrent Dictionary 20: --------------------------------------------------- 21: Accessors Mostly Misses Mostly Hits 22: 1 17443 3726 23: 10 14181 1897 24: 100 15141 1994 25: 1000 17209 2128 The first test I did across the board is the Mostly Misses category.  The mostly misses (more adds because data requested was not in the dictionary) shows an interesting trend.  In both cases the Dictionary with the simple mutex lock is much faster, and the ConcurrentDictionary is the slowest solution.  But this got me thinking, and a little research seemed to confirm it, maybe the ConcurrentDictionary is more optimized to concurrent "gets" than "adds".  So since the ratio of misses to hits were 2 to 1, I decided to reverse that and see the results. So I tweaked the data so that the number of keys were much smaller than the number of iterations to give me about a 2 to 1 ration of hits to misses (twice as likely to already find the item in the cache than to need to add it).  And yes, indeed here we see that the ConcurrentDictionary is indeed faster than the standard Dictionary here.  I have a strong feeling that as the ration of hits-to-misses gets higher and higher these number gets even better as well.  This makes sense since the ConcurrentDictionary is read-optimized. Also note that I tried the tests with capacity and concurrency hints on the ConcurrentDictionary but saw very little improvement, I think this is largely because on the 10,000,000 hit test it quickly ramped up to the correct capacity and concurrency and thus the impact was limited to the first few milliseconds of the run. So what does this tell us?  Well, as in all things, ConcurrentDictionary is not a panacea.  It won't solve all your woes and it shouldn't be the only Dictionary you ever use.  So when should we use each? Use System.Collections.Generic.Dictionary when: You need a single-threaded Dictionary (no locking needed). 
You need a multi-threaded Dictionary that is loaded only once at creation and never modified (no locking needed). You need a multi-threaded Dictionary to store items where writes are far more prevalent than reads (locking needed). And use System.Collections.Concurrent.ConcurrentDictionary when: You need a multi-threaded Dictionary where reads are far more prevalent than writes. You need to be able to iterate over the collection without locking it even if it's being modified. Both Dictionaries have their strong suits; I have a feeling this is just one where you need to know from the design what you hope to use it for, and make your decision based on those criteria.
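For the read-heavy cache scenario, a minimal sketch of the same hit/miss pattern built on ConcurrentDictionary.GetOrAdd is shown below (my own illustration, not code from the article); the value factory only runs on a miss, so repeated lookups of hot keys stay cheap concurrent reads:

    using System;
    using System.Collections.Concurrent;
    using System.Threading.Tasks;

    internal static class ConcurrentCacheSketch
    {
        // Read-optimized cache: misses add the value, hits are plain lookups.
        private static readonly ConcurrentDictionary<int, string> Cache =
            new ConcurrentDictionary<int, string>();

        private static string Lookup(int key)
        {
            // The factory delegate is only invoked when the key is absent (a cache "miss").
            return Cache.GetOrAdd(key, k => k.ToString());
        }

        private static void Main()
        {
            // Many concurrent accessors hammering a small key range, so most calls are
            // hits - the case where ConcurrentDictionary outperformed the locked Dictionary.
            Parallel.For(0, 1000000, i => Lookup(i % 100));
            Console.WriteLine(Cache.Count); // 100
        }
    }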

    Read the article

  • Oracle Enterprise Manager 11g Application Management Suite for Oracle E-Business Suite Now Available

    - by chung.wu
    Oracle Enterprise Manager 11g Application Management Suite for Oracle E-Business Suite is now available. The management suite combines features that were available in the standalone Application Management Pack for Oracle E-Business Suite and Application Change Management Pack for Oracle E-Business Suite with Oracle's market leading real user monitoring and configuration management capabilities to provide the most complete solution for managing E-Business Suite applications. The features that were available in the standalone management packs are now packaged into Oracle E-Business Suite Plug-in 4.0, which is now fully certified with Oracle Enterprise Manager 11g Grid Control. This latest plug-in extends Grid Control with E-Business Suite specific management capabilities and features enhanced change management support. In addition, this latest release of Application Management Suite for Oracle E-Business Suite also includes numerous real user monitoring improvements. General Enhancements This new release of Application Management Suite for Oracle E-Business Suite offers the following key capabilities: Oracle Enterprise Manager 11g Grid Control Support: All components of the management suite are certified with Oracle Enterprise Manager 11g Grid Control. Built-in Diagnostic Ability: This release has numerous major enhancements that provide the necessary intelligence to determine if the product has been installed and configured correctly. There are diagnostics for Discovery, Cloning, and User Monitoring that will validate if the appropriate patches, privileges, setups, and profile options have been configured. This feature improves the setup and configuration time to be up and operational. Lifecycle Automation Enhancements Application Management Suite for Oracle E-Business Suite provides a centralized view to monitor and orchestrate changes (both functional and technical) across multiple Oracle E-Business Suite systems. In this latest release, it provides even more control and flexibility in managing Oracle E-Business Suite changes.Change Management: Built-in Diagnostic Ability: This latest release has numerous major enhancements that provide the necessary intelligence to determine if the product has been installed and configured correctly. There are diagnostics for Customization Manager, Patch Manager, and Setup Manager that will validate if the appropriate patches, privileges, setups, and profile options have been configured. Enhancing the setup time and configuration time to be up and operational. Customization Manager: Multi-Node Custom Application Registration: This feature automates the process of registering and validating custom products/applications on every node in a multi-node EBS system. Public/Private File Source Mappings and E-Business Suite Mappings: File Source Mappings & E-Business Suite Mappings can be created and marked as public or private. Only the creator/owner can define/edit his/her own mappings. Users can use public mappings, but cannot edit or change settings. Test Checkout Command for Versions: This feature allows you to test/verify checkout commands at the version level within the File Source Mapping page. Prerequisite Patch Validation: You can specify prerequisite patches for Customization packages and for Release 12 Oracle E-Business Suite packages. Destination Path Population: You can now automatically populate the Destination Path for common file types during package construction. 
OAF File Type Support: Ability to package Oracle Application Framework (OAF) customizations and deploy them across multiple Oracle E-Business Suite instances. Extended PLL Support: Ability to distinguish between different types of PLLs (that is, Report and Forms PLL files). Providing better granularity when managing PLL objects. Enhanced Standard Checker: Provides greater and more comprehensive list of coding standards that are verified during the package build process (for example, File Driver exceptions, Java checks, XML checks, SQL checks, etc.) HTML Package Readme: The package Readme is in HTML format and includes the file listing. Advanced Package Search Capabilities: The ability to utilize more criteria within the advanced search package (that is, Public, Last Updated by, Files Source Mapping, and E-Business Suite Mapping). Enhanced Package Build Notifications: More detailed information on the results of a package build process. Better, more detailed troubleshooting guidance in the event of build failures. Patch Manager:Staged Patches: Ability to run Patch Manager with no external internet access. Customer can download Oracle E-Business Suite patches into a shared location for Patch Manager to access and apply. Supports highly secured production environments that prohibit external internet connections. Support for Superseded Patches: Automatic check for superseded patches. Allows users to easily add superseded patches into the Patch Run. More comprehensive and correct Patch Runs. Removes many manual and laborious tasks, frees up Apps DBAs for higher value-added tasks. Automatic Primary Node Identification: Users can now specify which is the "primary node" (that is, which node hosts the Shared APPL_TOP) during the Patch Run interview process, available for Release 12 only. Setup Manager:Preview Extract Results: Ability to execute an extract in "proof mode", and examine the query results, to determine accuracy. Used in conjunction with the "where" clause in Advanced Filtering. This feature can provide better and more accurate fine tuning of extracts. Use Uploaded Extracts in New Projects: Ability to incorporate uploaded extracts in new projects via new LOV fields in package construction. Leverages the Setup Manager repository to access extracts that have been uploaded. Allows customer to reuse uploaded extracts to provision new instances. Re-use Existing (that is, historical) Extracts in New Projects: Ability to incorporate existing extracts in new projects via new LOV fields in package construction. Leverages the Setup Manager repository to access point-in-time extracts (snapshots) of configuration data. Allows customer to reuse existing extracts to provision new instances. Allows comparative historical reporting of identical APIs, executed at different times. Support for BR100 formats: Setup Manager can now automatically produce reports in the BR100 format. Native support for industry standard formats. Concurrent Manager API Support: General Foundation now provides an API for management of "Concurrent Manager" configuration data. Ability to migrate Concurrent Managers from one instance to another. Complete the setup once and never again; no need to redefine the Concurrent Managers. User Experience Management Enhancements Application Management Suite for Oracle E-Business Suite includes comprehensive capabilities for user experience management, supporting both real user and synthetic transaction based user monitoring techniques. 
This latest release of the management suite include numerous improvements in real user monitoring support. KPI Reporting: Configurable decimal precision for reporting of KPI and SLA values. By default, this is two decimal places. KPI numerator and denominator information. It is now possible to view KPI numerator and denominator information, and to have it available for export. Content Messages Processing: The application content message facility has been extended to distinguish between notifications and errors. In addition, it is now possible to specify matching rules that can be used to refine a selected content message specification. Note this is only available for XPath-based (not literal) message contents. Data Export: The Enriched data export facility has been significantly enhanced to provide improved performance and accessibility. Data is no longer stored within XML-based files, but is now stored within the Reporter database. However, it is possible to configure an alternative database for its storage. Access to the export data is through SQL. With this enhancement, it is now more easy than ever to use tools such as Oracle Business Intelligence Enterprise Edition to analyze correlated data collected from real user monitoring and business data sources. SNMP Traps for System Events: Previously, the SNMP notification facility was only available for KPI alerting. It has now been extended to support the generation of SNMP traps for system events, to provide external health monitoring of the RUEI system processes. Performance Improvements: Enhanced dashboard performance. The dashboard facility has been enhanced to support the parallel loading of items. In the case of dashboards containing large numbers of items, this can result in a significant performance improvement. Initial period selection within Data Browser and reports. The User Preferences facility has been extended to allow you to specify the initial period selection when first entering the Data Browser or reports facility. The default is the last hour. Performance improvement when querying the all sessions group. Technical Prerequisites, Download and Installation Instructions The Linux version of the plug-in is available for immediate download from Oracle Technology Network or Oracle eDelivery. For specific information regarding technical prerequisites, product download and installation, please refer to My Oracle Support note 1224313.1. The following certifications are in progress: * Oracle Solaris on SPARC (64-bit) (9, 10) * HP-UX Itanium (11.23, 11.31) * HP-UX PA-RISC (64-bit) (11.23, 11.31) * IBM AIX on Power Systems (64-bit) (5.3, 6.1)

    Read the article

  • Creating an ASP.NET report using Visual Studio 2010 - Part 1

    - by rajbk
    This tutorial walks you through creating an report based on the Northwind sample database. You will add a client report definition file (RDLC), create a dataset for the RDLC, define queries using LINQ to Entities, design the report and add a ReportViewer web control to render the report in a ASP.NET web page. The report will have a chart control. Different results will be generated by changing filter criteria. At the end of the walkthrough, you should have a UI like the following.  From the UI below, a user is able to view the product list and can see a chart with the sum of Unit price for a given category. They can filter by Category and Supplier. The drop downs will auto post back when the selection is changed.  This demo uses Visual Studio 2010 RTM. This post is split into three parts. The last part has the sample code attached. Creating an ASP.NET report using Visual Studio 2010 - Part 2 Creating an ASP.NET report using Visual Studio 2010 - Part 3   Lets start by creating a new ASP.NET empty web application called “NorthwindReports” Creating the Data Access Layer (DAL) Add a web form called index.aspx to the root directory. You do this by right clicking on the NorthwindReports web project and selecting “Add item..” . Create a folder called “DAL”. We will store all our data access methods and any data transfer objects in here.   Right click on the DAL folder and add a ADO.NET Entity data model called Northwind. Select “Generate from database” and click Next. Create a connection to your database containing the Northwind sample database and click Next.   From the table list, select Categories, Products and Suppliers and click next. Our Entity data model gets created and looks like this:    Adding data transfer objects Right click on the DAL folder and add a ProductViewModel. Add the following code. This class contains properties we need to render our report. public class ProductViewModel { public int? ProductID { get; set; } public string ProductName { get; set; } public System.Nullable<decimal> UnitPrice { get; set; } public string CategoryName { get; set; } public int? CategoryID { get; set; } public int? SupplierID { get; set; } public bool Discontinued { get; set; } } Add a SupplierViewModel class. This will be used to render the supplier DropDownlist. public class SupplierViewModel { public string CompanyName { get; set; } public int SupplierID { get; set; } } Add a CategoryViewModel class. public class CategoryViewModel { public string CategoryName { get; set; } public int CategoryID { get; set; } } Create an IProductRepository interface. This will contain the signatures of all the methods we need when accessing the entity model.  This step is not needed but follows the repository pattern. interface IProductRepository { IQueryable<Product> GetProducts(); IQueryable<ProductViewModel> GetProductsProjected(int? supplierID, int? categoryID); IQueryable<SupplierViewModel> GetSuppliers(); IQueryable<CategoryViewModel> GetCategories(); } Create a ProductRepository class that implements the IProductReposity above. The methods available in this class are as follows: GetProducts – returns an IQueryable of all products. GetProductsProjected – returns an IQueryable of ProductViewModel. The method filters all the products based on SupplierId and CategoryId if any. It then projects the result into the ProductViewModel. 
GetSuppliers() – returns an IQueryable of all suppliers projected into a SupplierViewModel GetCategories() – returns an IQueryable of all categories projected into a CategoryViewModel  public class ProductRepository : IProductRepository { /// <summary> /// IQueryable of all Products /// </summary> /// <returns></returns> public IQueryable<Product> GetProducts() { var dataContext = new NorthwindEntities(); var products = from p in dataContext.Products select p; return products; }   /// <summary> /// IQueryable of Projects projected /// into the ProductViewModel class /// </summary> /// <returns></returns> public IQueryable<ProductViewModel> GetProductsProjected(int? supplierID, int? categoryID) { var projectedProducts = from p in GetProducts() select new ProductViewModel { ProductID = p.ProductID, ProductName = p.ProductName, UnitPrice = p.UnitPrice, CategoryName = p.Category.CategoryName, CategoryID = p.CategoryID, SupplierID = p.SupplierID, Discontinued = p.Discontinued }; // Filter on SupplierID if (supplierID.HasValue) { projectedProducts = projectedProducts.Where(a => a.SupplierID == supplierID); }   // Filter on CategoryID if (categoryID.HasValue) { projectedProducts = projectedProducts.Where(a => a.CategoryID == categoryID); }   return projectedProducts; }     public IQueryable<SupplierViewModel> GetSuppliers() { var dataContext = new NorthwindEntities(); var suppliers = from s in dataContext.Suppliers select new SupplierViewModel { SupplierID = s.SupplierID, CompanyName = s.CompanyName }; return suppliers; }   public IQueryable<CategoryViewModel> GetCategories() { var dataContext = new NorthwindEntities(); var categories = from c in dataContext.Categories select new CategoryViewModel { CategoryID = c.CategoryID, CategoryName = c.CategoryName }; return categories; } } Your solution explorer should look like the following. Build your project and make sure you don’t get any errors. In the next part, we will see how to create the client report definition file using the Report Wizard.   Creating an ASP.NET report using Visual Studio 2010 - Part 2
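Before moving on to the report wizard, here is a rough sketch of how the page might consume this repository to populate the two filter drop-downs. This is my own illustration, not part of the walkthrough: the control names ddlCategory and ddlSupplier are assumptions, and the DAL types are assumed to be in scope (add a using for their namespace if needed).

    using System;
    using System.Linq;

    public partial class Index : System.Web.UI.Page
    {
        // ProductRepository comes from the DAL folder created above.
        private readonly IProductRepository _repository = new ProductRepository();

        protected void Page_Load(object sender, EventArgs e)
        {
            if (IsPostBack) return;

            // Bind the category filter from the CategoryViewModel projection.
            ddlCategory.DataSource = _repository.GetCategories().ToList();
            ddlCategory.DataTextField = "CategoryName";
            ddlCategory.DataValueField = "CategoryID";
            ddlCategory.DataBind();

            // Bind the supplier filter from the SupplierViewModel projection.
            ddlSupplier.DataSource = _repository.GetSuppliers().ToList();
            ddlSupplier.DataTextField = "CompanyName";
            ddlSupplier.DataValueField = "SupplierID";
            ddlSupplier.DataBind();
        }
    }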

    Read the article

  • Validation in Silverlight

    - by Timmy Kokke
    Getting started with the basics Validation in Silverlight can get very complex pretty easy. The DataGrid control is the only control that does data validation automatically, but often you want to validate your own entry form. Values a user may enter in this form can be restricted by the customer and have to fit an exact fit to a list of requirements or you just want to prevent problems when saving the data to the database. Showing a message to the user when a value is entered is pretty straight forward as I’ll show you in the following example.     This (default) Silverlight textbox is data-bound to a simple data class. It has to be bound in “Two-way” mode to be sure the source value is updated when the target value changes. The INotifyPropertyChanged interface must be implemented by the data class to get the notification system to work. When the property changes a simple check is performed and when it doesn’t match some criteria an ValidationException is thrown. The ValidatesOnExceptions binding attribute is set to True to tell the textbox it should handle the thrown ValidationException. Let’s have a look at some code now. The xaml should contain something like below. The most important part is inside the binding. In this case the Text property is bound to the “Name” property in TwoWay mode. It is also told to validate on exceptions. This property is false by default.   <StackPanel Orientation="Horizontal"> <TextBox Width="150" x:Name="Name" Text="{Binding Path=Name, Mode=TwoWay, ValidatesOnExceptions=True}"/> <TextBlock Text="Name"/> </StackPanel>   The data class in this first example is a very simplified person class with only one property: string Name. The INotifyPropertyChanged interface is implemented and the PropertyChanged event is fired when the Name property changes. When the property changes a check is performed to see if the new string is null or empty. If this is the case a ValidationException is thrown explaining that the entered value is invalid.   public class PersonData:INotifyPropertyChanged { private string _name; public string Name { get { return _name; } set { if (_name != value) { if(string.IsNullOrEmpty(value)) throw new ValidationException("Name is required"); _name = value; if (PropertyChanged != null) PropertyChanged(this, new PropertyChangedEventArgs("Name")); } } } public event PropertyChangedEventHandler PropertyChanged=delegate { }; } The last thing that has to be done is letting binding an instance of the PersonData class to the DataContext of the control. This is done in the code behind file. public partial class Demo1 : UserControl { public Demo1() { InitializeComponent(); this.DataContext = new PersonData() {Name = "Johnny Walker"}; } }   Error Summary In many cases you would have more than one entry control. A summary of errors would be nice in such case. With a few changes to the xaml an error summary, like below, can be added.           First, add a namespace to the xaml so the control can be used. Add the following line to the header of the .xaml file. xmlns:Controls="clr-namespace:System.Windows.Controls;assembly=System.Windows.Controls.Data.Input"   Next, add the control to the layout. To get the result as in the image showed earlier, add the control right above the StackPanel from the first example. It’s got a small margin to separate it from the textbox a little.   <Controls:ValidationSummary Margin="8"/>   The ValidationSummary control has to be notified that an ValidationException occurred. 
This can be done with a small change to the xaml too. Add the NotifyOnValidationError to the binding expression. By default this value is set to false, so nothing would be notified. Set the property to true to get it to work.   <TextBox Width="150" x:Name="Name" Text="{Binding Name, Mode=TwoWay, ValidatesOnExceptions=True, NotifyOnValidationError=True}"/>   Data annotation Validating data in the setter is one option, but not my personal favorite. It’s the easiest way if you have a single required value you want to check, but often you want to validate more. Besides, I don’t consider it best practice to write logic in setters. The way used by frameworks like WCF Ria Services is the use of attributes on the properties. Instead of throwing exceptions you have to call the static method ValidateProperty on the Validator class. This call stays always the same for a particular property, not even when you change the attributes on the property. To mark a property “Required” you can use the RequiredAttribute. This is what the Name property is going to look like:   [Required] public string Name { get { return _name; } set { if (_name != value) { Validator.ValidateProperty(value, new ValidationContext(this, null, null){ MemberName = "Name" }); _name = value; if (PropertyChanged != null) PropertyChanged(this, new PropertyChangedEventArgs("Name")); } } }   The ValidateProperty method takes the new value for the property and an instance of ValidationContext. The properties passed to the constructor of the ValidationContextclass are very straight forward. This part is the same every time. The only thing that changes is the MemberName property of the ValidationContext. Property has to hold the name of the property you want to validate. It’s the same value you provide the PropertyChangedEventArgs with. The System.ComponentModel.DataAnnotation contains eight different validation attributes including a base class to create your own. They are: RequiredAttribute Specifies that a value must be provided. RangeAttribute The provide value must fall in the specified range. RegularExpressionAttribute Validates is the value matches the regular expression. StringLengthAttribute Checks if the number of characters in a string falls between a minimum and maximum amount. CustomValidationAttribute Use a custom method to validate the value. DataTypeAttribute Specify a data type using an enum or a custom data type. EnumDataTypeAttribute Makes sure the value is found in a enum. ValidationAttribute A base class for custom validation attributes All of these will ensure that an validation exception is thrown, except the DataTypeAttribute. This attribute is used to provide some additional information about the property. You can use this information in your own code.   [Required] [Range(0,125,ErrorMessage = "Value is not a valid age")] public int Age {   It’s no problem to stack different validation attributes together. For example, when an Age is required and must fall in the range from 0 to 125:   [Required, StringLength(255,MinimumLength = 3)] public string Name {   Or in one row like this, for a required Name with at least 3 characters and a maximum of 255:   Delayed validation Having properties marked as required can be very useful. The only downside to the technique described earlier is that you have to change the value in order to get it validated. What if you start out with empty an empty entry form? All fields are empty and thus won’t be validated. 
With this small trick you can validate at the moment the user clicks the submit button.   <TextBox Width="150" x:Name="NameField" Text="{Binding Name, Mode=TwoWay, ValidatesOnExceptions=True, NotifyOnValidationError=True, UpdateSourceTrigger=Explicit}"/>   By default, when a TwoWay bound control loses focus the value is updated, and when you add validation like I've shown you earlier, the value is validated. To overcome this, you have to tell the binding to update explicitly by setting the UpdateSourceTrigger binding property to Explicit:   private void SubmitButtonClick(object sender, RoutedEventArgs e) { NameField.GetBindingExpression(TextBox.TextProperty).UpdateSource(); }   This way, the binding still works in two directions, but the source is only updated, and thus validated, when you tell it to. In the code-behind you have to call the UpdateSource method on the binding expression, which you can get from the TextBox.   Conclusion Data validation is something you'll probably want on almost every entry form. I always thought it was hard to do, but it wasn't. If you can throw an exception you can do validation. If you want to know anything more in depth about something I talked about in this article, let me know. I might write an entire post about it.
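To make the extensibility point concrete, here is a small sketch (mine, not from the article) of a custom attribute built on the ValidationAttribute base class mentioned above; the rule itself - no digits allowed in a name - is purely hypothetical:

    using System.ComponentModel.DataAnnotations;

    // Hypothetical rule: a value may not contain digits.
    public class NoDigitsAttribute : ValidationAttribute
    {
        public override bool IsValid(object value)
        {
            var text = value as string;
            if (string.IsNullOrEmpty(text))
                return true; // leave "value is required" checks to [Required]

            foreach (char c in text)
            {
                if (char.IsDigit(c))
                    return false;
            }
            return true;
        }
    }

It stacks with the built-in attributes just like the examples earlier, and the Validator.ValidateProperty call in the setter stays exactly the same:

    [Required, NoDigits(ErrorMessage = "Name cannot contain digits")]
    public string Name {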

    Read the article

  • SQL Server 2008 R2 Reporting Services - The Word is But a Stage (T-SQL Tuesday #006)

    - by smisner
    Host Michael Coles (blog|twitter) has selected LOB data as the topic for this month's T-SQL Tuesday, so I'll take this opportunity to post an overview of reporting with spatial data types. As part of my work with SQL Server 2008 R2 Reporting Services, I've been exploring the use of spatial data types in the new map data region. You can create a map using any of the following data sources: Map Gallery - a set of Shapefiles for the United States only that ships with Reporting Services ESRI Shapefile - a .shp file conforming to the Environmental Systems Research Institute, Inc. (ESRI) shapefile spatial data format SQL Server spatial data - a query that includes SQLGeography or SQLGeometry data types Rob Farley (blog|twitter) points out today in his T-SQL Tuesday post that using the SQL geography field is a preferable alternative to ESRI shapefiles for storing spatial data in SQL Server. So how do you get spatial data? If you don't already have a GIS application in-house, you can find a variety of sources. Here are a few to get you started: US Census Bureau Website, http://www.census.gov/geo/www/tiger/ Global Administrative Areas Spatial Database, http://biogeo.berkeley.edu/gadm/ Digital Chart of the World Data Server, http://www.maproom.psu.edu/dcw/ In a recent post by Pinal Dave (blog|twitter), you can find a link to free shapefiles for download and a tutorial for using Shape2SQL, a free tool to convert shapefiles into SQL Server data. In my post today, I'll show you how to use combine spatial data that describes boundaries with spatial data in AdventureWorks2008R2 that identifies stores locations to embed a map in a report. Preparing the spatial data First, I downloaded Shapefile data for the administrative boundaries in France and unzipped the data to a local folder. Then I used Shape2SQL to upload the data into a SQL Server database called Spatial. I'm not sure of the reason why, but I had to uncheck the option to create a spatial index to upload the data. Otherwise, the upload appeared to run successfully, but no table appeared in my database. The zip file that I downloaded contained three files, but I didn't know what was in them until I used Shape2SQL to upload the data into tables. Then I found that FRA_adm0 contains spatial data for the country of France, FRA_adm1 contains spatial data for each region, and FRA_adm2 contains spatial data for each department (a subdivision of region). Next I prepared my SQL query containing sales data for fictional stores selling Adventure Works products in France. The Person.Address table in the AdventureWorks2008R2 database (which you can download from Codeplex) contains a SpatialLocation column which I joined - along with several other tables - to the Sales.Customer and Sales.Store tables. I'll be able to superimpose this data on a map to see where these stores are located. I included the SQL script for this query (as well as the spatial data for France) in the downloadable project that I created for this post. Step 1: Using the Map Wizard to Create a Map of France You can build a map without using the wizard, but I find it's rather useful in this case. Whether you use Business Intelligence Development Studio (BIDS) or Report Builder 3.0, the map wizard is the same. I used BIDS so that I could create a project that includes all the files related to this post. To get started, I added an empty report template to the project and named it France Stores. 
Then I opened the Toolbox window and dragged the Map item to the report body which starts the wizard. Here are the steps to perform to create a map of France: On the Choose a source of spatial data page of the wizard, select SQL Server spatial query, and click Next. On the Choose a dataset with SQL Server spatial data page, select Add a new dataset with SQL Server spatial data. On the Choose a connection to a SQL Server spatial data source page, select New. In the Data Source Properties dialog box, on the General page, add a connecton string like this (changing your server name if necessary): Data Source=(local);Initial Catalog=Spatial Click OK and then click Next. On the Design a query page, add a query for the country shape, like this: select * from fra_adm1 Click Next. The map wizard reads the spatial data and renders it for you on the Choose spatial data and map view options page, as shown below. You have the option to add a Bing Maps layer which shows surrounding countries. Depending on the type of Bing Maps layer that you choose to add (from Road, Aerial, or Hybrid) and the zoom percentage you select, you can view city names and roads and various boundaries. To keep from cluttering my map, I'm going to omit the Bing Maps layer in this example, but I do recommend that you experiment with this feature. It's a nice integration feature. Use the + or - button to rexize the map as needed. (I used the + button to increase the size of the map until its edges were just inside the boundaries of the visible map area (which is called the viewport). You can eliminate the color scale and distance scale boxes that appear in the map area later. Select the Embed map data in this report for faster rendering. The spatial data won't be changing, so there's no need to leave it in the database. However, it does increase the size of the RDL. Click Next. On the Choose map visualization page, select Basic Map. We'll add data for visualization later. For now, we have just the outline of France to serve as the foundation layer for our map. Click Next, and then click Finish. Now click the color scale box in the lower left corner of the map, and press the Delete key to remove it. Then repeat to remove the distance scale box in the lower right corner of the map. Step 2: Add a Map Layer to an Existing Map The map data region allows you to add multiple layers. Each layer is associated with a different data set. Thus far, we have the spatial data that defines the regional boundaries in the first map layer. Now I'll add in another layer for the store locations by following these steps: If the Map Layers windows is not visible, click the report body, and then click twice anywhere on the map data region to display it. Click on the New Layer Wizard button in the Map layers window. And then we start over again with the process by choosing a spatial data source. Select SQL Server spatial query, and click Next. Select Add a new dataset with SQL Server spatial data, and click Next. Click New, add a connection string to the AdventureWorks2008R2 database, and click Next. Add a query with spatial data (like the one I included in the downloadable project), and click Next. The location data now appears as another layer on top of the regional map created earlier. Use the + button to resize the map again to fill as much of the viewport as possible without cutting off edges of the map. You might need to drag the map within the viewport to center it properly. Select Embed map data in this report, and click Next. 
On the Choose map visualization page, select Basic Marker Map, and click Next. On the Choose color theme and data visualization page, in the Marker drop-down list, change the marker to diamond. There's no particular reason for a diamond; I think it stands out a little better than a circle on this map. Clear the Single color map checkbox as another way to distinguish the markers from the map. You can of course create an analytical map instead, which would change the size and/or color of the markers according to criteria that you specify, such as sales volume of each store, but I'll save that exploration for another post on another day. Click Finish and then click Preview to see the rendered report. Et voilà...c'est fini. Yes, it's a very simple map at this point, but there are many other things you can do to enhance the map. I'll create a series of posts to explore the possibilities.

    Read the article

  • The new workflow management of Oracle's Hyperion Planning: Define more details with Planning Unit Hierarchies and Promotional Paths

    - by Alexandra Georgescu
    After having been almost unchanged for several years, starting with the 11.1.2 release of Oracle´s Hyperion Planning the Process Management has not only got a new name: “Approvals” now is offering the possibility to further split Planning Units (comprised of a unique Scenario-Version-Entity combination) into more detailed combinations along additional secondary dimensions, a so called Planning Unit Hierarchy, and also to pre-define a path of planners, reviewers and approvers, called Promotional Path. I´d like to introduce you to changes and enhancements in this new process management and arouse your curiosity for checking out more details on it. One reason of using the former process management in Planning was to limit data entry rights to one person at a time based on the assignment of a planning unit. So the lowest level of granularity for this assignment was, for a given Scenario-Version combination, the individual entity. Even if in many cases one person wasn´t responsible for all data being entered into that entity, but for only part of it, it was not possible to split the ownership along another additional dimension, for example by assigning ownership to different accounts at the same time. By defining a so called Planning Unit Hierarchy (PUH) in Approvals this gap is now closed. Complementing new Shared Services roles for Planning have been created in order to manage set up and use of Approvals: The Approvals Administrator consisting of the following roles: Approvals Ownership Assigner, who assigns owners and reviewers to planning units for which Write access is assigned (including Planner responsibilities). Approvals Supervisor, who stops and starts planning units and takes any action on planning units for which Write access is assigned. Approvals Process Designer, who can modify planning unit hierarchy secondary dimensions and entity members for which Write access is assigned, can also modify scenarios and versions that are assigned to planning unit hierarchies and can edit validation rules on data forms for which access is assigned. (this includes as well Planner and Ownership Assigner responsibilities) Set up of a Planning Unit Hierarchy is done under the Administration menu, by selecting Approvals, then Planning Unit Hierarchy. Here you create new PUH´s or edit existing ones. The following window displays: After providing a name and an optional description, a pre-selection of entities can be made for which the PUH will be defined. Available options are: All, which pre-selects all entities to be included for the definitions on the subsequent tabs None, manual entity selections will be made subsequently Custom, which offers the selection for an ancestor and the relative generations, that should be included for further definitions. Finally a pattern needs to be selected, which will determine the general flow of ownership: Free-form, uses the flow/assignment of ownerships according to Planning releases prior to 11.1.2 In Bottom-up, data input is done at the leaf member level. Ownership follows the hierarchy of approval along the entity dimension, including refinements using a secondary dimension in the PUH, amended by defined additional reviewers in the promotional path. Distributed, uses data input at the leaf level, while ownership starts at the top level and then is distributed down the organizational hierarchy (entities). After ownership reaches the lower levels, budgets are submitted back to the top through the approval process. 
Proceeding to the next step, a secondary dimension and the respective members from that dimension can be selected in order to create more detailed combinations underneath each entity. After selecting the Dimension and a Parent Member, defining a Relative Generation below this member helps populate the Selected Members field, while the Count column shows the number of selected members. To refine this list, click the icon right beside the Selected Members field and use the check-boxes in the list that appears to deselect members. -------------------------------------------------------------------------------------------------------- TIP: To reduce maintenance of the PUH when the included dimensions change (members added, moved or removed), consider dynamically linking those dimensions in the PUH with the dimension hierarchies in the planning application. For secondary dimensions this is done using the check-boxes in the Auto Include column. For the primary dimension, the respective selection criteria are applied by right-clicking the name of an entity activated as a planning unit, then selecting an item from the list of include or exclude options (children, descendants, etc.). In order to apply dimension changes that impact the PUH, a synchronization must be run. Whether this is actually necessary is shown on the first screen after selecting Administration, then Approvals, then Planning Unit Hierarchy from the menu: under Synchronized you find the statuses Yes, No or Locked, where the last one indicates that another user is currently changing or synchronizing the PUH. Select one of the unsynchronized PUHs (status No) and click the Synchronize option to execute. -------------------------------------------------------------------------------------------------------- In the next step, owners and reviewers are assigned to the PUH. Using the magnifying-glass icons right beside the Owner and Reviewer columns, the respective assignments can be made in the order in which you want them to review the planning unit. While only one owner can be assigned per entity or per combination of entity and secondary-dimension member, the selection of reviewers may consist of more than one person. The complete Promotional Path, including the defined owners and reviewers for the entity parents, can be shown by clicking the icon. In addition, optional users can be defined to be notified about promotions for a planning unit. -------------------------------------------------------------------------------------------------------- TIP: Reviewers cannot change data; they can only review data according to their data access permissions and reject or promote planning units. -------------------------------------------------------------------------------------------------------- To complete your PUH definitions click Finish - this saves the PUH and closes the window. As a final step, before starting the approvals process, you need to assign the PUH to the Scenario-Version combination for which it should be used. From the Administration menu select Approvals, then Scenario and Version Assignment. Expand the PUH to see existing assignments. Under Actions, click the add icon and select the scenarios and versions to be assigned. If needed, click the remove icon to delete entries. After these steps, setup is complete and the approvals process can be started.
Start, stop and control of the approvals process is now done under the Tools menu, and then Manage Approvals. The new PUH feature is complemented by various additional settings and features, at least some of which should be mentioned here: export/import of PUHs, an Out of Office agent, validation rules that change the promotional/approval path when violated (including the use of User-defined Attributes (UDAs)), and various new and helpful reviewer actions with corresponding approval states. About the Author: Bernhard Kinkel started working for Hyperion Solutions as a Presales Consultant and Consultant in 1998 and moved to Hyperion Education Services in 1999. He joined Oracle University in 2007, where he is a Principal Education Consultant. Based on these many years of working with Hyperion products he has detailed product knowledge across several versions. He delivers both classroom and live virtual courses. His areas of expertise are Oracle/Hyperion Essbase, Oracle Hyperion Planning and Hyperion Web Analysis.

    Read the article

  • How to Sync Specific Folders With Dropbox

    - by Matthew Guay
    Would you like to sync specific folders with Dropbox instead of automatically syncing all of your folders to all of your computers?  Here’s how using Selective Sync available in the latest Beta version of Dropbox. Dropbox is a great tool for keeping your important files synced between your computers, and we have covered many interesting things you can do with your Dropbox account.  But until now, there was no way to only sync certain folders with each computer; it was all or nothing.  This could be frustrating if you wanted to store large files from one computer but didn’t want them on a computer with a smaller hard drive.  The latest Beta version of Dropbox allows you to selectively choose which folders to sync between computers. Please Note: This feature is currently only available in the 0.8 beta version of Dropbox. Setup the new Beta Download the new beta version of Dropbox 0.8 (link below); choose the correct download for your system.  Run the installer as normal. It only took a couple seconds to install, though it made the taskbar disappear briefly at the end of the installation on our tests.  Strangely, the installer doesn’t let you know it’s finished installing; if you already had a previous version of Dropbox installed, it will simply start working from your system tray as before.  If this is a new installation of Dropbox, you will be asked to enter your Dropbox account info or create a new account.   Selectively Sync Folders By default, Dropbox will still sync all of your Dropbox folders to all of your computers.  Once this beta is installed, you can choose individual folders or subfolders you don’t want to sync.  Right-click the Dropbox icon in your system tray and select Preferences. Click the Advanced tab on the top, and then click the new Selective Sync button. Now uncheck any folders you don’t want to sync to this computer.  These folders will still exist on your other machines and in the Dropbox web interface, but they will not be downloaded to this computer. The default view only shows your top-level folders in your Dropbox account.  If you wish to sync certain folders but exclude their subfolders, click the Switch to Advanced View button.   Expand any folder and uncheck any subfolders you don’t want to sync.  Notice that the parent folder’s check box is filled now, showing that it is partially synced. Click OK when you’ve made the changes you want.  Dropbox will then make sure you know these folders will stop syncing to this computer; click OK again if you’re sure you don’t want to sync these folders.   Dropbox will cleanup your folder and remove the files and folders you don’t want synced.   Next time you open your Dropbox folder, you’ll notice that the folders we unchecked are no longer in this computer’s Dropbox folder.  They are still in our Dropbox online account, and on any other computers we’re syncing with. If you add a new folder with the same name as a folder you stopped syncing, you’ll notice a grey minus icon over the folder.  This folder will not sync with your other computers or your online Dropbox account. If you want to add these folders back to this computer’s Dropbox, just repeat the steps, this time checking the folders you want to sync.  If you have any folders that were not syncing before, their names will have (Selective Sync Conflict) added to the end, and will sync with all of your computers. Conclusion We’re excited that we can now choose exactly which folders we want synced on each computer.  
Since everything is still synced with the online Dropbox, we can still access any of the folders from anywhere.  This makes your Dropbox much more versatile, and can help you keep the folders synced exactly the way you want. Links: Download the new Dropbox 0.8.64 beta | Signup for Dropbox

    Read the article

  • RSS feeds in Orchard

    - by Bertrand Le Roy
    When we added RSS to Orchard, we wanted to make it easy for any module to expose any contents as a feed. We also wanted the rendering of the feed to be handled by Orchard in order to minimize the amount of work from the module developer. A typical example of such feed exposition is of course blog feeds. We have an IFeedManager interface for which you can get the built-in implementation through dependency injection. Look at the BlogController constructor for an example: public BlogController( IOrchardServices services, IBlogService blogService, IBlogSlugConstraint blogSlugConstraint, IFeedManager feedManager, RouteCollection routeCollection) { If you look a little further in that same controller, in the Item action, you’ll see a call to the Register method of the feed manager: _feedManager.Register(blog); This in reality is a call into an extension method that is specialized for blogs, but we could have made the two calls to the actual generic Register directly in the action instead, that is just an implementation detail: feedManager.Register(blog.Name, "rss", new RouteValueDictionary { { "containerid", blog.Id } }); feedManager.Register(blog.Name + " - Comments", "rss", new RouteValueDictionary { { "commentedoncontainer", blog.Id } }); What those two effective calls are doing is to register two feeds: one for the blog itself and one for the comments on the blog. For each call, the name of the feed is provided, then we have the type of feed (“rss”) and some values to be injected into the generic RSS route that will be used later to route the feed to the right providers. This is all you have to do to expose a new feed. If you’re only interested in exposing feeds, you can stop right there. If on the other hand you want to know what happens after that under the hood, carry on. What happens after that is that the feedmanager will take care of formatting the link tag for the feed (see FeedManager.GetRegisteredLinks). The GetRegisteredLinks method itself will be called from a specialized filter, FeedFilter. FeedFilter is an MVC filter and the event we’re interested in hooking into is OnResultExecuting, which happens after the controller action has returned an ActionResult and just before MVC executes that action result. In other words, our feed registration has already been called but the view is not yet rendered. Here’s the code for OnResultExecuting: model.Zones.AddAction("head:after", html => html.ViewContext.Writer.Write( _feedManager.GetRegisteredLinks(html))); This is another piece of code whose execution is differed. It is saying that whenever comes time to render the “head” zone, this code should be called right after. The code itself is rendering the link tags. As a result of all that, here’s what can be found in an Orchard blog’s head section: <link rel="alternate" type="application/rss+xml"     title="Tales from the Evil Empire"     href="/rss?containerid=5" /> <link rel="alternate" type="application/rss+xml"     title="Tales from the Evil Empire - Comments"     href="/rss?commentedoncontainer=5" /> The generic action that these two feeds point to is Index on FeedController. That controller has three important dependencies: an IFeedBuilderProvider, an IFeedQueryProvider and an IFeedItemProvider. Different implementations of these interfaces can provide different formats of feeds, such as RSS and Atom. The Match method enables each of the competing providers to provide a priority for themselves based on arbitrary criteria that can be found on the FeedContext. 
This means that a provider can be selected based not only on the desired format, but also on the nature of the objects being exposed as a feed, or on something even more arbitrary such as the destination device (you could imagine, for example, giving shorter text-only excerpts of posts on mobile devices and full HTML on the desktop). The key here is extensibility and dynamic competition and collaboration between unknown and loosely coupled parts. You’ll find this pattern pretty much everywhere in the Orchard architecture.

The RssFeedBuilder implementation of IFeedBuilderProvider is also a regular controller with a Process action that builds an RssResult, which is itself a thin ActionResult wrapper around an XDocument.

Let’s get back to the FeedController’s Index action. After having called into each known feed builder to get its priority for the currently requested feed, it selects the one with the highest priority. The next thing it needs to do is actually fetch the data for the feed. This again is a collaborative effort from a priori unknown providers, the implementations of IFeedQueryProvider. There are several implementations by default in Orchard, and the choice between them is again made through a Match method. ContainerFeedQuery, for example, chimes in when a “containerid” parameter is found in the context (see the URL in the link tag above):

public FeedQueryMatch Match(FeedContext context) {
    var containerIdValue = context.ValueProvider.GetValue("containerid");
    if (containerIdValue == null)
        return null;
    return new FeedQueryMatch { FeedQuery = this, Priority = -5 };
}

The actual work is done in the Execute method, which finds the right container content item in the Orchard database and adds an element for each of the items it contains. In other words, the feed query provider knows how to retrieve the list of content items to add to the feed.

The last step is to translate each of the content items into feed entries, which is done by implementations of IFeedItemBuilder. There is no Match method this time. Instead, all providers are called with the collection of items (or, more accurately, with the FeedContext, but this contains the list of items, which is what’s relevant in most cases). Each provider can then choose to pick the items that it knows how to treat and transform them into the requested format. This enables the construction of heterogeneous feeds that expose content items of various types in a single feed. That will be extremely important when you want to expose a single feed for your whole site.

So here are feeds in Orchard in a nutshell. The main point is that a fair number of components are involved, with some complexity in implementation in order to allow for extreme flexibility, but the part that you use to expose a new feed is extremely simple and light: declare that you want your content exposed as a feed and you’re done. There are cases where you’ll have to dive in and provide new implementations for some or all of the interfaces involved, but that requirement will only arise as needed. For example, you might need to create a new feed item builder to include your custom content type, but that effort will be extremely focused on the specialized task at hand. The rest of the system won’t need to change. So what do you think?
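To make the “simple and light” part concrete, here is a minimal sketch of what registering a feed from your own module’s controller could look like, using the same IFeedManager.Register overload as the blog calls above. The EventsController name, the “Upcoming events” feed title, and the using directives are assumptions for illustration; the exact namespaces may differ between Orchard versions.

using System.Web.Mvc;
using System.Web.Routing;
using Orchard.Core.Feeds; // assumed location of IFeedManager

public class EventsController : Controller {
    private readonly IFeedManager _feedManager;

    public EventsController(IFeedManager feedManager) {
        _feedManager = feedManager;
    }

    public ActionResult Item(int containerId) {
        // Declare the feed: display name, format, and the route values that the
        // generic /rss route will later hand to the feed query providers.
        _feedManager.Register("Upcoming events", "rss",
            new RouteValueDictionary { { "containerid", containerId } });

        // FeedFilter emits the <link rel="alternate"> tag into the head zone
        // when the view is rendered; nothing else is needed here.
        return View();
    }
}

Because "containerid" is one of the parameters that ContainerFeedQuery matches on, the existing query and item builders take care of producing the entries; no other feed code is required in the module.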

    Read the article

  • SQL SERVER – 5 Tips for Improving Your Data with expressor Studio

    - by pinaldave
It’s no secret that bad data leads to bad decisions and poor results. However, how do you prevent dirty data from taking up residence in your data store? Some might argue that it’s the responsibility of the person sending you the data. While that may be true, in practice it will rarely hold up. It doesn’t matter how many times you ask, you will get the data however they decide to provide it.

So now you have bad data. What constitutes bad data? There are quite a few valid answers, for example:
Invalid date values
Inappropriate characters
Wrong data
Values that exceed a pre-set threshold

While it is certainly possible to write your own scripts and custom SQL to identify and deal with these data anomalies, that effort often takes too long and becomes difficult to maintain. Instead, leveraging an ETL tool like expressor Studio makes the data cleansing process much easier and faster. Below are some tips for leveraging expressor to get your data into tip-top shape.

Tip 1: Build reusable data objects with embedded cleansing rules

One of the new features in expressor Studio 3.2 is the ability to define constraints at the metadata level. Using expressor’s concept of Semantic Types, you can define reusable data objects that have embedded logic such as constraints for dealing with dirty data. Once defined, they can be saved as a shared atomic type and then re-applied to other data attributes in other schemas. As you can see in the figure above, I’ve defined a constraint on zip code. I can then save the constraint rules I defined for zip code as a shared atomic type called zip_type, for example. The next time I get a different data source with a schema that also contains a zip code field, I can simply apply the shared atomic type (shown below) and the previously defined constraints will be automatically applied.

Tip 2: Unlock the power of regular expressions in Semantic Types

Another powerful feature introduced in expressor Studio 3.2 is the option to use regular expressions as a constraint. A regular expression is used to identify patterns within data. The patterns could be something as simple as a date format or something much more complex, such as a street address. For example, I could define that a valid IP address should be made up of 4 numbers, each 0 to 255, separated by periods. So 192.168.23.123 might be a valid IP address whereas 888.777.0.123 would not be. How can I account for this using regular expressions?

A very simple regular expression that would look for any four groups of one to three digits separated by periods would be: ^[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}$

Alternatively, the following would be the exact check for truly valid IP addresses as we defined them above: ^(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])\.(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])\.(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])\.(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])$

In expressor, we would enter this regular expression as a constraint like this: here we select the corrective action to be ‘Escalate’, meaning that the expressor Dataflow operator will decide what to do. Some of the options include rejecting the offending record, skipping it, or aborting the dataflow.

Tip 3: Email pattern expressions that might come in handy

In the example schema that I am using, there’s a field for email. Email addresses are often entered incorrectly because people are trying to avoid spam.
While there are a lot of different ways to define what constitutes a valid email address, a quick search online yields a couple of really useful regular expressions for validating email addresses.

This one is short and sweet: \b[A-Z0-9._%+-]+@[A-Z0-9.-]+\.[A-Z]{2,4}\b (Source: http://www.regular-expressions.info/)

This one is more specific about which characters are allowed: ^([a-zA-Z0-9_\-\.]+)@((\[[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.)|(([a-zA-Z0-9\-]+\.)+))([a-zA-Z]{2,4}|[0-9]{1,3})(\]?)$ (Source: http://regexlib.com/REDetails.aspx?regexp_id=26)

Tip 4: Reject “dirty data” for analysis or further processing

Yet another feature introduced in expressor Studio 3.2 is the ability to reject records based on constraint violations. To capture rejected records on input, simply specify Reject Record in the Error Handling setting for the Read File operator, then attach a Write File operator to the reject port of the Read File operator. Next, you can configure the Write File operator in a similar way to the Read File operator; the key difference is that its schema needs to be derived from the upstream operator. Once configured, expressor will output rejected records to the file you specified. In addition to the rejected records, expressor also captures some diagnostic information that is helpful in identifying why the record was rejected. This makes diagnosing errors much easier!

Tip 5: Use a Filter or Transform after the initial cleansing to finish the job

Sometimes you may want to predicate the data cleansing on a more complex set of conditions. For example, I may only be interested in processing data containing males over the age of 25 in certain zip codes. Using an expressor Filter operator, you can define the conditional logic that isolates the records of interest from the others. Alternatively, the expressor Transform operator can be used to alter the input value via a user-defined algorithm or transformation. It also supports the use of conditional logic, and data can be rejected based on constraint violations.

However, the best tip I can leave you with is to not constrain your solution design approach – expressor operators can be combined in many different ways to achieve the desired results. For example, in the expressor Dataflow below, I can post-process the reject data from the Filter which did not meet my pre-defined criteria and, if successful, Funnel it back into the flow so that it gets written to the target table.

I continue to be impressed that expressor offers all this functionality as part of their FREE expressor Studio desktop ETL tool, which you can download from here. Their Studio ETL tool is absolutely free, and they are very open about saying that if you want to deploy their software on a dedicated Windows Server, you need to purchase their server software, whose pricing is posted on their website.

Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology
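If you want to sanity-check these patterns before embedding them in a Semantic Type constraint, a few lines of C# are enough. This is only a sketch using the IP and email expressions quoted above and is not part of expressor itself; the class and constant names are made up for illustration.

using System;
using System.Text.RegularExpressions;

class RegexSanityCheck {
    // The "exact" IPv4 pattern from Tip 2 (each octet 0 to 255), split across lines for readability.
    const string IpPattern =
        @"^(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])\." +
        @"(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])\." +
        @"(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])\." +
        @"(25[0-5]|2[0-4][0-9]|1[0-9]{2}|[1-9]?[0-9])$";

    // The "short and sweet" email pattern from Tip 3.
    const string EmailPattern = @"\b[A-Z0-9._%+-]+@[A-Z0-9.-]+\.[A-Z]{2,4}\b";

    static void Main() {
        Console.WriteLine(Regex.IsMatch("192.168.23.123", IpPattern));  // True
        Console.WriteLine(Regex.IsMatch("888.777.0.123", IpPattern));   // False
        Console.WriteLine(Regex.IsMatch("someone@example.com",
            EmailPattern, RegexOptions.IgnoreCase));                    // True
        Console.WriteLine(Regex.IsMatch("not-an-email",
            EmailPattern, RegexOptions.IgnoreCase));                    // False
    }
}

The same patterns can then be entered as constraints in expressor, with a corrective action such as Escalate for values that fail the check.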

    Read the article

  • Company Review: Google Products

Google, Inc. offers an array of products and services to all of its end-users. However, its search capabilities are the foundation of Google’s current success and its primary business focus. Currently, Google offers over twenty different search applications that allow users to search the internet for books, maps, videos, images, products and much more. Its product decisions have allowed user demands to be met while keeping the focus on a free-of-charge model. This lets users access Google’s data free of charge and, together with the accuracy of the search results, indirectly gives Google a strong competitive advantage over its competitors. According to Google, Inc., it offers the following types of search capabilities:

Alerts: Get email updates on the topics of your choice
Blog Search: Find blogs on your favorite topics
Books: Search the full text of books
Custom Search: Create a customized search experience for your community
Desktop: Search and personalize your computer
Dictionary: Search for definitions of words and phrases
Directory: Search the web, organized by topic or category
Earth: Explore the world from your computer
Finance: Business info, news and interactive charts
GOOG-411: Find and connect for free with businesses from your phone
Images: Search for images on the web
Maps: View maps and directions
News: Search thousands of news stories
Patent Search: Search the full text of US Patents
Product Search: Search for stuff to buy
Scholar: Search scholarly papers
Toolbar: Add a search box to your browser
Trends: Explore past and present search trends
Videos: Search for videos on the web
Web Search: Search billions of web pages
Web Search Features: Find movies, music, stocks, books and more

Google’s free-of-charge business model is only one way it differentiates itself from its competition. There is also a strong focus on the accuracy of search results and the speed with which they are returned to the end-user. Quality function deployment (QFD) is a structured method used to help connect user needs to the design features of a project proposed to address those needs. According to the U.S. Department of Transportation Federal Highway Administration, this method is particularly useful in accounting for needs that are not easily articulated or precisely defined. Because QFD is so customer-driven, Google is in a constant state of change, continually reengineering its search algorithms and other dependent systems so that end-user requirements are constantly being met. Value engineering is a key example of this: Google is constantly trying to improve all aspects of its products, along with system maintainability and system interoperability. The Bridgefield Group defines value engineering as an organized methodology that identifies and selects the lowest lifecycle cost options in design, materials and processes that achieve the desired level of performance, reliability and customer satisfaction. In addition, it seeks to remove unnecessary costs in the above areas and is often a joint effort with cross-functional internal teams and relevant suppliers. Common issues that appear when developing large-scale systems like Google’s search applications include the modular design of a product and/or service and providing an accurate value analysis.
The Open System Joint Task Force defines modular design as a design approach that adheres to four fundamental tenets (cohesiveness, encapsulation, self-containment, and high binding) in order to design a system component as an independently operable unit that is subject to change. More specifically, M. S. Schmaltz describes modular software design this way: rather than having a large collection of statements strung together in one partition of in-line code, we segment or divide the statements into logical groups called modules. Each module performs one or two tasks, and then passes control to another module. By breaking up the code into "bite-sized chunks", so to speak, we are able to better control the flow of data and control. This is especially true in large software systems. Value analysis is a process to evaluate products and services based on effectiveness, safety, and cost. As defined by the Healthcare Financial Management Association, value analysis involves assessing the quality as well as the cost of a product or service.

“Operations Management deals with the design and management of products, processes, services and supply chains. It considers the acquisition, development, and utilization of resources that firms need to deliver the goods and services their clients want.” (MIT, 2010)

Google, Inc. encourages an open environment among all employees, also known as Googlers. This is reinforced by cross-functional teams composed of members from multiple departments and assigned to every project, so that every department, such as marketing, finance, and quality assurance, has input on every project. In addition, Google is known for its openness to new ideas regardless of the status or seniority of an employee. In fact, Google allows 20% of an employee’s time to be devoted to developing new ideas and/or pet projects. HumTech.com defines a cross-functional team as a collection of people with varied levels of skills and experience brought together to accomplish a task. As the name implies, cross-functional team members come from different organizational units, and cross-functional teams may be permanent or ad hoc.

Google’s search application product strategy primarily focuses on mass customization. This allows Google to create a base search application and return results to end-users quickly based on specific parameters and search settings. In addition, Google stores the data that is returned in case other end-users request the same results using the same customized settings. This allows Google to appear to render search results in virtually real time while allowing complete customization of the search criteria. Greg Vogl, a professor at Uganda Martyrs University, defines mass customization as a business giving its customers the opportunity to tailor its products or services to the customer’s specifications. The IT staff at Google plays a key role in ensuring that the search application’s product strategy is maintained, simply because the IT staff designs, develops, and maintains all of Google’s proprietary applications. In fact, they also maintain all network infrastructure to ensure that it is available to all end-users.
References:
http://www.google.com/intl/en/options/
http://ops.fhwa.dot.gov/freight/publications/ftat_user_guide/sec5.htm
http://www.bridgefieldgroup.com/bridgefieldgroup/glos9.htm#V
http://www.acq.osd.mil/osjtf/termsdef.html
http://www.cise.ufl.edu/~mssz/Pascal-CGS2462/prog-dsn.html
http://www.hfma.org/publications/business_caring_newsletter/exclusives/Supply+and+Inventory+Terms+Defined.htm
http://mitsloan.mit.edu/omg/om-definition.php
http://www.humtech.com/opm/grtl/ols/ols3.cfm
http://www.gregvogl.net/courses/mis1/glossary.htm

    Read the article
