Search Results

Search found 1369 results on 55 pages for 'clause'.


  • Noob LINQ - reading, filtering XML with XDocument

    - by user316117
    I'm just learning XDocument and LINQ queries. Here's some simple XML (which doesn't look formatted exactly right in this forum in my browser, but you get the idea . . .) <?xml version="1.0" encoding="utf-8"?> <quiz xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://www.example.com/name XMLFile2.xsd" title="MyQuiz1"> <q_a> <q_a_num>1</q_a_num> <q_>Here is question 1</q_> <_a>Here is the answer to 1</_a> </q_a> <q_a> <q_a_num>2</q_a_num> <q_>Here is question 2</q_> <_a>Here is the answer to 2</_a> </q_a> </quiz> I can iterate across all elements in my XML file and display their Name, Value, and NodeType in a ListBox like this, no problem: XDocument doc = XDocument.Load(sPath); IEnumerable<XElement> elems = doc.Descendants(); IEnumerable<XElement> elem_list = from elem in elems select elem; foreach (XElement element in elem_list) { String str0 = "Name = " + element.Name.ToString() + ", Value = " + element.Value.ToString() + ", Nodetype = " + element.NodeType.ToString(); System.Windows.Controls.Label strLabel = new System.Windows.Controls.Label(); strLabel.Content = str0; listBox1.Items.Add(strLabel); } ...but now I want to add a "where" clause to my query so that I only select elements with a certain name (e.g., "qa") but my element list comes up empty. I tried . . . IEnumerable<XElement> elem_list = from elem in elems where elem.Name.ToString() == "qa" select elem; Could someone please explain what I'm doing wrong? (and in general are there some good tips for debugging Queries?) Thanks in advance!
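    A hedged sketch of the likely fix (names taken from the XML above): the elements are named "q_a", not "qa", so the string comparison never matches anything. Comparing against the actual name, or letting Descendants() do the filtering, returns the expected elements:

        // Filter by the element name that actually appears in the XML ("q_a"):
        XDocument doc = XDocument.Load(sPath);

        IEnumerable<XElement> elem_list = from elem in doc.Descendants()
                                          where elem.Name.LocalName == "q_a"
                                          select elem;

        // Equivalent and simpler - pass the name straight to Descendants():
        IEnumerable<XElement> elem_list2 = doc.Descendants("q_a");

    As a debugging tip, the existing ListBox loop already prints element.Name for every node, which is a quick way to spot mismatches like this.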


  • why is this rails association loading individually after an eager load?

    - by codeman73
    I'm trying to avoid the N+1 queries problem with eager loading, but it's not working. The associated models are still being loaded individually. Here are the relevant ActiveRecords and their relationships: class Player < ActiveRecord::Base has_one :tableau end class Tableau < ActiveRecord::Base belongs_to :player has_many :tableau_cards has_many :deck_cards, :through => :tableau_cards end class TableauCard < ActiveRecord::Base belongs_to :tableau belongs_to :deck_card, :include => :card end class DeckCard < ActiveRecord::Base belongs_to :card has_many :tableaus, :through => :tableau_cards end class Card < ActiveRecord::Base has_many :deck_cards end and the query I'm using is inside this method of Player: def tableau_contains(card_id) self.tableau.tableau_cards = TableauCard.find :all, :include => [ {:deck_card => (:card)}], :conditions => ['tableau_cards.tableau_id = ?', self.tableau.id] contains = false for tableau_card in self.tableau.tableau_cards # my logic here, looking at attributes of the Card model, with # tableau_card.deck_card.card; # individual loads of related Card models related to tableau_card are done here end return contains end Does it have to do with scope? This tableau_contains method is down a few method calls in a larger loop, where I originally tried doing the eager loading because there are several places where these same objects are looped through and examined. Then I eventually tried the code as it is above, with the load just before the loop, and I'm still seeing the individual SELECT queries for Card inside the tableau_cards loop in the log. I can see the eager-loading query with the IN clause just before the tableau_cards loop as well. EDIT: additional info below with the larger, outer loop Here's the larger loop. It is inside an observer on after_save def after_save(pa) @game = Game.find(turn.game_id, :include => :goals) @game.players = Player.find :all, :include => [ {:tableau => (:tableau_cards)}, :player_goals ], :conditions => ['players.game_id =?', @game.id] for player in @game.players player.tableau.tableau_cards = TableauCard.find :all, :include => [ {:deck_card => (:card)}], :conditions => ['tableau_cards.tableau_id = ?', player.tableau.id] if(player.tableau_contains(card)) ... end end end
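    A hedged sketch of one likely fix (associations are from the question; the behavior claim is my reading of ActiveRecord): assigning to player.tableau.tableau_cards replaces the association's collection, which can discard what the earlier :include preloaded. Pushing the nested :include all the way down to :card in the one outer find, and then only reading the association, removes both the reassignment and the per-row Card loads:

        # One eager load for the whole chain (Rails 2.x nested-hash form):
        @game.players = Player.find :all,
          :include => [{ :tableau => { :tableau_cards => { :deck_card => :card } } },
                       :player_goals],
          :conditions => ['players.game_id = ?', @game.id]

        for player in @game.players
          # Read what was preloaded; don't reassign tableau_cards.
          for tableau_card in player.tableau.tableau_cards
            card = tableau_card.deck_card.card  # no per-row SELECT expected
          end
        end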


  • Optimize date query for large child tables: GiST or GIN?

    - by Dave Jarvis
    Problem: 72 child tables, each having a year index and a station index, are defined as follows: CREATE TABLE climate.measurement_12_013 ( -- Inherited from table climate.measurement_12_013: id bigint NOT NULL DEFAULT nextval('climate.measurement_id_seq'::regclass), -- Inherited from table climate.measurement_12_013: station_id integer NOT NULL, -- Inherited from table climate.measurement_12_013: taken date NOT NULL, -- Inherited from table climate.measurement_12_013: amount numeric(8,2) NOT NULL, -- Inherited from table climate.measurement_12_013: category_id smallint NOT NULL, -- Inherited from table climate.measurement_12_013: flag character varying(1) NOT NULL DEFAULT ' '::character varying, CONSTRAINT measurement_12_013_category_id_check CHECK (category_id = 7), CONSTRAINT measurement_12_013_taken_check CHECK (date_part('month'::text, taken)::integer = 12) ) INHERITS (climate.measurement) CREATE INDEX measurement_12_013_s_idx ON climate.measurement_12_013 USING btree (station_id); CREATE INDEX measurement_12_013_y_idx ON climate.measurement_12_013 USING btree (date_part('year'::text, taken)); (Foreign key constraints to be added later.) The following query runs abysmally slowly due to a full table scan: SELECT count(1) AS measurements, avg(m.amount) AS amount FROM climate.measurement m WHERE m.station_id IN ( SELECT s.id FROM climate.station s, climate.city c WHERE -- For one city ... -- c.id = 5182 AND -- Where stations are within an elevation range ... -- s.elevation BETWEEN 0 AND 3000 AND 6371.009 * SQRT( POW(RADIANS(c.latitude_decimal - s.latitude_decimal), 2) + (COS(RADIANS(c.latitude_decimal + s.latitude_decimal) / 2) * POW(RADIANS(c.longitude_decimal - s.longitude_decimal), 2)) ) <= 50 ) AND -- -- Begin extracting the data from the database. -- -- The data before 1900 is shaky; insufficient after 2009. -- extract( YEAR FROM m.taken ) BETWEEN 1900 AND 2009 AND -- Whittled down by category ... -- m.category_id = 1 AND m.taken BETWEEN -- Start date. (extract( YEAR FROM m.taken )||'-01-01')::date AND -- End date. Calculated by checking to see if the end date wraps -- into the next year. If it does, then add 1 to the current year. -- (cast(extract( YEAR FROM m.taken ) + greatest( -1 * sign( (extract( YEAR FROM m.taken )||'-12-31')::date - (extract( YEAR FROM m.taken )||'-01-01')::date ), 0 ) AS text)||'-12-31')::date GROUP BY extract( YEAR FROM m.taken ) The sluggishness comes from this part of the query: m.taken BETWEEN /* Start date. */ (extract( YEAR FROM m.taken )||'-01-01')::date AND /* End date. Calculated by checking to see if the end date wraps into the next year. If it does, then add 1 to the current year. */ (cast(extract( YEAR FROM m.taken ) + greatest( -1 * sign( (extract( YEAR FROM m.taken )||'-12-31')::date - (extract( YEAR FROM m.taken )||'-01-01')::date ), 0 ) AS text)||'-12-31')::date The HashAggregate from the plan shows a cost of 10006220141.11, which is, I suspect, on the astronomically huge side. There is a full table scan on the measurement table (itself having neither data nor indexes) being performed. The table aggregates 237 million rows from its child tables. Question: What is the proper way to index the dates to avoid full table scans? Options I have considered: GIN; GiST; rewriting the WHERE clause; separate year_taken, month_taken, and day_taken columns in the tables. What are your thoughts? Thank you!
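    A hedged sketch of the usual fix (index name is mine; the date bounds assume the stated 1900-2009 window; the station subquery is omitted for brevity): the slow BETWEEN compares m.taken against values computed from m.taken itself, so no index can ever be used for it, and GiST/GIN do not help with plain scalar date comparisons anyway. A btree index on the raw date column plus constant, half-open bounds is typically enough:

        -- One btree index per child table on the raw date column:
        CREATE INDEX measurement_12_013_taken_idx
            ON climate.measurement_12_013 USING btree (taken);

        -- Constant bounds instead of expressions derived from m.taken,
        -- so the planner can use the index (and constraint exclusion):
        SELECT extract(YEAR FROM m.taken) AS year,
               count(*)                   AS measurements,
               avg(m.amount)              AS amount
        FROM   climate.measurement m
        WHERE  m.category_id = 1
          AND  m.taken >= date '1900-01-01'
          AND  m.taken <  date '2010-01-01'
        GROUP  BY extract(YEAR FROM m.taken);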


  • Need help with joins in sqlalchemy

    - by Steve
    I'm new to Python, as well as SQL Alchemy, but not the underlying development and database concepts. I know what I want to do and how I'd do it manually, but I'm trying to learn how an ORM works. I have two tables, Images and Keywords. The Images table contains an id column that is its primary key, as well as some other metadata. The Keywords table contains only an id column (foreign key to Images) and a keyword column. I'm trying to properly declare this relationship using the declarative syntax, which I think I've done correctly. Base = declarative_base() class Keyword(Base): __tablename__ = 'Keywords' __table_args__ = {'mysql_engine' : 'InnoDB'} id = Column(Integer, ForeignKey('Images.id', ondelete='CASCADE'), primary_key=True) keyword = Column(String(32), primary_key=True) class Image(Base): __tablename__ = 'Images' __table_args__ = {'mysql_engine' : 'InnoDB'} id = Column(Integer, primary_key=True, autoincrement=True) name = Column(String(256), nullable=False) keywords = relationship(Keyword, backref='image') This represents a many-to-many relationship. One image can have many keywords, and one keyword can relate back to many images. I want to do a keyword search of my images. I've tried the following with no luck. Conceptually this would've been nice, but I understand why it doesn't work. image = session.query(Image).filter(Image.keywords.contains('boy')) I keep getting errors about no foreign key relationship, which seems clearly defined to me. I saw something about making sure I get the right 'join', and I'm using 'from sqlalchemy.orm import join', but still no luck. image = session.query(Image).select_from(join(Image, Keyword)).\ filter(Keyword.keyword == 'boy') I added the specific join clause to the query to help it along, though as I understand it, I shouldn't have to do this. image = session.query(Image).select_from(join(Image, Keyword, Image.id==Keyword.id)).filter(Keyword.keyword == 'boy') So finally I switched tactics and tried querying the keywords and then using the backreference. However, when I try to use the '.images' iterating over the result, I get an error that the 'image' property doesn't exist, even though I did declare it as a backref. result = session.query(Keyword).filter(Keyword.keyword == 'boy').all() I want to be able to query a unique set of image matches on a set of keywords. I just can't guess my way to the syntax, and I've spent days reading the SQL Alchemy documentation trying to piece this out myself. I would very much appreciate anyone who can point out what I'm missing.
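    A hedged sketch against the models as declared (the calls are standard SQLAlchemy; the two key details are that the join path goes through the declared relationship, and that the backref was named 'image', singular):

        # Join through the relationship so SQLAlchemy knows the path:
        images = (session.query(Image)
                  .join(Image.keywords)
                  .filter(Keyword.keyword == 'boy')
                  .all())

        # Or as an EXISTS over the collection:
        images = (session.query(Image)
                  .filter(Image.keywords.any(Keyword.keyword == 'boy'))
                  .all())

        # Going the other way, the backref is 'image', not 'images':
        for kw in session.query(Keyword).filter(Keyword.keyword == 'boy'):
            print kw.image.name

    For a unique set of images over several keywords, Keyword.keyword.in_([...]) combined with distinct() on the first query keeps the results unique.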


  • Using multiple aggregate functions in an algebraic expression in (ANSI) SQL statement

    - by morpheous
    I have the following aggregate functions (AGG FUNCs): foo(), foobar(), fredstats(), barneystats(). I want to know if I can use multiple AGG FUNCs in an algebraic expression. This may seem a strange/simplistic question for seasoned SQL developers - however, the reason I ask is that, so far, all AGG FUNC examples I have seen are of the simplistic variety e.g. max(salary) < 100, rather than using the AGG FUNCs in an expression which involves using multiple AGG FUNCs in an expression (like agg_func1() < agg_func2()). The information below should help clarify further. Given tables with the following schemas: CREATE TABLE item (id int, length float, weight float); CREATE TABLE item_info (item_id int, name varchar(32)); # Is it legal (ANSI) SQL to write queries of this format? SELECT id, name, foo, foobar, fredstats FROM A, B (SELECT id, foo(123) as foo, foobar('red') as foobar, fredstats('weight') as fredstats FROM item GROUP BY id HAVING [ALGEBRAIC EXPRESSION] ORDER BY id AS A), item_info AS B WHERE item.id = B.id Where: ALGEBRAIC EXPRESSION is the type of expression that can be used in a WHERE clause - for example: ((foo(x) < foobar(y)) AND foobar(y) IN (1,2,3)) OR (fredstats(x) <> 0)) I am using PostgreSQL as the db, but I would prefer to use ANSI SQL wherever possible. Assuming it is legal to include AGG FUNCS in the way I have done above, I'd like to know: Is there a more efficient way to write the above query? Is there any way I can speed up the query in terms of a judicious choice of indexes on the tables item and item_info? Is there a performance hit of using AGG FUNCs in an algebraic expression like I am (i.e. an expression involving the output of aggregate functions rather than constants)? Can the expression also include 'scaled' AGG FUNC? (for example: 2*foo(123) < -3*foobar(456) ) - will scaling (i.e. multiplying an AGG FUNC by a number) have an effect on performance? How can I write the query above using INNER JOINS instead?
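    To the core question: yes, ANSI SQL allows arbitrary expressions over several aggregates (scaled or not), but such expressions belong in HAVING or the SELECT list, never in WHERE, which filters rows before grouping. A hedged sketch using the question's own hypothetical aggregates, restructured as a plain inner join instead of the derived-table form:

        SELECT i.id,
               inf.name,
               foo(i.length)       AS foo_val,
               foobar(i.weight)    AS foobar_val,
               fredstats(i.weight) AS fredstats_val
        FROM   item i
        INNER  JOIN item_info inf ON inf.item_id = i.id
        GROUP  BY i.id, inf.name
        HAVING (foo(i.length) < foobar(i.weight) AND foobar(i.weight) IN (1, 2, 3))
            OR (2 * fredstats(i.weight) <> 0)   -- scaling an aggregate is fine
        ORDER  BY i.id;

    Multiplying an aggregate by a constant costs essentially nothing; indexes mostly help the join and grouping columns (item_info.item_id, item.id), since the aggregates must still visit every grouped row.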


  • Is READ UNCOMMITTED / NOLOCK safe in this situation?

    - by Ben Challenor
    I know that snapshot isolation would fix this problem, but I'm wondering if NOLOCK is safe in this specific case so that I can avoid the overhead. I have a table that looks something like this: drop table Data create table Data ( Id BIGINT NOT NULL, Date BIGINT NOT NULL, Value BIGINT, constraint Cx primary key (Date, Id) ) create nonclustered index Ix on Data (Id, Date) There are no updates to the table, ever. Deletes can occur but they should never contend with the SELECT because they affect the other, older end of the table. Inserts are regular and page splits to the (Id, Date) index are extremely common. I have a deadlock situation between a standard INSERT and a SELECT that looks like this: select top 1 Date, Value from Data where Id = @p0 order by Date desc because the INSERT acquires a lock on Cx (Date, Id; Value) and then Ix (Id, Date), but the SELECT acquires a lock on Ix (Id, Date) and then Cx (Date, Id; Value). This is because the SELECT first seeks on Ix and then joins to a seek on Cx. Swapping the clustered and non-clustered index would break this cycle, but it is not an acceptable solution because it would introduce cycles with other (more complex) SELECTs. If I add NOLOCK to the SELECT, can it go wrong in this case? Can it return: More than one row, even though I asked for TOP 1? No rows, even though one exists and has been committed? Worst of all, a row that doesn't satisfy the WHERE clause? I've done a lot of reading about this online, but the only reproductions of over- or under-count anomalies I've seen (one, two) involve a scan. This involves only seeks. Jeff Atwood has a post about using NOLOCK that generated a good discussion. I was particularly interested in a comment by Rick Townsend: Secondly, if you read dirty data, the risk you run is of reading the entirely wrong row. For example, if your select reads an index to find your row, then the update changes the location of the rows (e.g.: due to a page split or an update to the clustered index), when your select goes to read the actual data row, it's either no longer there, or a different row altogether! Is this possible with inserts only, and no updates? If so, then I guess even my seeks on an insert-only table could be dangerous. Update: I'm trying to figure out how snapshot isolation works. It seems to be row-based, where transactions read the table (with no shared lock!), find the row they are interested in, and then see if they need to get an old version of the row from the version store in tempdb. But in my case, no row will have more than one version, so the version store seems rather pointless. And if the row was found with no shared lock, how is it different to just using NOLOCK?
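    For comparison, the snapshot alternative is a small amount of code; a hedged sketch (the database name is a placeholder). Under SNAPSHOT the SELECT takes no shared locks and reads the last committed version of each row, which breaks the Ix-then-Cx / Cx-then-Ix lock cycle without NOLOCK's anomalies:

        -- Once, per database:
        ALTER DATABASE MyDb SET ALLOW_SNAPSHOT_ISOLATION ON;

        -- Then, in the reading session:
        SET TRANSACTION ISOLATION LEVEL SNAPSHOT;

        SELECT TOP 1 [Date], [Value]
        FROM Data
        WHERE Id = @p0
        ORDER BY [Date] DESC;

    On the closing question: with inserts only, the version store stays nearly empty, so the overhead being paid is mostly the 14-byte versioning tag added to each row; the difference from NOLOCK is the consistency guarantee, not the mechanism.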


  • Advanced SQL query with lots of joins

    - by lund.mikkel
    Hey fellow programmers. Okay, first let me say that this is a hard one. I know the presentation may be a little long. But I hope you'll bear with me and help me through anyway :D So I'm developing an advanced search for bicycles. I've got a lot of tables I need to join to find all, let's say, red and brown bikes. One bike may come in more than one color! I've made this query for now: SELECT DISTINCT p.products_id, #simple product id products_name, #product name products_attributes_id, #color id pov.products_options_values_name #color name FROM products p LEFT JOIN products_description pd ON p.products_id = pd.products_id INNER JOIN products_attributes pa ON pa.products_id = p.products_id LEFT JOIN products_options_values pov ON pov.products_options_values_id = pa.options_values_id LEFT JOIN products_options_search pos ON pov.products_options_values_id = pos.products_options_values_id WHERE pos.products_options_search_id = 4 #code for red OR pos.products_options_search_id = 5 #code for brown My first concern is the many joins. The Products table mainly holds product id and its image and the Products Description table holds more descriptive info such as name (and product ID of course). I then have the Products Options Values table which holds all the colors and their IDs. Products Options Search contains the color IDs along with a color group ID (products_options_search_id). Red has the color group code 4 (brown is 5). The products and colors have a many-to-many relationship managed inside Products Attributes. So my question is first of all: Is it okay to make so many joins? Is it hurting the performance? Second: If a bike comes in both red and brown, it'll show up twice even though I use SELECT DISTINCT. I think this is because of the INNER JOIN. Is this possible to avoid, or do I have to remove the doubles in my PHP code? Third: Bikes can be double colored (i.e. black and blue). This means that there are two rows for that bike. One where it says the color is black and one where it says it's blue. (See second question). But if I replace the OR in the WHERE clause with an AND, it removes both rows, because none of them fulfill the conditions - only the product. What is the workaround for that? I really hope you can help me. I'm a little desperate right now :D Regards Mikkel Lund
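    A hedged sketch addressing the second and third questions together (table and column names taken from the query above): grouping by product collapses the duplicate rows, and a HAVING count turns the OR into "must match both colors" - the standard relational-division trick:

        SELECT p.products_id,
               pd.products_name
        FROM   products p
               LEFT JOIN products_description pd ON pd.products_id = p.products_id
               INNER JOIN products_attributes pa ON pa.products_id = p.products_id
               LEFT JOIN products_options_values pov
                      ON pov.products_options_values_id = pa.options_values_id
               LEFT JOIN products_options_search pos
                      ON pos.products_options_values_id = pov.products_options_values_id
        WHERE  pos.products_options_search_id IN (4, 5)   -- red, brown
        GROUP  BY p.products_id, pd.products_name
        HAVING COUNT(DISTINCT pos.products_options_search_id) = 2;  -- both colors

    For "either color" semantics, keep the IN and drop the HAVING; the GROUP BY already removes the duplicates, so no PHP-side de-duplication is needed.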


  • Why SQL2008 debugger would NOT step into a certain child stored procedure

    - by John Galt
    I'm encountering differences in T-SQL with SQL2008 (vs. SQL2000) that are leading me to dead-ends. I've verified that the technique of sharing #TEMP tables between a caller which CREATES the #TEMP and the child sProc which references it remains valid in SQL2008 (see recent SO question). My core problem remains a critical "child" stored procedure that works fine in SQL2000 but fails in SQL2008 (i.e. a FROM clause in the child sProc is coded as: SELECT * FROM #AREAS A) despite #AREAS being created by the calling parent. Rather than post snippets of the code now, here is another symptom that may help you suggest something. I fired up the new debugger in SQL Mgmt Studio: EXEC dbo.AMS1 @S1='06',@C1='037',@StartDate='01/01/2008',@EndDate='07/31/2008',@Type=1,@ACReq = 1,@Output = 0,@NumofLines = 30,@SourceTable = 'P',@LoanPurposeCatg='P' This is a very large sProc and the key snippet that is weird is the following: create table #Areas ( State char(2) , County char(3) , ZipCode char(5) NULL , CityName varchar(28) NULL , PData varchar(3) NULL , RData varchar(3) NULL , SMSA_CD varchar(10) NULL , TypeCounty varchar(50) , StateAbbr char(2) ) EXECUTE dbo.AMS_I_GetAreasV5 -- this child populates #Areas @SMSA = @SMSA , @S1 = @S1 , @C1 = @C1 , @Z1 = @Z1 , @SourceTable = @SourceTable , @CustomID = @CustomID , @UserName = @UserName , @CityName = @CityName , @Debug=0 EXECUTE dbo.AMS_I_GetAreas_FixAC -- this child cannot reference #Areas @StartDate = @StartDate , @EndDate = @EndDate , @SMSA_CD = @SMSA_CD , @S1 = @S1 , @C1 = @C1 , @Z1 = @Z1 , @CityName = @CityName , @CustomID = @CustomID , @Debug=0 -- continuation of the parent sProc I can step through the execution of the parent stored procedure. When I get to the first child sproc above, I can either STEP INTO dbo.AMS_I_GetAreasV5 or STEP OVER its execution. When I arrive at the invocation of the 2nd child sProc - dbo.AMS_I_GetAreas_FixAC - I try to STEP INTO it (because that is where the problem statement is) and STEP INTO is ignored (i.e. treated like STEP OVER instead; yet I KNOW I pressed F11 not F10). It WAS executed however, because when control is returned to the statement after the EXECUTE, I click Continue to finish execution and the results windows shows the errors in the dbo.AMS_I_GetAreas_FixAC (i.e. the 2nd child) stored procedure. Is there a way to "pre-load" an sProc with the goal of setting a breakpoint on its entry so that I can pursue execution inside it? In summary, I wonder if the inability to step into a given child sproc might be related to the same inability of this particular child to reference a #temp created by its parent (caller).
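    On the underlying #temp problem, a hedged throwaway diagnostic (not production code) pasted at the top of dbo.AMS_I_GetAreas_FixAC will confirm whether the parent's temp table is visible in that scope, independently of what the debugger will step into:

        -- Temporary instrumentation for the child procedure:
        IF OBJECT_ID('tempdb..#Areas') IS NULL
            RAISERROR('#Areas is not visible in this scope', 16, 1);
        ELSE
            SELECT COUNT(*) AS area_rows FROM #Areas;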


  • XY-Scatter Chart In SSRS Won't Display Points

    - by Dalin Seivewright
    I'm a bit confused with this one. I have a Dataset with a BackupDate and a BackupTime as well as a BackupType. The BackupDate is comprised of 12 characters from the left of a datetime string within a table. The BackupTime is comprised of 8 characters from the right of that same datetime string. So for example: BackupDate would be 'December 12 2008' and the BackupTime would be '12:53PM.' I have added an XY-scatter chart to the report. I've added a 'series' value for the BackupType (so one can distinguish between a Full/Incr/Log backup). I've added a category value of BackupDate and set the Scale for the X-axis from the Min of BackupDate to the Max of BackupDate. I've then added an item to the Values with the Y variable set to BackupTime and the X variable set to BackupDate. The interval for the Y-axis is 12:00AM to 11:59PM and the formatting for the labels is 'hh:mmtt'. The BackupTime matches the format of the Y-axis. The BackupDate matches the format of the X-axis. 10 entries are retrieved by my Dataset and the Legend is properly populated by the BackupType field. No points are being plotted on the graph and no markers/pointers are shown if they are enabled. There should be a point on the graph for every point in time of each day there is a backup of a specific type. Am I missing something? Does anyone know of a good tutorial dealing specifically with XY-scatter graphs and using them in a way I intend? I am using the 2005 version of SSRS rather than the 2008 version. Screenshot of what my chart currently looks like: In case it could be dataset related: SELECT TOP (10) backup_type, LTRIM(RTRIM(LEFT(backup_finish_date, 12))) AS BackupDate, LTRIM(RTRIM(RIGHT(backup_finish_date, 8))) AS BackupTime FROM DBARepository.Backup_History As requested, here are the results of this query. There is a Where clause to constrain the results to a specific database of a specific server that was not included in the above SQL Query. Log Dec 26 2008 12:00PM Log Dec 27 2008 4:00AM Log Dec 27 2008 8:00AM Log Dec 27 2008 12:00PM Log Dec 27 2008 4:00PM Log Dec 27 2008 8:00PM Database Dec 27 2008 10:01PM Log Dec 28 2008 12:00AM Log Dec 28 2008 4:00AM Log Dec 28 2008 8:00AM
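    One hedged suggestion (the conversion assumes the stored strings parse under the session's date settings): a scatter chart needs actual date/time values to scale its axes, and the LEFT/RIGHT string slices stay (n)varchar, which the chart cannot place numerically. Converting in the dataset gives both axes plottable values:

        SELECT TOP (10)
               backup_type,
               CONVERT(datetime, LEFT(backup_finish_date, 12))  AS BackupDate,
               CONVERT(datetime, RIGHT(backup_finish_date, 8))  AS BackupTime
        FROM DBARepository.Backup_History

    BackupTime comes back anchored to 1900-01-01, which is harmless as long as the Y-axis minimum and maximum use that same anchor date.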


  • SQL Server CTE referred in self joins slow

    - by Kharlos Dominguez
    Hello, I have written a table-valued UDF that starts with a CTE to return a subset of the rows from a large table. There are several joins in the CTE: a couple of inner joins and one left join to other tables, which don't contain a lot of rows. The CTE has a where clause that returns the rows within a date range, in order to return only the rows needed. I'm then referencing this CTE in 4 self left joins, in order to build subtotals using different criteria. The query is quite complex but here is a simplified pseudo-version of it WITH DataCTE as ( SELECT [columns] FROM table INNER JOIN table2 ON [...] INNER JOIN table3 ON [...] LEFT JOIN table3 ON [...] ) SELECT [aggregates_columns of each subset] FROM DataCTE Main LEFT JOIN DataCTE BananasSubset ON [...] AND Product = 'Bananas' AND Quality = 100 LEFT JOIN DataCTE DamagedBananasSubset ON [...] AND Product = 'Bananas' AND Quality < 20 LEFT JOIN DataCTE MangosSubset ON [...] GROUP BY [ I have the feeling that SQL Server gets confused and calls the CTE for each self join, which seems confirmed by looking at the execution plan, although I confess not being an expert at reading those. I would have assumed SQL Server to be smart enough to perform the data retrieval from the CTE only once, rather than do it several times. I have tried the same approach but rather than using a CTE to get the subset of the data, I used the same select query as in the CTE, but made it output to a temp table instead. The version referring to the CTE takes 40 seconds. The version referring to the temp table takes between 1 and 2 seconds. Why isn't SQL Server smart enough to keep the CTE results in memory? I like CTEs, especially in this case as my UDF is a table-valued one, so it allowed me to keep everything in a single statement. To use a temp table, I would need to write a multi-statement table valued UDF, which I find a slightly less elegant solution. Have some of you had this kind of performance issue with CTEs, and if so, how did you get it sorted? Thanks, Kharlos
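    To the "why": SQL Server treats a CTE as an inline view and expands it at every reference, so four self-joins can mean four full evaluations; there is no automatic materialization. A hedged sketch of the temp-table variant inside a multi-statement table-valued UDF, where a table variable plays the #temp role (temp tables are not allowed inside functions; column names are placeholders):

        DECLARE @Data TABLE (
            SomeKey  int,
            Product  varchar(32),
            Quality  int
            -- ... remaining [columns]
        );

        -- Evaluate the expensive join once:
        INSERT INTO @Data (SomeKey, Product, Quality)
        SELECT [columns]
        FROM   table1
               INNER JOIN table2 ON [...]
               LEFT JOIN table3  ON [...]
        WHERE  [date range];

        -- The self left joins now hit the materialized rows:
        SELECT [aggregates]
        FROM   @Data Main
               LEFT JOIN @Data BananasSubset
                      ON [...] AND BananasSubset.Product = 'Bananas'
                               AND BananasSubset.Quality = 100
               LEFT JOIN @Data MangosSubset ON [...]
        GROUP  BY [...];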


  • Catching error caused by InitialContext.lookup

    - by Martin Schröder
    I'm developing a command line client (Java SE6) that now needs to talk to a Glassfish 2.1 server. The code for setting up this connection is try { final InitialContext context = new InitialContext(); final String ejbName = GeneratorCancelledRemote.class.getName(); generatorCancelled = (GeneratorCancelledRemote) context.lookup(ejbName); } catch (Throwable t) { System.err.println("--> Could not call server:"); t.printStackTrace(System.err); runWithOutEJB = true; } I'm now testing it without a running server and the client (when run from Eclipse 4.2) just bombs with 31.10.2012 10:40:09 com.sun.corba.ee.impl.transport.SocketOrChannelConnectionImpl WARNUNG: "IOP00410201: (COMM_FAILURE) Connection failure: socketType: IIOP_CLEAR_TEXT; hostname: localhost; port: 3700" org.omg.CORBA.COMM_FAILURE: vmcid: SUN minor code: 201 completed: No at com.sun.corba.ee.impl.logging.ORBUtilSystemException.connectFailure(ORBUtilSystemException.java:2783) at com.sun.corba.ee.impl.logging.ORBUtilSystemException.connectFailure(ORBUtilSystemException.java:2804) at com.sun.corba.ee.impl.transport.SocketOrChannelConnectionImpl.<init>(SocketOrChannelConnectionImpl.java:261) at com.sun.corba.ee.impl.transport.SocketOrChannelConnectionImpl.<init>(SocketOrChannelConnectionImpl.java:274) at com.sun.corba.ee.impl.transport.SocketOrChannelContactInfoImpl.createConnection(SocketOrChannelContactInfoImpl.java:130) at com.sun.corba.ee.impl.protocol.CorbaClientRequestDispatcherImpl.beginRequest(CorbaClientRequestDispatcherImpl.java:192) at com.sun.corba.ee.impl.protocol.CorbaClientDelegateImpl.request(CorbaClientDelegateImpl.java:184) at com.sun.corba.ee.impl.protocol.CorbaClientDelegateImpl.is_a(CorbaClientDelegateImpl.java:328) at org.omg.CORBA.portable.ObjectImpl._is_a(ObjectImpl.java:112) at org.omg.CosNaming.NamingContextHelper.narrow(NamingContextHelper.java:69) at com.sun.enterprise.naming.SerialContext.narrowProvider(SerialContext.java:134) at com.sun.enterprise.naming.SerialContext.getCachedProvider(SerialContext.java:259) at com.sun.enterprise.naming.SerialContext.getRemoteProvider(SerialContext.java:204) at com.sun.enterprise.naming.SerialContext.getProvider(SerialContext.java:159) at com.sun.enterprise.naming.SerialContext.lookup(SerialContext.java:409) at javax.naming.InitialContext.lookup(InitialContext.java:392) at com.werkii.latex.generator.Generator.main(Generator.java:344) Caused by: java.lang.RuntimeException: java.net.ConnectException: Connection refused: connect at com.sun.enterprise.iiop.IIOPSSLSocketFactory.createSocket(IIOPSSLSocketFactory.java:347) at com.sun.corba.ee.impl.transport.SocketOrChannelConnectionImpl.<init>(SocketOrChannelConnectionImpl.java:244) ... 14 more Caused by: java.net.ConnectException: Connection refused: connect at sun.nio.ch.Net.connect(Native Method) at sun.nio.ch.SocketChannelImpl.connect(SocketChannelImpl.java:532) at com.sun.corba.ee.impl.orbutil.ORBUtility.openSocketChannel(ORBUtility.java:105) at com.sun.enterprise.iiop.IIOPSSLSocketFactory.createSocket(IIOPSSLSocketFactory.java:332) ... 15 more It's o.k. for now (while I'm still in development) that it bombs, but it does this repeatedly and the catch clause is never reached (even though I'm catching Throwable) - the message is not printed. So how can I handle connection errors during lookup in my program?
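    A hedged observation plus sketch (the logger name is an assumption about the Sun ORB's logging domain, not a verified constant): the block above is a java.util.logging WARNING record - note the "WARNUNG:" prefix from the German locale - which the ORB emits on its own before the failure surfaces from lookup(), so stderr fills up regardless of what the catch block does afterwards. Raising the logger threshold and catching the declared NamingException separates the logging from the error handling:

        import java.util.logging.Level;
        import java.util.logging.Logger;
        import javax.naming.InitialContext;
        import javax.naming.NamingException;

        // Assumption: the Sun/GlassFish ORB logs under this domain.
        Logger.getLogger("javax.enterprise.resource.corba")
              .setLevel(Level.SEVERE);

        try {
            InitialContext context = new InitialContext();
            String ejbName = GeneratorCancelledRemote.class.getName();
            generatorCancelled =
                (GeneratorCancelledRemote) context.lookup(ejbName);
        } catch (NamingException e) {   // what lookup() actually declares
            System.err.println("--> Could not call server: " + e.getMessage());
            runWithOutEJB = true;
        }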


  • GHC.Generics and Type Families

    - by jberryman
    This is a question related to my module here, and is simplified a bit. It's also related to this previous question, in which I oversimplified my problem and didn't get the answer I was looking for. I hope this isn't too specific, and please change the title if you can think of a better one. Background My module uses a concurrent chan, split into a read side and write side. I use a special class with an associated type synonym to support polymorphic channel "joins": {-# LANGUAGE TypeFamilies #-} class Sources s where type Joined s newJoinedChan :: IO (s, Messages (Joined s)) -- NOT EXPORTED --output and input sides of channel: data Messages a -- NOT EXPORTED data Mailbox a instance Sources (Mailbox a) where type Joined (Mailbox a) = a newJoinedChan = undefined instance (Sources a, Sources b)=> Sources (a,b) where type Joined (a,b) = (Joined a, Joined b) newJoinedChan = undefined -- and so on for tuples of 3,4,5... The code above allows us to do this kind of thing: example = do (mb , msgsA) <- newJoinedChan ((mb1, mb2), msgsB) <- newJoinedChan --say that: msgsA, msgsB :: Messages (Int,Int) --and: mb :: Mailbox (Int,Int) -- mb1,mb2 :: Mailbox Int We have a recursive action called a Behavior that we can run on the messages we pull out of the "read" end of the channel: newtype Behavior a = Behavior (a -> IO (Behavior a)) runBehaviorOn :: Behavior a -> Messages a -> IO () -- NOT EXPORTED This would allow us to run a Behavior (Int,Int) on either of msgsA or msgsB, where in the second case both Ints in the tuple it receives actually came through separate Mailboxes. This is all tied together for the user in the exposed spawn function spawn :: (Sources s) => Behavior (Joined s) -> IO s ...which calls newJoinedChan and runBehaviorOn, and returns the input Sources. What I'd like to do I'd like users to be able to create a Behavior of arbitrary product type (not just tuples), so for instance we could run a Behavior (Pair Int Int) on the example Messages above. I'd like to do this with GHC.Generics while still having a polymorphic Sources, but can't manage to make it work. spawn :: (Sources s, Generic (Joined s), Rep (Joined s) ~ ??) => Behavior (Joined s) -> IO s The parts of the above example that are actually exposed in the API are the fst of the newJoinedChan action, and Behaviors, so an acceptable solution can modify one or all of runBehaviorOn or the snd of newJoinedChan. I'll also be extending the API above to support sums (not implemented yet) like Behavior (Either a b) so I hoped GHC.Generics would work for me. Questions Is there a way I can extend the API above to support arbitrary Generic a=> Behavior a? If not using GHC's Generics, are there other ways I can get the API I want with minimal end-user pain (i.e. they just have to add a deriving clause to their type)?
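    A hedged sketch of one direction of the generic machinery (class and function names are mine, and this handles only flattening a product into nested pairs; the inverse mapping and sum support are the parts spawn would still need): an associated type family over the generic Rep computes the tuple shape, which can then line up with the Joined type that newJoinedChan already produces:

        {-# LANGUAGE DeriveGeneric, TypeFamilies, TypeOperators #-}
        import GHC.Generics

        class GFlatten f where
          type Flat f
          gflatten :: f p -> Flat f

        instance GFlatten (K1 i a) where                  -- a single field
          type Flat (K1 i a) = a
          gflatten (K1 a) = a

        instance GFlatten f => GFlatten (M1 i c f) where  -- strip metadata
          type Flat (M1 i c f) = Flat f
          gflatten (M1 f) = gflatten f

        instance (GFlatten f, GFlatten g) => GFlatten (f :*: g) where
          type Flat (f :*: g) = (Flat f, Flat g)          -- products become pairs
          gflatten (f :*: g) = (gflatten f, gflatten g)

        toFlat :: (Generic a, GFlatten (Rep a)) => a -> Flat (Rep a)
        toFlat = gflatten . from

        data Pair a b = Pair a b deriving (Generic)

        example :: (Int, Int)
        example = toFlat (Pair 1 2 :: Pair Int Int)

    A spawn along the lines of spawn :: (Sources s, Generic b, GFlatten (Rep b), Flat (Rep b) ~ Joined s) => Behavior b -> IO s could then convert incoming tuples back into b before handing them to the Behavior - that inverse direction needs a matching gunflatten, which is where the real work is.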


  • using a Singleton to pass credentials in a multi-tenant application a code smell?

    - by Hans Gruber
    Currently working on a multi-tenant application that employs Shared DB/Shared Schema approach. IOW, we enforce tenant data segregation by defining a TenantID column on all tables. By convention, all SQL reads/writes must include a Where TenantID = '?' clause. Not an ideal solution, but hindsight is 20/20. Anyway, since virtually every page/workflow in our app must display tenant specific data, I made the (poor) decision at the project's outset to employ a Singleton to encapsulate the current user credentials (i.e. TenantID and UserID). My thinking at the time was that I didn't want to add a TenantID parameter to each and every method signature in my Data layer. Here's what the basic pseudo-code looks like: public class UserIdentity { public UserIdentity(int tenantID, int userID) { TenantID = tenantID; UserID = userID; } public int TenantID { get; private set; } public int UserID { get; private set; } } public class AuthenticationModule : IHttpModule { public void Init(HttpApplication context) { context.AuthenticateRequest += new EventHandler(context_AuthenticateRequest); } private void context_AuthenticateRequest(object sender, EventArgs e) { var userIdentity = _authenticationService.AuthenticateUser(sender); if (userIdentity == null) { //authentication failed, so redirect to login page, etc } else { //put the userIdentity into the HttpContext object so that //its only valid for the lifetime of a single request HttpContext.Current.Items["UserIdentity"] = userIdentity; } } } public static class CurrentUser { public static UserIdentity Instance { get { return HttpContext.Current.Items["UserIdentity"]; } } } public class WidgetRepository: IWidgetRepository{ public IEnumerable<Widget> ListWidgets(){ var tenantId = CurrentUser.Instance.TenantID; //call sproc with tenantId parameter } } As you can see, there are several code smells here. This is a singleton, so it's already not unit test friendly. On top of that you have a very tight-coupling between CurrentUser and the HttpContext object. By extension, this also means that I have a reference to System.Web in my Data layer (shudder). I want to pay down some technical debt this sprint by getting rid of this singleton for the reasons mentioned above. I have a few thoughts on what an better implementation might be, but if anyone has any guidance or lessons learned they could share, I would be much obliged.
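    A hedged sketch of one common way to pay down that debt (type names are mine, not from any framework): replace the static singleton with an injected abstraction, so the data layer depends on an interface instead of System.Web, and tests can pass a fake:

        // The data layer depends only on this:
        public interface IUserContext
        {
            int TenantID { get; }
            int UserID { get; }
        }

        // Lives in the web project - the only code that touches HttpContext.
        public class HttpUserContext : IUserContext
        {
            private UserIdentity Identity
            {
                get { return (UserIdentity)HttpContext.Current.Items["UserIdentity"]; }
            }
            public int TenantID { get { return Identity.TenantID; } }
            public int UserID { get { return Identity.UserID; } }
        }

        public class WidgetRepository : IWidgetRepository
        {
            private readonly IUserContext _user;

            // Tests inject a fake; the web app injects HttpUserContext,
            // ideally via an IoC container rather than by hand.
            public WidgetRepository(IUserContext user)
            {
                _user = user;
            }

            public IEnumerable<Widget> ListWidgets()
            {
                var tenantId = _user.TenantID;
                //call sproc with tenantId parameter (as before)
                return new List<Widget>();
            }
        }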


  • HTTP Handlers in Win2008/IIS7

    - by Keith Barrows
    We are migrating our web sites from Win2003/IIS6 to Win2008/IIS7. Our .NET code is in a WAP form with compiled binaries. I do dev work on a Win7/IIS7 box so had to learn early how to set up HTTP Handlers in this newer environment. What I have that has worked fine on my box is: <system.webServer> <handlers> <remove name="WebServiceHandlerFactory-Integrated" /> <remove name="ScriptHandlerFactory" /> <remove name="ScriptHandlerFactoryAppServices" /> <remove name="ScriptResource" /> <add name="RivWorks" path="*.riv" verb="*" modules="IsapiModule" scriptProcessor="C:\Windows\Microsoft.NET\Framework\v2.0.50727\aspnet_isapi.dll" resourceType="Unspecified" requireAccess="Script" preCondition="classicMode,runtimeVersionv2.0,bitness32" /> <add name="RivWorks2" path="*.riv2" verb="*" modules="IsapiModule" scriptProcessor="C:\Windows\Microsoft.NET\Framework\v2.0.50727\aspnet_isapi.dll" resourceType="Unspecified" requireAccess="Script" preCondition="classicMode,runtimeVersionv2.0,bitness32" /> <add name="ScriptHandlerFactory" verb="*" path="*.asmx" preCondition="integratedMode" type="System.Web.Script.Services.ScriptHandlerFactory, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" /> <add name="ScriptHandlerFactoryAppServices" verb="*" path="*_AppService.axd" preCondition="integratedMode" type="System.Web.Script.Services.ScriptHandlerFactory, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" /> <add name="ScriptResource" verb="GET,HEAD" path="ScriptResource.axd" preCondition="integratedMode" type="System.Web.Handlers.ScriptResourceHandler, System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35" /> </handlers> All I am getting on the new web site when I try to call into the *.riv handler is: 404 - File or directory not found. The resource you are looking for might have been removed, had its name changed, or is temporarily unavailable. OK. You see interesting things when writing out these questions. Our server is setup in integrated mode and runs on a x64 system. So, I changed the precondition clause to: preCondition="integratedMode,runtimeVersionv2.0,bitness64" Now I get this instead: 500 - Internal server error. There is a problem with the resource you are looking for, and it cannot be displayed. Any ideas of what I should be doing, where I should be looking? TIA
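    Two hedged directions, consistent with the symptoms (the handler type and assembly below are placeholders, not real names): a 404 is what IIS returns when none of a handler's preConditions match, and classicMode never matches an integrated-pipeline application pool, while bitness32 never matches a 64-bit worker process. Either register the handler as managed code for the integrated pipeline, or keep the ISAPI route and make every precondition match the pool:

        <!-- Option 1: integrated pipeline, managed handler
             (RivWorks.Web.RivHandler is a hypothetical IHttpHandler). -->
        <add name="RivWorks" path="*.riv" verb="*"
             type="RivWorks.Web.RivHandler, RivWorks.Web"
             resourceType="Unspecified" preCondition="integratedMode" />

        <!-- Option 2: keep IsapiModule, but on a 64-bit pool use the
             Framework64 DLL and bitness64 - and note that classicMode in the
             preCondition still requires a Classic-mode application pool. -->
        <add name="RivWorks" path="*.riv" verb="*" modules="IsapiModule"
             scriptProcessor="C:\Windows\Microsoft.NET\Framework64\v2.0.50727\aspnet_isapi.dll"
             resourceType="Unspecified" requireAccess="Script"
             preCondition="classicMode,runtimeVersionv2.0,bitness64" />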


  • Cakephp beforeFind() - How do I add a JOIN condition AFTER the belongsTo association is added?

    - by michael
    I'm in Model->beforeFind($queryData), trying to add a JOIN condition to the queryData on a model which has belongsTo associations. Unfortunately, the new JOIN references a table in the belongsTo association, so it must appear AFTER the belongsTo in the query. Here is my Tagged->belongsTo association: app\plugins\tags\models\tagged.php (line 192) Array ( [Tag] => Array ( [className] => Tag [foreignKey] => tag_id [conditions] => [fields] => [order] => [counterCache] => ) [Group] => Array ( [className] => Group [foreignKey] => foreign_key [conditions] => Array ( [Tagged.model] => Group ) [fields] => [order] => [counterCache] => ) ) Here is the JOIN added in Tagged->beforeFind(), notice that the belongsTo joins have not yet been added: app\plugins\tags\models\tagged.php (line 194) Array ( [conditions] => Array ( [Tag.keyname] => europe ) [fields] => Array ( [0] => DISTINCT Group.* [1] => GroupPermission.* ) [joins] => Array ( [0] => Array ( [table] => permissions [alias] => GroupPermission [foreignKey] => [type] => INNER [conditions] => Array ( [GroupPermission.model] => Group [0] => GroupPermission.foreignId = Group.id [or] => Array ( ... ) ) ) ) [limit] => [offset] => [order] => Array ( [0] => ) [page] => 1 [group] => [callbacks] => 1 [by] => europe [model] => Group ) When I check the output, it fails with "1054: Unknown column 'Group.id' in 'on clause'" because the Permissions join appeared BEFORE the Groups join. SELECT DISTINCT `Group`.*, `GroupPermission`.* FROM `tagged` AS `Tagged` INNER JOIN permissions AS `GroupPermission` ON (`GroupPermission`.`model` = 'Group' AND `GroupPermission`.`foreignId` = `Group`.`id` AND (...)) LEFT JOIN `tags` AS `Tag` ON (`Tagged`.`tag_id` = `Tag`.`id`) LEFT JOIN `groups` AS `Group` ON (`Tagged`.`foreign_key` = `Group`.`id` AND `Tagged`.`model` = 'Group') WHERE `Tag`.`keyname` = 'europe' But this SQL (with Permissions joined moved to the end) works fine: SELECT DISTINCT `Group`.*, `GroupPermission`.* FROM `tagged` AS `Tagged` LEFT JOIN `tags` AS `Tag` ON (`Tagged`.`tag_id` = `Tag`.`id`) LEFT JOIN `groups` AS `Group` ON (`Tagged`.`foreign_key` = `Group`.`id` AND `Tagged`.`model` = 'Group') INNER JOIN permissions AS `GroupPermission` ON (`GroupPermission`.`model` = 'Group' AND `GroupPermission`.`foreignId` = `Group`.`id` AND (...)) WHERE `Tag`.`keyname` = 'europe' How do I add my join in beforeFind() after the belongsTo join?
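    A hedged sketch for Cake 1.x (exact callback mechanics may vary by version): since the permissions join references Group, one workaround is to take over the join order yourself - unbind the belongsTo associations for this find and emit all the joins explicitly, with the permissions join appended last:

        function beforeFind($queryData) {
            // Skip the automatic LEFT JOINs for this find only:
            $this->unbindModel(array('belongsTo' => array('Tag', 'Group')));

            // Re-add them by hand, ahead of the permissions join:
            $queryData['joins'] = array_merge(array(
                array('table' => 'tags', 'alias' => 'Tag', 'type' => 'LEFT',
                      'conditions' => array('Tagged.tag_id = Tag.id')),
                array('table' => 'groups', 'alias' => 'Group', 'type' => 'LEFT',
                      'conditions' => array('Tagged.foreign_key = Group.id',
                                            "Tagged.model = 'Group'")),
            ), (array)$queryData['joins']);

            return $queryData;
        }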


  • Issues querying Access '07 database in C#

    - by Kye
    I'm doing a .NET unit as part of my studies. I've only just started, with a lecturer that has kinda failed to give me the most solid foundation with .NET, so excuse the noobishness. I'm making a pretty simple and generic database-driven application. I'm using C# and I'm accessing a Microsoft Access 2007 database. I've put the database-ish stuff in its own class with the methods just spitting out OleDbDataAdapters that I use for committing. I feed any methods which perform a query a DataSet object from the main program, which is where I'm keeping the data (multiple tables in the db). I've made a very generic private method that I use to perform SQL SELECT queries and have some public methods wrapping that method to get products, orders, etc. (it's a generic retail database). The generic method uses a separate Connect method to actually make the connection, and it is as follows: private static OleDbConnection Connect() { OleDbConnection conn = new OleDbConnection( @"Provider=Microsoft.ACE.OLEDB.12.0; Data Source=C:\Temp\db.accdb"); return conn; } The generic method is as follows: private static OleDbDataAdapter GenericSelectQuery( DataSet ds, string namedTable, String selectString) { OleDbCommand oleCommand = new OleDbCommand(); OleDbConnection conn = Connect(); oleCommand.CommandText = selectString; oleCommand.Connection = conn; oleCommand.CommandType = CommandType.Text; OleDbDataAdapter adapter = new OleDbDataAdapter(); adapter.SelectCommand = oleCommand; adapter.MissingSchemaAction = MissingSchemaAction.AddWithKey; adapter.Fill(ds, namedTable); return adapter; } The wrapper methods just pass along the DataSet that they received from the main program, the namedtable string is the name of the table in the dataset, and you pass in the query you wish to make. It doesn't matter which query I give it (even something simple like SELECT * FROM TableName), I still get thrown an OleDbException, stating that there was an error with the FROM clause of the query. I've just resorted to building the queries with Access, but it's still no use. Obviously there's something wrong with my code, which wouldn't actually surprise me. Here are some wrapper methods I'm using. public static OleDbDataAdapter GetOrderLines(DataSet ds) { OleDbDataAdapter adapter = GenericSelectQuery( ds, "orderlines", "SELECT OrderLine.* FROM OrderLine;"); return adapter; } They all look the same, it's just the SQL that changes.
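    A hedged guess at the cause (an assumption, but the most common one for this exact message): Access/ACE raises "syntax error in FROM clause" when an identifier in the query collides with a reserved word - a table named Order, for example - and bracketing the names defuses it. The wrapper then only needs the SQL adjusted:

        public static OleDbDataAdapter GetOrderLines(DataSet ds)
        {
            // Brackets protect identifiers the ACE SQL parser may otherwise
            // read as keywords (ORDER, DATE, VALUE, ...):
            return GenericSelectQuery(
                ds, "orderlines", "SELECT * FROM [OrderLine];");
        }

    It is worth bracketing the table name in every wrapper, since any one query touching a reserved-word table (an [Order] table would be the classic offender) produces the same exception regardless of how simple the SELECT is.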


  • MySQL search for user and their roles

    - by Jenkz
    I am re-writing the SQL which lets a user search for any other user on our site and also shows their roles. As an example, roles can be "Writer", "Editor", "Publisher". Each role links a User to a Publication. Users can take multiple roles within multiple publications. Example table setup: "users" : user_id, firstname, lastname "publications" : publication_id, name "link_writers" : user_id, publication_id "link_editors" : user_id, publication_id Current pseudo SQL: SELECT * FROM ( (SELECT user_id FROM users WHERE firstname LIKE '%Jenkz%') UNION (SELECT user_id FROM users WHERE lastname LIKE '%Jenkz%') ) AS dt JOIN (ROLES STATEMENT) AS roles ON roles.user_id = dt.user_id At the moment my roles statement is: SELECT dt2.user_id, dt2.publication_id, dt.role FROM ( (SELECT 'writer' AS role, link_writers.user_id, link_writers.publication_id FROM link_writers) UNION (SELECT 'editor' AS role, link_editors.user_id, link_editors.publication_id FROM link_editors) ) AS dt2 The reason for wrapping the roles statement in UNION clauses is that some roles are more complex and require a table join to find the publication_id and user_id. As an example "publishers" might be linked across two tables "link_publishers": user_id, publisher_group_id "link_publisher_groups": publisher_group_id, publication_id So in that instance, the query forming part of my UNION would be: SELECT 'publisher' AS role, link_publishers.user_id, link_publisher_groups.publication_id FROM link_publishers JOIN link_publisher_groups ON lpg.group_id = lp.group_id I'm pretty confident that my table setup is good (I was warned off the one-table-for-all system when researching the layout). My problem is that there are now 100,000 rows in the users table and up to 70,000 rows in each of the link tables. Initial lookup in the users table is fast, but the joining really slows things down. How can I only join on the relevant roles? -------------------------- EDIT ---------------------------------- Explain above. The bottom bit in red is the "WHERE firstname LIKE '%Jenkz%'"; the third row searches WHERE CONCAT(firstname, ' ', lastname) LIKE '%Jenkz%'. Hence the large row count, but I think this is unavoidable, unless there is a way to put an index across concatenated fields? The green bit at the top just shows the total rows scanned from the ROLES STATEMENT. You can then see each individual UNION clause (#6 - #12) which all show a large number of rows. Some of the indexes are normal, some are unique. It seems that MySQL isn't optimizing to use the dt.user_id as a comparison for the internal of the UNION statements. Is there any way to force this behaviour? Please note that my real setup is not publications and writers but "webmasters", "players", "teams" etc.
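    A hedged restructuring (table names from the question; optimizer behavior varies by MySQL version): two things usually dominate here - the leading-wildcard LIKE '%Jenkz%' cannot use an index on users, while the prefix form 'Jenkz%' can, and UNION's implicit de-duplication costs more than UNION ALL when the branches cannot overlap. Resolving the users first and joining the role union on user_id then looks like:

        SELECT u.user_id, u.firstname, u.lastname, r.role, r.publication_id
        FROM users u
        JOIN (
            SELECT 'writer' AS role, user_id, publication_id FROM link_writers
            UNION ALL
            SELECT 'editor', user_id, publication_id FROM link_editors
            UNION ALL
            SELECT 'publisher', lp.user_id, lpg.publication_id
            FROM link_publishers lp
            JOIN link_publisher_groups lpg
              ON lpg.publisher_group_id = lp.publisher_group_id
        ) AS r ON r.user_id = u.user_id
        WHERE u.firstname LIKE 'Jenkz%' OR u.lastname LIKE 'Jenkz%';

    Whether MySQL can push the user_id restriction into the derived table depends on the version; if it materializes the whole union first, issuing three separate indexed role queries from application code for the matched user ids is the blunt but reliable alternative.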


  • Is there anything wrong with having a few private methods exposing IQueryable<T> and all public meth

    - by Nate Bross
    I'm wondering if there is a better way to approach this problem. The objective is to reuse code. Let’s say that I have a Linq-To-SQL datacontext and I've written a "repository style" class that wraps up a lot of the methods I need and exposes IQueryables (so far, no problem). Now, I'm building a service layer to sit on top of this repository; many of the service methods will be 1-to-1 with repository methods, but some will not. I think a code sample will illustrate this better than words. public class ServiceLayer { MyClassDataContext context; IMyRepository rpo; public ServiceLayer(MyClassDataContext ctx) { context = ctx; rpo = new MyRepository(context); } private IQueryable<MyClass> ReadAllMyClass() { // pretend there is some complex business logic here // and maybe some filtering of the current users access to "all" // that I don't want to repeat in all of the public methods that access // MyClass objects. return rpo.ReadAllMyClass(); } public IEnumerable<MyClass> GetAllMyClass() { // call private IQueryable so we can do attional "in-database" processing return this.ReadAllMyClass(); } public IEnumerable<MyClass> GetActiveMyClass() { // call private IQueryable so we can do attional "in-database" processing // in this case a .Where() clause return this.ReadAllMyClass().Where(mc => mc.IsActive.Equals(true)); } #region "Something my class MAY need to do in the future" private IQueryable<MyOtherTable> ReadAllMyOtherTable() { // there could be additional constrains which define // "all" for the current user return context.MyOtherTable; } public IEnumerable<MyOtherTable> GetAllMyOtherTable() { return this.ReadAllMyOtherTable(); } public IEnumerable<MyOtherTable> GetInactiveOtherTable() { return this.ReadAllMyOtherTable.Where(ot => ot.IsActive.Equals(false)); } #endregion } This particular case is not the best illustration, since I could just call the repository directly in the GetActiveMyClass method, but let’s presume that my private IQueryable does some extra processing and business logic that I don't want to replicate in both of my public methods. Is that a bad way to attack an issue like this? I don't see it being so complex that it really warrants building a third class to sit between the repository and the service class, but I'd like to get your thoughts. For the sake of argument, let's presume two additional things. This service is going to be exposed through WCF and that each of these public IEnumerable methods will be calling a .Select(m => m.ToViewModel()) on each returned collection which will convert it to a POCO for serialization. The service will eventually need to expose some context.SomeOtherTable which won't be wrapped into the repository.
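    One hedged alternative to the private "master query" methods: express the shared business logic as composable IQueryable extension methods (the names below are mine). Both the repository and the service can chain them, and each WCF-facing method still ends with the .Select(m => m.ToViewModel()) projection:

        public static class MyClassQueries
        {
            // Pretend the complex business/access-control logic lives here.
            public static IQueryable<MyClass> VisibleTo(
                this IQueryable<MyClass> source, User user)
            {
                return source; // placeholder for the real filtering
            }

            public static IQueryable<MyClass> Active(
                this IQueryable<MyClass> source)
            {
                return source.Where(mc => mc.IsActive);
            }
        }

        // Service methods become one-liners that still compose in-database:
        public IEnumerable<MyClass> GetActiveMyClass()
        {
            return rpo.ReadAllMyClass().VisibleTo(currentUser).Active().ToList();
        }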


  • Execution plan issue requires reset on SQL Server 2005, how to determine cause?

    - by Tony Brandner
    We have a web application, running on top of SQL Server 2005, that delivers training to thousands of corporate students. Recently, we started seeing that a single specific query in the application went from 1 second to about 30 seconds in terms of execution time. The application started throwing timeouts in that area. Our first thought was that we may have incorrect indexes, so we reviewed the tables and indexes. However, similar queries elsewhere in the application also run quickly. Reviewing the indexes showed us that they were configured as expected. We were able to narrow it down to a single query, not a stored procedure. Running this query in SQL Studio also completes quickly. We tried running the application in a different server environment, so a different web server with the same query, parameters and database. The query still ran slow. The query is a fairly large one related to determining a student's current list of training. It includes joins and left joins on a dozen tables and subqueries. A few of the tables are fairly large (hundreds of thousands of rows) and some of the other tables are small lookup tables. The query uses a grouping clause and a few where conditions. A few of the tables are quite active and the contents change often, but the volume of added rows doesn't seem extreme. These symptoms led us to consider the execution plan. First off, as soon as we reset the execution plan cache with the SQL command 'DBCC FREEPROCCACHE', the problem went away. Unfortunately, the problem started to reoccur within a few days. The problem has continued to plague us for a while now. It's usually the same query, but we did appear to see the problem occur in another single query recently. It happens enough to be a nuisance. We're having a heck of a time trying to fix it since we can't reproduce it in any other environment other than production. I have downloaded the High Availability guide from Red Gate and I read up more on execution plans. I hope to run the profiler on the live server, but I'm a bit concerned about impact. I would like to ask - what is the best way to figure out what is triggering this problem? Has anyone else seen this same issue?
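    The symptoms described - fast in Management Studio, fixed by DBCC FREEPROCCACHE, recurring days later on one specific parameterized query - are the classic signature of parameter sniffing: a plan compiled for an atypical parameter value gets cached and reused for everyone. A hedged sketch of the two standard mitigations, both available in SQL Server 2005 (object and parameter names are placeholders):

        -- 1. Recompile this statement on every execution, so no single
        --    bad plan is ever reused:
        SELECT t.*
        FROM   dbo.StudentTraining t
        WHERE  t.StudentId = @StudentId
        OPTION (RECOMPILE);

        -- 2. Or compile once against a known-representative value:
        SELECT t.*
        FROM   dbo.StudentTraining t
        WHERE  t.StudentId = @StudentId
        OPTION (OPTIMIZE FOR (@StudentId = 12345));

    Capturing the cached plan (sys.dm_exec_query_plan) the next time the slowdown occurs and comparing it with the fast plan from SQL Studio would confirm or rule this out before changing any code.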


  • Benchmark of Java Try/Catch Block

    - by hectorg87
    I know that going into a catch block has a significant cost when executing a program; however, I was wondering if entering a try{} block also had any impact, so I started looking for an answer on Google, finding many opinions but no benchmarking at all. Some answers I found were: Java try/catch performance, is it recommended to keep what is inside the try clause to a minimum? Try Catch Performance Java Java try catch blocks However, they didn't answer my question with facts, so I decided to try it for myself. Here's what I did. I have a CSV file with this format: host;ip;number;date;status;email;uid;name;lastname;promo_code; where everything after status is optional and will not even have the corresponding ;, so when parsing, a validation has to be done to see if the value is there. Here's where the try/catch issue came to mind. The current code that I inherited in my company does this: StringTokenizer st=new StringTokenizer(line,";"); String host = st.nextToken(); String ip = st.nextToken(); String number = st.nextToken(); String date = st.nextToken(); String status = st.nextToken(); String email = ""; try{ email = st.nextToken(); }catch(NoSuchElementException e){ email = ""; } and it repeats what is done for email with uid, name, lastname and promo_code. and I changed everything to: if(st.hasMoreTokens()){ email = st.nextToken(); } and in fact it performs faster when parsing a file that doesn't have the optional columns. Here are the average times: --- Trying:122 milliseconds --- Checking:33 milliseconds however, here's what confused me and the reason I'm asking: When running the example with values for the optional columns in all 8000 lines of the CSV, the if() version still performs better than the try/catch version, so my question is: does the try block really have no performance impact on my code? Can somebody explain what's going on here? Thanks a lot
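    A minimal hedged harness along the lines described (timings vary by JVM; the point is that entering try is essentially free, while each thrown exception pays for object construction and stack-trace capture):

        import java.util.NoSuchElementException;
        import java.util.StringTokenizer;

        public class TokenBench {

            static String nextWithCatch(StringTokenizer st) {
                try {
                    return st.nextToken();
                } catch (NoSuchElementException e) {
                    return "";
                }
            }

            static String nextWithCheck(StringTokenizer st) {
                return st.hasMoreTokens() ? st.nextToken() : "";
            }

            public static void main(String[] args) {
                String line = "host;ip;number;date;status"; // optional fields absent
                int rows = 8000, fields = 10;

                long t0 = System.nanoTime();
                for (int i = 0; i < rows; i++) {
                    StringTokenizer st = new StringTokenizer(line, ";");
                    for (int f = 0; f < fields; f++) nextWithCatch(st);
                }
                long t1 = System.nanoTime();
                for (int i = 0; i < rows; i++) {
                    StringTokenizer st = new StringTokenizer(line, ";");
                    for (int f = 0; f < fields; f++) nextWithCheck(st);
                }
                long t2 = System.nanoTime();

                System.out.printf("Trying: %d ms, Checking: %d ms%n",
                        (t1 - t0) / 1000000, (t2 - t1) / 1000000);
            }
        }

    With all ten fields present neither version throws, so a persistent gap in that case usually points at benchmark methodology (JIT warm-up, dead-code elimination, measurement order) rather than a cost of entering the try block itself.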


  • Trouble with abstract generic methods

    - by DanM
    Let's say I have a class library that defines a couple of entity interfaces: public interface ISomeEntity { /* ... */ } public interface ISomeOtherEntity { /* ... */ } This library also defines an IRepository interface: public interface IRepository<TEntity> { /* ... */ } And finally, the library has an abstract class called RepositorySourceBase (see below), which the main project needs to implement. The goal of this class is to allow the base class to grab new Repository objects at runtime. Because certain repositories are needed (in this example a repository for ISomeEntity and ISomeOtherEntity), I'm trying to write generic overloads of the GetNew<TEntity>() method. The following implementation doesn't compile (the second GetNew() method gets flagged as "already defined" even though the where clause is different), but it gets at what I'm trying to accomplish: public abstract class RepositorySourceBase // This doesn't work! { public abstract Repository<TEntity> GetNew<TEntity>() where TEntity : SomeEntity; public abstract Repository<TEntity> GetNew<TEntity>() where TEntity : SomeOtherEntity; } The intended usage of this class would be something like this: public class RepositorySourceTester { public RepositorySourceTester(RepositorySourceBase repositorySource) { var someRepository = repositorySource.GetNew<ISomeEntity>(); var someOtherRepository = repositorySource.GetNew<ISomeOtherEntity>(); } } Meanwhile, over in my main project (which references the library project), I have implementations of ISomeEntity and ISomeOtherEntity: public class SomeEntity : ISomeEntity { /* ... */ } public class SomeOtherEntity : ISomeOtherEntity { /* ... */ } The main project also has an implementation for IRepository<TEntity>: public class Repository<TEntity> : IRepository<TEntity> { public Repository(string message) { } } And most importantly, it has an implementation of the abstract RepositorySourceBase: public class RepositorySource : RepositorySourceBase { public override Repository<SomeEntity> GetNew() { return new Repository<SomeEntity>("stuff only I know"); } public override Repository<SomeOtherEntity> GetNew() { return new Repository<SomeOtherEntity>("other stuff only I know"); } } Just as with RepositorySourceBase, the second GetNew() method gets flagged as "already defined". So, C# basically thinks I'm repeating the same method because there's no way to distinguish the methods from their parameters, but if you look at my usage example, it seems like I should be able to distinguish which GetNew() I want from the generic type parameter, e.g., <ISomeEntity> or <ISomeOtherEntity>. What do I need to do to get this to work?
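    A hedged workaround (the interface is mine; the rest reuses the question's types): C# overload resolution ignores where clauses, so two methods differing only by constraint really are duplicates. Encoding each entity in its own generic interface moves the distinction into the type system, where it is legal:

        public interface IRepositorySource<TEntity>
        {
            Repository<TEntity> GetNew();
        }

        public class RepositorySource :
            IRepositorySource<SomeEntity>,
            IRepositorySource<SomeOtherEntity>
        {
            Repository<SomeEntity> IRepositorySource<SomeEntity>.GetNew()
            {
                return new Repository<SomeEntity>("stuff only I know");
            }

            Repository<SomeOtherEntity> IRepositorySource<SomeOtherEntity>.GetNew()
            {
                return new Repository<SomeOtherEntity>("other stuff only I know");
            }
        }

        // The caller picks the repository by the interface it asks through:
        RepositorySource source = new RepositorySource();
        var someRepository = ((IRepositorySource<SomeEntity>)source).GetNew();

    On the library side, RepositorySourceBase can be replaced by (or made to implement) the same pair of interfaces over ISomeEntity and ISomeOtherEntity instead of declaring the two abstract GetNew overloads.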


  • chained selects with one table

    - by Owen
    I know I am going about this in an unusual way; every tutorial I've seen uses multiple tables, but due to the way the rest of my site works I would like to create a chained select which operates using a single table. My table structure is: ---------------------- |id|Category|SubCategory| |01|cat1 |subcat1 | |02|cat1 |subcat2 | |03|cat2 |subcat1 | |04|cat2 |subcat2 | ---------------------- The code I have so far looks like: <tr> <td class="shadow"><strong>Category:</strong> </td> <td class="shadow"> <select id="category" name="category" style="width:150px"> <option selected="selected" value="<?php echo $category ?>"><?php echo $category?></option> <?php include('connect.php'); $result1 = mysql_query("SELECT DISTINCT category FROM categories") or die(mysql_error()); while($row = mysql_fetch_array( $result1 )) { $category = $row['category']; echo "<option value='". $row['category'] ."'>". $row['category'] ."</option>"; } ?> </select> </td> </tr> <tr> <td class="shadow"><strong>Sub Category:</strong> </td> <td class="shadow"> <select id="sub_catgory" name="sub_category" style="width:150px;"> <option selected="selected" value="<?php echo $sub_category ?>"><?php echo $sub_category ?></option> <?php include('connect.php'); $result2 = mysql_query("SELECT sub_category FROM categories WHERE ") or die(mysql_error()); while($row = mysql_fetch_array ($result2 )){ echo "<option value='" . $row['sub_category'] . "'>". $row['sub_category']. "</option>"; } ?> </select> </td> </tr> On the second select I am not sure how to state the WHERE clause. I need it to display the subcategories which have the same category as selected in the first select. PART 2: how would I include AJAX in this to preload the data so I don't need to refresh the page? Could someone either help me finish what I've started here or point me to a good tutorial? Thanks
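    For PART 2, a hedged sketch (the file name subcats.php is mine; column names follow the queries above). A small endpoint takes the chosen category and prints matching option tags, and an onchange handler swaps them in without a page refresh - which also answers the WHERE question for the non-AJAX case (WHERE category = the selected value, escaped before it reaches the query):

        <?php
        // subcats.php - returns <option> tags for one category
        include('connect.php');
        $category = mysql_real_escape_string($_GET['category']);
        $result = mysql_query(
            "SELECT DISTINCT sub_category FROM categories
             WHERE category = '$category'") or die(mysql_error());
        while ($row = mysql_fetch_array($result)) {
            echo "<option value='" . $row['sub_category'] . "'>"
               . $row['sub_category'] . "</option>";
        }
        ?>

        <script type="text/javascript">
        document.getElementById('category').onchange = function () {
            var sel = this;
            var xhr = new XMLHttpRequest();
            xhr.onreadystatechange = function () {
                if (xhr.readyState == 4 && xhr.status == 200) {
                    // note: the markup above spells the id "sub_catgory"
                    document.getElementById('sub_catgory').innerHTML =
                        xhr.responseText;
                }
            };
            xhr.open('GET',
                'subcats.php?category=' + encodeURIComponent(sel.value), true);
            xhr.send();
        };
        </script>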


  • Oracle Enterprise Manager 11g Application Management Suite for Oracle E-Business Suite Now Available

    - by chung.wu
    Oracle Enterprise Manager 11g Application Management Suite for Oracle E-Business Suite is now available. The management suite combines features that were available in the standalone Application Management Pack for Oracle E-Business Suite and Application Change Management Pack for Oracle E-Business Suite with Oracle's market leading real user monitoring and configuration management capabilities to provide the most complete solution for managing E-Business Suite applications. The features that were available in the standalone management packs are now packaged into Oracle E-Business Suite Plug-in 4.0, which is now fully certified with Oracle Enterprise Manager 11g Grid Control. This latest plug-in extends Grid Control with E-Business Suite specific management capabilities and features enhanced change management support. In addition, this latest release of Application Management Suite for Oracle E-Business Suite also includes numerous real user monitoring improvements. General Enhancements This new release of Application Management Suite for Oracle E-Business Suite offers the following key capabilities: Oracle Enterprise Manager 11g Grid Control Support: All components of the management suite are certified with Oracle Enterprise Manager 11g Grid Control. Built-in Diagnostic Ability: This release has numerous major enhancements that provide the necessary intelligence to determine if the product has been installed and configured correctly. There are diagnostics for Discovery, Cloning, and User Monitoring that will validate if the appropriate patches, privileges, setups, and profile options have been configured. This feature improves the setup and configuration time to be up and operational. Lifecycle Automation Enhancements Application Management Suite for Oracle E-Business Suite provides a centralized view to monitor and orchestrate changes (both functional and technical) across multiple Oracle E-Business Suite systems. In this latest release, it provides even more control and flexibility in managing Oracle E-Business Suite changes.Change Management: Built-in Diagnostic Ability: This latest release has numerous major enhancements that provide the necessary intelligence to determine if the product has been installed and configured correctly. There are diagnostics for Customization Manager, Patch Manager, and Setup Manager that will validate if the appropriate patches, privileges, setups, and profile options have been configured. Enhancing the setup time and configuration time to be up and operational. Customization Manager: Multi-Node Custom Application Registration: This feature automates the process of registering and validating custom products/applications on every node in a multi-node EBS system. Public/Private File Source Mappings and E-Business Suite Mappings: File Source Mappings & E-Business Suite Mappings can be created and marked as public or private. Only the creator/owner can define/edit his/her own mappings. Users can use public mappings, but cannot edit or change settings. Test Checkout Command for Versions: This feature allows you to test/verify checkout commands at the version level within the File Source Mapping page. Prerequisite Patch Validation: You can specify prerequisite patches for Customization packages and for Release 12 Oracle E-Business Suite packages. Destination Path Population: You can now automatically populate the Destination Path for common file types during package construction. 
- OAF File Type Support: Ability to package Oracle Application Framework (OAF) customizations and deploy them across multiple Oracle E-Business Suite instances.
- Extended PLL Support: Ability to distinguish between different types of PLLs (that is, Report and Forms PLL files), providing better granularity when managing PLL objects.
- Enhanced Standard Checker: Provides a greater and more comprehensive list of coding standards that are verified during the package build process (for example, File Driver exceptions, Java checks, XML checks, SQL checks, etc.).
- HTML Package Readme: The package Readme is in HTML format and includes the file listing.
- Advanced Package Search Capabilities: The ability to use more criteria within the advanced package search (that is, Public, Last Updated by, File Source Mapping, and E-Business Suite Mapping).
- Enhanced Package Build Notifications: More detailed information on the results of a package build process, and better, more detailed troubleshooting guidance in the event of build failures.

Patch Manager:

- Staged Patches: Ability to run Patch Manager with no external internet access. Customers can download Oracle E-Business Suite patches into a shared location for Patch Manager to access and apply. Supports highly secured production environments that prohibit external internet connections.
- Support for Superseded Patches: Automatic check for superseded patches. Allows users to easily add superseded patches into the Patch Run, yielding more comprehensive and correct Patch Runs. Removes many manual and laborious tasks, freeing up Apps DBAs for higher value-added tasks.
- Automatic Primary Node Identification: Users can now specify which node is the "primary node" (that is, which node hosts the shared APPL_TOP) during the Patch Run interview process; available for Release 12 only.

Setup Manager:

- Preview Extract Results: Ability to execute an extract in "proof mode" and examine the query results to determine accuracy. Used in conjunction with the "where" clause in Advanced Filtering, this feature can provide better and more accurate fine-tuning of extracts.
- Use Uploaded Extracts in New Projects: Ability to incorporate uploaded extracts in new projects via new LOV fields in package construction. Leverages the Setup Manager repository to access extracts that have been uploaded, allowing customers to reuse uploaded extracts to provision new instances.
- Re-use Existing (that is, historical) Extracts in New Projects: Ability to incorporate existing extracts in new projects via new LOV fields in package construction. Leverages the Setup Manager repository to access point-in-time extracts (snapshots) of configuration data. Allows customers to reuse existing extracts to provision new instances, and allows comparative historical reporting of identical APIs executed at different times.
- Support for BR100 Formats: Setup Manager can now automatically produce reports in the BR100 format, giving native support for an industry-standard format.
- Concurrent Manager API Support: General Foundation now provides an API for management of Concurrent Manager configuration data, with the ability to migrate Concurrent Managers from one instance to another. Complete the setup once and never again; there is no need to redefine the Concurrent Managers.

User Experience Management Enhancements

Application Management Suite for Oracle E-Business Suite includes comprehensive capabilities for user experience management, supporting both real user and synthetic transaction based user monitoring techniques.
This latest release of the management suite includes numerous improvements in real user monitoring support.

KPI Reporting:

- Configurable decimal precision for reporting of KPI and SLA values. By default, this is two decimal places.
- KPI numerator and denominator information. It is now possible to view KPI numerator and denominator information, and to have it available for export.

Content Messages Processing:

- The application content message facility has been extended to distinguish between notifications and errors. In addition, it is now possible to specify matching rules that can be used to refine a selected content message specification. Note that this is only available for XPath-based (not literal) message contents.

Data Export:

- The enriched data export facility has been significantly enhanced to provide improved performance and accessibility. Data is no longer stored within XML-based files, but within the Reporter database; however, it is possible to configure an alternative database for its storage. Access to the export data is through SQL. With this enhancement, it is now easier than ever to use tools such as Oracle Business Intelligence Enterprise Edition to analyze correlated data collected from real user monitoring and business data sources.

SNMP Traps for System Events:

- Previously, the SNMP notification facility was only available for KPI alerting. It has now been extended to support the generation of SNMP traps for system events, to provide external health monitoring of the RUEI system processes.

Performance Improvements:

- Enhanced dashboard performance. The dashboard facility has been enhanced to support the parallel loading of items. In the case of dashboards containing large numbers of items, this can result in a significant performance improvement.
- Initial period selection within Data Browser and reports. The User Preferences facility has been extended to allow you to specify the initial period selection when first entering the Data Browser or reports facility. The default is the last hour.
- Performance improvement when querying the all-sessions group.

Technical Prerequisites, Download and Installation Instructions

The Linux version of the plug-in is available for immediate download from Oracle Technology Network or Oracle eDelivery. For specific information regarding technical prerequisites, product download and installation, please refer to My Oracle Support note 1224313.1. The following certifications are in progress:

* Oracle Solaris on SPARC (64-bit) (9, 10)
* HP-UX Itanium (11.23, 11.31)
* HP-UX PA-RISC (64-bit) (11.23, 11.31)
* IBM AIX on Power Systems (64-bit) (5.3, 6.1)

    Read the article

  • Creating packages in code - Package Configurations

Continuing my theme of building various types of packages in code, this example shows how to build a package with package configurations. Incidentally, it also shows you how to add a variable and a connection. It covers the five most common configuration types:

- Configuration File
- Indirect Configuration File
- SQL Server
- Indirect SQL Server
- Environment Variable

For a general overview try the SQL Server Books Online Package Configurations topic. The sample uses a simple helper function, ApplyConfig, to create or update a configuration, although in the example we will only ever create. The most useful knowledge is the configuration string (Configuration.ConfigurationString) that you need to set for each configuration type:

Configuration File: The full path and file name of an XML configuration file. The file can contain one or more configurations, and includes the target path and new value to set.

Indirect Configuration File: An environment variable, the value of which contains the full path and file name of an XML configuration file as per the Configuration File type described above.

SQL Server: A three-part configuration string, with each part being quote delimited and separated by a semi-colon.
-- The first part is the connection manager name. The connection tells you which server and database to look for the configuration table.
-- The second part is the name of the configuration table. The table is of a standard format; use the Package Configuration Wizard to help create an example, or see the sample script files below. The table contains one or more rows, or configuration items, each with a target path and new value.
-- The third and final part is the optional filter name. A configuration table can contain multiple configurations, and the filter is a literal value that can be used to group items together and act as a filter clause when configurations are being read. If you do not need a filter, just leave the value empty.

Indirect SQL Server: An environment variable, the value of which is the three-part configuration string as per the SQL Server type described above.

Environment Variable: An environment variable, the value of which is the value to set in the package. This is slightly different to the other examples, as the configuration definition in the package also includes the target information. In our ApplyConfig function this is the only example that actually supplies a target value for the Configuration.PackagePath property. The path is an XPath-style path for the target property, \Package.Variables[User::Variable].Properties[Value], the equivalent of which can be seen in the screenshot below, with the object being our variable called Variable, and the property to set being the Value property of that variable object.

The configurations as seen when opening the generated package in BIDS:

The sample code creates the package, adds a variable and connection manager, enables configurations, and then adds our example configurations. The package is then saved to disk, useful for checking the package and testing, before finally executing, just to prove it is valid. There are some external resources used here, namely some environment variables and a table; see below for more details.
namespace Konesans.Dts.Samples
{
    using System;
    using Microsoft.SqlServer.Dts.Runtime;

    public class PackageConfigurations
    {
        public void CreatePackage()
        {
            // Create a new package
            Package package = new Package();
            package.Name = "ConfigurationSample";

            // Add a variable, the target for our configurations
            package.Variables.Add("Variable", false, "User", 0);

            // Add a connection, for SQL configurations
            // Add the SQL OLE-DB connection
            ConnectionManager connectionManagerOleDb = package.Connections.Add("OLEDB");
            connectionManagerOleDb.Name = "SQLConnection";
            connectionManagerOleDb.ConnectionString =
                "Provider=SQLOLEDB.1;Data Source=(local);Initial Catalog=master;Integrated Security=SSPI;";

            // Add our example configurations; first we must enable the package setting
            package.EnableConfigurations = true;

            // Direct configuration file, see sample file
            this.ApplyConfig(package, "Configuration File", DTSConfigurationType.ConfigFile,
                "C:\\Temp\\XmlConfig.dtsConfig", string.Empty);

            // Indirect configuration file, the environment variable XmlConfigFileEnvironmentVariable
            // contains the path to the configuration file, e.g. C:\Temp\XmlConfig.dtsConfig
            this.ApplyConfig(package, "Indirect Configuration File", DTSConfigurationType.IConfigFile,
                "XmlConfigFileEnvironmentVariable", string.Empty);

            // Direct SQL Server configuration, uses the SQLConnection package connection to read
            // configurations from the [dbo].[SSIS Configurations] table, with a filter of "SampleFilter"
            this.ApplyConfig(package, "SQL Server", DTSConfigurationType.SqlServer,
                "\"SQLConnection\";\"[dbo].[SSIS Configurations]\";\"SampleFilter\";", string.Empty);

            // Indirect SQL Server configuration, the environment variable "SQLServerEnvironmentVariable"
            // contains the configuration string e.g. "SQLConnection";"[dbo].[SSIS Configurations]";"SampleFilter";
            this.ApplyConfig(package, "Indirect SQL Server", DTSConfigurationType.ISqlServer,
                "SQLServerEnvironmentVariable", string.Empty);

            // Direct environment variable, the value of the EnvironmentVariable environment variable is
            // applied to the target property, the value of the "User::Variable" package variable
            this.ApplyConfig(package, "EnvironmentVariable", DTSConfigurationType.EnvVariable,
                "EnvironmentVariable", "\\Package.Variables[User::Variable].Properties[Value]");

#if DEBUG
            // Save package to disk, DEBUG only
            new Application().SaveToXml(String.Format(@"C:\Temp\{0}.dtsx", package.Name), package, null);
            Console.WriteLine(@"C:\Temp\{0}.dtsx", package.Name);
#endif

            // Execute package
            package.Execute();

            // Basic check for warnings
            foreach (DtsWarning warning in package.Warnings)
            {
                Console.WriteLine("WarningCode : {0}", warning.WarningCode);
                Console.WriteLine("  SubComponent : {0}", warning.SubComponent);
                Console.WriteLine("  Description : {0}", warning.Description);
                Console.WriteLine();
            }

            // Basic check for errors
            foreach (DtsError error in package.Errors)
            {
                Console.WriteLine("ErrorCode : {0}", error.ErrorCode);
                Console.WriteLine("  SubComponent : {0}", error.SubComponent);
                Console.WriteLine("  Description : {0}", error.Description);
                Console.WriteLine();
            }

            package.Dispose();
        }

        /// <summary>
        /// Add or update a package configuration.
        /// </summary>
        /// <param name="package">The package.</param>
        /// <param name="name">The configuration name.</param>
        /// <param name="type">The type of configuration.</param>
        /// <param name="setting">The configuration setting.</param>
        /// <param name="target">The target of the configuration, leave blank if not required.</param>
        internal void ApplyConfig(Package package, string name, DTSConfigurationType type, string setting, string target)
        {
            Configurations configurations = package.Configurations;
            Configuration configuration;

            if (configurations.Contains(name))
            {
                configuration = configurations[name];
            }
            else
            {
                configuration = configurations.Add();
            }

            configuration.Name = name;
            configuration.ConfigurationType = type;
            configuration.ConfigurationString = setting;
            configuration.PackagePath = target;
        }
    }
}

The following table lists the environment variables required for the full example to work, along with some sample values:

- EnvironmentVariable: 1
- SQLServerEnvironmentVariable: "SQLConnection";"[dbo].[SSIS Configurations]";"SampleFilter";
- XmlConfigFileEnvironmentVariable: C:\Temp\XmlConfig.dtsConfig

Sample code, package and configuration file: ConfigurationApplication.cs, ConfigurationSample.dtsx, XmlConfig.dtsConfig
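For a quick sanity check on the configurations that ApplyConfig added, you can load the saved package back and list them, rather than opening it in BIDS. This is a minimal sketch rather than part of the original sample: it assumes the code above has already run and saved C:\Temp\ConfigurationSample.dtsx, and the ConfigurationCheck class name is purely illustrative.

namespace Konesans.Dts.Samples
{
    using System;
    using Microsoft.SqlServer.Dts.Runtime;

    public class ConfigurationCheck
    {
        public static void Main()
        {
            // Load the package previously saved by CreatePackage (assumed path)
            Application application = new Application();
            Package package = application.LoadPackage(@"C:\Temp\ConfigurationSample.dtsx", null);

            // List each configuration that was applied to the package
            foreach (Configuration configuration in package.Configurations)
            {
                Console.WriteLine("Name   : {0}", configuration.Name);
                Console.WriteLine("Type   : {0}", configuration.ConfigurationType);
                Console.WriteLine("String : {0}", configuration.ConfigurationString);
                Console.WriteLine("Path   : {0}", configuration.PackagePath);
                Console.WriteLine();
            }

            package.Dispose();
        }
    }
}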

    Read the article

  • Pre-filtering and shaping OData feeds using WCF Data Services and the Entity Framework - Part 1

    - by rajbk
The Open Data Protocol, referred to as OData, is a new data-sharing standard that breaks down silos and fosters an interoperative ecosystem for data consumers (clients) and producers (services) that is far more powerful than currently possible. It enables more applications to make sense of a broader set of data, and helps every data service and client add value to the whole ecosystem. WCF Data Services (previously known as ADO.NET Data Services) was the first Microsoft technology to support the Open Data Protocol, in Visual Studio 2008 SP1. It provides developers with client libraries for .NET, Silverlight, AJAX, PHP and Java. Microsoft now also supports OData in SQL Server 2008 R2, Windows Azure Storage, Excel 2010 (through PowerPivot), and SharePoint 2010. Many other applications are in the works.*

This post walks you through how to create an OData feed, define a shape for the data and pre-filter the data using Visual Studio 2010, WCF Data Services and the Entity Framework. A sample project is attached at the bottom of Part 2 of this post: Pre-filtering and shaping OData feeds using WCF Data Services and the Entity Framework - Part 2

Create the Web Application

File –› New –› Project, select "ASP.NET Empty Web Application".

Add the Entity Data Model

Right click on the Web Application in the Solution Explorer and select "Add New Item...". Select "ADO.NET Entity Data Model" under "Data". Name the model "Northwind" and click "Add". In the "Choose Model Contents" screen, select "Generate Model From Database" and click "Next". Define a connection to your database containing the Northwind database in the next screen. We are going to expose the Products table through our OData feed, so select "Products" in the "Choose Your Database Objects" screen. Click "Finish". We are done creating our Entity Data Model. Save the Northwind.edmx file created.

Add the WCF Data Service

Right click on the Web Application in the Solution Explorer and select "Add New Item...". Select "WCF Data Service" from the list and call the service "DataService" (creative, huh?). Click "Add".

Enable Access to the Data Service

Open the DataService.svc.cs class. The class is well commented and instructs us on the next steps.

public class DataService : DataService< /* TODO: put your data source class name here */ >
{
    // This method is called only once to initialize service-wide policies.
    public static void InitializeService(DataServiceConfiguration config)
    {
        // TODO: set rules to indicate which entity sets and service operations are visible, updatable, etc.
        // Examples:
        // config.SetEntitySetAccessRule("MyEntityset", EntitySetRights.AllRead);
        // config.SetServiceOperationAccessRule("MyServiceOperation", ServiceOperationRights.All);
        config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
    }
}

Replace the comment that starts with "/* TODO:" with "NorthwindEntities" (the entity container name of the model we created earlier). WCF Data Services is locked down by default, FTW! No data is exposed without you explicitly setting it: you have to specify which entity sets you wish to expose, and what rights are allowed, by using SetEntitySetAccessRule. SetServiceOperationAccessRule, on the other hand, sets rules for a specified operation. Let us define an access rule to expose the Products entity we created earlier. We use EntitySetRights.AllRead since we want to give read-only access. Our modified code is shown below.
public class DataService : DataService<NorthwindEntities>
{
    public static void InitializeService(DataServiceConfiguration config)
    {
        config.SetEntitySetAccessRule("Products", EntitySetRights.AllRead);
        config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
    }
}

We are done setting up our OData feed! Compile your project. Right click on DataService.svc and select "View in Browser" to see the OData feed. To view the feed in IE, you must make sure that "Feed Reading View" is turned off. You set this under Tools –› Internet Options –› Content tab. If you navigate to "Products", you should see the Products feed. Note also that URIs are case sensitive, i.e. Products works but products doesn't.

Filtering our data

OData has a set of system query options you can use to perform common operations against data exposed by the model. For example, to see only Products in CategoryID 2, we can use the following request:

/DataService.svc/Products?$filter=CategoryID eq 2

At the time of this writing, the supported options are $orderby, $top, $skip, $filter, $expand, $format†, $select and $inlinecount.

Pre-filtering our data using Query Interceptors

The Products feed currently returns all Products. We want to change that so that it contains only Products that have not been discontinued. WCF Data Services introduces the concept of interceptors, which allow us to inject custom validation/policy logic into the request/response pipeline of a WCF data service. We will use a QueryInterceptor to pre-filter the data so that it returns only Products that are not discontinued. To create a QueryInterceptor, write a method that returns an Expression<Func<T, bool>> and mark it with the QueryInterceptor attribute as shown below.

[QueryInterceptor("Products")]
public Expression<Func<Product, bool>> OnReadProducts()
{
    return o => o.Discontinued == false;
}

Viewing the feed after compilation will only show products that have not been discontinued. We can also confirm this by looking at the WHERE clause in the SQL generated by the Entity Framework:

SELECT
    [Extent1].[ProductID] AS [ProductID],
    ...
    [Extent1].[Discontinued] AS [Discontinued]
FROM [dbo].[Products] AS [Extent1]
WHERE 0 = [Extent1].[Discontinued]

Other examples of Query/Change interceptors can be seen here, including an example to filter data based on the identity of the authenticated user. We are done pre-filtering our data. In the next part of this post, we will see how to shape our data: Pre-filtering and shaping OData feeds using WCF Data Services and the Entity Framework - Part 2. A client-side consumption sketch follows the footnotes below.

Foot Notes

* http://msdn.microsoft.com/en-us/data/aa937697.aspx
† $format did not work for me. The way to get a Json response is to include the following in the request header when making the request: "Accept: application/json, text/javascript, */*". This is easily done with most JavaScript libraries.
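To round out the example, here is a minimal sketch of consuming the feed from a .NET client using the WCF Data Services client library (System.Data.Services.Client). The service URI and the hand-written Product type are assumptions for illustration; in a real project you would generate the client types with "Add Service Reference" or DataSvcUtil.exe. Note how the LINQ Where clause is translated by the client library into the same $filter system query option we used above.

using System;
using System.Data.Services.Client;
using System.Data.Services.Common;
using System.Linq;

// Hand-written client type for illustration; normally generated from the
// service metadata. The DataServiceKey must match the entity key on the service.
[DataServiceKey("ProductID")]
public class Product
{
    public int ProductID { get; set; }
    public string ProductName { get; set; }
    public int? CategoryID { get; set; }
    public bool Discontinued { get; set; }
}

public class FeedClient
{
    public static void Main()
    {
        // Assumed service address; adjust to wherever DataService.svc is hosted.
        DataServiceContext context = new DataServiceContext(
            new Uri("http://localhost:1234/DataService.svc"));

        // The Where clause below is translated into the request URI
        // /DataService.svc/Products?$filter=CategoryID eq 2
        var query = context.CreateQuery<Product>("Products")
            .Where(p => p.CategoryID == 2);

        foreach (Product product in query)
        {
            Console.WriteLine("{0}: {1}", product.ProductID, product.ProductName);
        }
    }
}

Because the query interceptor runs on the service, the client will only ever see products that have not been discontinued, whatever filter it sends.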

    Read the article
