Search Results

Search found 17537 results on 702 pages for 'doctrine query'.

  • Nhibernate one-to-many with table per subclass

    - by Wayne
    I am customizing N2CMS's database structure, and met with an issue. The two classes are listed below. public class Customer : ContentItem { public IList<License> Licenses { get; set; } } public class License : ContentItem { public Customer Customer { get; set; } } The nhibernate mapping are as follows. <class name="N2.ContentItem,N2" table="n2item"> <cache usage="read-write" /> <id name="ID" column="ID" type="Int32" unsaved-value="0" access="property"> <generator class="native" /> </id> <discriminator column="Type" type="String" /> </class> <subclass name="My.Customer,My" extends="N2.ContentItem,N2" discriminator-value="Customer"> <join table="Customer"> <key column="ItemID" /> <bag name="Licenses" generic="true" inverse="true"> <key column="CustomerID" /> <one-to-many class="My.License,My"/> </bag> </join> </subclass> <subclass name="My.License,My" extends="N2.ContentItem,N2" discriminator-value="License"> <join table="License" fetch="select"> <key column="ItemID" /> <many-to-one name="Customer" column="CustomerID" class="My.Customer,My" not-null="false" /> </join> </subclass> Then, when get an instance of Customer, the customer.Licenses is always empty, but actually there are licenses in the database for the customer. When I check the nhibernate log file, I find that the SQL query is like: SELECT licenses0_.CustomerID as CustomerID1_, licenses0_.ID as ID1_, licenses0_.ID as ID2_0_, licenses0_1_.CustomerID as CustomerID7_0_, FROM n2item licenses0_ inner join License licenses0_1_ on licenses0_.ID = licenses0_1_.ItemID WHERE licenses0_.CustomerID = 12 /* @p0 */ It seems that nhibernate believes that the CustomerID is in the 'n2item' table. I don't know why, but to make it work, I think the SQL should be something like this. SELECT licenses0_.ID as ID1_, licenses0_.ID as ID2_0_, licenses0_1_.CustomerID as CustomerID7_0_, FROM n2item licenses0_ inner join License licenses0_1_ on licenses0_.ID = licenses0_1_.ItemID WHERE licenses0_1_.CustomerID = 12 /* @p0 */ Could any one point out what's wrong with my mappings? And how can I get the correct licenses of one customer? Thanks in advance.

  • Semi-complex aggregate select statement confusion

    - by Ian Henry
    Alright, this problem is a little complicated, so bear with me. I have a table full of data. One of the table columns is an EntryDate. There can be multiple entries per day. However, I want to select all rows that are the latest entry on their respective days, and I want to select all the columns of said table. One of the columns is a unique identifier column, but it is not the primary key (I have no idea why it's there; this is a pretty old system). For purposes of demonstration, say the table looks like this: create table ExampleTable ( ID int identity(1,1) not null, PersonID int not null, StoreID int not null, Data1 int not null, Data2 int not null, EntryDate datetime not null ) The primary key is on PersonID and StoreID, which logically defines uniqueness. Now, like I said, I want to select all the rows that are the latest entries on that particular day (for each Person-Store combination). This is pretty easy: --Figure 1 select PersonID, StoreID, max(EntryDate) from ExampleTable group by PersonID, StoreID, dbo.dayof(EntryDate) Where dbo.dayof() is a simple function that strips the time component from a datetime. However, doing this loses the rest of the columns! I can't simply include the other columns, because then I'd have to group by them, which would produce the wrong results (especially since ID is unique). I have found a dirty hack that will do what I want, but there must be a better way -- here's my current solution: select cast(null as int) as ID, PersonID, StoreID, cast(null as int) as Data1, cast(null as int) as Data2, max(EntryDate) as EntryDate into #StagingTable from ExampleTable group by PersonID, StoreID, dbo.dayof(EntryDate) update Target set ID = Source.ID, Data1 = Source.Data1, Data2 = Source.Data2, from #StagingTable as Target inner join ExampleTable as Source on Source.PersonID = Target.PersonID and Source.StoreID = Target.StoreID and Source.EntryDate = Target.EntryDate This gets me the correct data in #StagingTable but, well, look at it! Creating a table with null values, then doing an update to get the values back -- surely there's a better way to do this? A single statement that will get me all the values the first time? It is my belief that the correct join on that original select (Figure 1) would do the trick, like a self-join or something... but how do you do that with the group by clause? I cannot find the right syntax to make the query execute. I am pretty new with SQL, so it's likely that I'm missing something obvious. Any suggestions? (Working in T-SQL, if it makes any difference)
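
    A possible shape for such a query — a sketch only, built from the table and column names in the question and the asker's dbo.dayof() helper — is to join the table back to the grouped select, or on SQL Server 2005 and later to rank the rows with ROW_NUMBER():

        -- Self-join against the per-day maximums (the "Figure 1" select as a derived table)
        SELECT e.ID, e.PersonID, e.StoreID, e.Data1, e.Data2, e.EntryDate
        FROM ExampleTable AS e
        INNER JOIN (
            SELECT PersonID, StoreID, MAX(EntryDate) AS MaxEntryDate
            FROM ExampleTable
            GROUP BY PersonID, StoreID, dbo.dayof(EntryDate)
        ) AS latest
            ON  latest.PersonID     = e.PersonID
            AND latest.StoreID      = e.StoreID
            AND latest.MaxEntryDate = e.EntryDate;

        -- Equivalent ROW_NUMBER() form: keep the newest row per person/store/day
        ;WITH Ranked AS (
            SELECT ID, PersonID, StoreID, Data1, Data2, EntryDate,
                   ROW_NUMBER() OVER (
                       PARTITION BY PersonID, StoreID, dbo.dayof(EntryDate)
                       ORDER BY EntryDate DESC) AS rn
            FROM ExampleTable
        )
        SELECT ID, PersonID, StoreID, Data1, Data2, EntryDate
        FROM Ranked
        WHERE rn = 1;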

  • Android: How can I access email addresses in Android?

    - by Maxood
    I have the following code through which i am able to retrieve phone numbers. Somehow , i am not able to retrieve email addresses by using android.provider.Contacts.People API. Any ideas? import android.app.AlertDialog; import android.app.ExpandableListActivity; import android.content.ContentUris; import android.content.Context; import android.database.Cursor; import android.net.Uri; import android.os.Bundle; import android.provider.Contacts.People; import android.view.View; import android.widget.ExpandableListAdapter; import android.widget.SimpleCursorTreeAdapter; import android.widget.TextView; import android.widget.ExpandableListView.OnChildClickListener; public class ShowContacts extends ExpandableListActivity implements OnChildClickListener { private int mGroupIdColumnIndex; private String mPhoneNumberProjection[] = new String[] { People.Phones._ID, People.NUMBER // CHANGE HERE }; private ExpandableListAdapter mAdapter; @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); // Query for people Cursor groupCursor = managedQuery(People.CONTENT_URI, new String[] {People._ID, People.NAME}, null, null, null); // Cache the ID column index mGroupIdColumnIndex = groupCursor.getColumnIndexOrThrow(People._ID); // Set up our adapter mAdapter = new MyExpandableListAdapter(groupCursor, this, android.R.layout.simple_expandable_list_item_1, android.R.layout.simple_expandable_list_item_1, new String[] {People.NAME}, // Name for group layouts new int[] {android.R.id.text1}, new String[] {People.NUMBER}, // AND CHANGE HERE new int[] {android.R.id.text1}); setListAdapter(mAdapter); } public class MyExpandableListAdapter extends SimpleCursorTreeAdapter { public MyExpandableListAdapter(Cursor cursor, Context context, int groupLayout, int childLayout, String[] groupFrom, int[] groupTo, String[] childrenFrom, int[] childrenTo) { super(context, cursor, groupLayout, groupFrom, groupTo, childLayout, childrenFrom, childrenTo); } @Override protected Cursor getChildrenCursor(Cursor groupCursor) { // Given the group, we return a cursor for all the children within that group // Return a cursor that points to this contact's phone numbers Uri.Builder builder = People.CONTENT_URI.buildUpon(); ContentUris.appendId(builder, groupCursor.getLong(mGroupIdColumnIndex)); builder.appendEncodedPath(People.Phones.CONTENT_DIRECTORY); Uri phoneNumbersUri = builder.build(); return managedQuery(phoneNumbersUri, mPhoneNumberProjection, null, null, null); } } @Override public boolean onChildClick(android.widget.ExpandableListView parent, View v, int groupPosition, int childPosition, long id) { AlertDialog dialog = new AlertDialog.Builder(ShowContacts.this) .setMessage(((TextView) v).getText().toString()) .setPositiveButton("OK", null).create(); dialog.show(); return true; } }
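
    For e-mail addresses, the legacy android.provider.Contacts API keeps the data in the ContactMethods tables rather than People.Phones. A rough sketch of a children-cursor query for the e-mail rows of one contact is below; the URI and column names are taken from the deprecated ContactMethods documentation and should be checked against the API level in use:

        // Hedged sketch: query the (deprecated) ContactMethods provider for the
        // e-mail rows that belong to a single contact.
        import android.database.Cursor;
        import android.provider.Contacts.ContactMethods;

        private final String[] mEmailProjection = new String[] {
            ContactMethods._ID,
            ContactMethods.DATA      // the address itself for e-mail rows
        };

        protected Cursor getEmailCursor(long personId) {
            return managedQuery(
                ContactMethods.CONTENT_EMAIL_URI,           // e-mail methods only
                mEmailProjection,
                ContactMethods.PERSON_ID + " = ?",
                new String[] { String.valueOf(personId) },
                null);
        }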

  • ASP.Net Web Farm Monitoring

    - by cisellis
    I am looking for suggestions on doing some simple monitoring of an ASP.Net web farm as close to real-time as possible. The objectives of this question are to: Identify the best way to monitor several Windows Server production boxes during short (minutes long) period of ridiculous load Receive near-real-time feedback on a few key metrics about each box. These are simple metrics available via WMI such as CPU, Memory and Disk Paging. I am defining my time constraints as soon as possible with 120 seconds delayed being the absolute upper limit. Monitor whether any given box is up (with "up" being defined as responding web requests in a reasonable amount of time) Here are more details, things I've tried, etc. I am not interested in logging. We have logging solutions in place. I have looked at solutions such as ELMAH which don't provide much in the way of hardware monitoring and are not visible across an entire web farm. ASP.Net Health Monitoring is too broad, focuses too much on logging and is not acceptable for deep analysis. We are on Amazon Web Services and we have looked into CloudWatch. It looks great but messages in the forum indicate that the metrics are often a few minutes behind, with one thread citing 2 minutes as the absolute soonest you could expect to receive the feedback. This would be good to have for later analysis but does not help us real-time Stuff like JetBrains profiler is good for testing but again, not helpful during real-time monitoring. The closest out-of-box solution I've seen is Nagios which is free and appears to measure key indicators on any kind of box, including Windows. However, it appears to require a Linux box to run itself on and a good deal of manual configuration. I'd prefer to not spend my time mining config files and then be up a creek when it fails in production since Linux is not my main (or even secondary) environment. Are there any out-of-box solutions that I am missing? Obviously a windows-based solution that is easy to setup is ideal. I don't require many bells and whistles. In the absence of an out-of-box solution, it seems easy for me to write something simple to handle what I need. I've been thinking a simple client-server setup where the server requests a few WMI metrics from each client over http and sticks them in a database. We could then monitor the metrics via a query or a dashboard or something. If the client doesn't respond, it's effectively down. Any problems with this, best practices, or other ideas? Thanks for any help/feedback.
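
    For the roll-your-own option sketched at the end, the WMI sampling itself is only a few lines; a minimal sketch (standard WMI classes, with the HTTP and database plumbing left out) might look like:

        // Minimal sketch: read CPU load and free memory through WMI.
        // Requires a reference to System.Management.dll.
        using System;
        using System.Management;

        class WmiSampler
        {
            static void Main()
            {
                // Per-processor CPU load
                var cpu = new ManagementObjectSearcher(
                    "SELECT LoadPercentage FROM Win32_Processor");
                foreach (ManagementObject mo in cpu.Get())
                    Console.WriteLine("CPU load: {0}%", mo["LoadPercentage"]);

                // Free physical memory, reported in KB
                var os = new ManagementObjectSearcher(
                    "SELECT FreePhysicalMemory FROM Win32_OperatingSystem");
                foreach (ManagementObject mo in os.Get())
                    Console.WriteLine("Free memory: {0} KB", mo["FreePhysicalMemory"]);
            }
        }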

  • XSLT broken: pattern does not match

    - by krisvandenbergh
    I'm trying to query an xml file using the following xslt: <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:rdfs="http://www.w3.org/2000/01/rdf-schema#" xmlns:bpmn="http://dkm.fbk.eu/index.php/BPMN_Ontology"> <!-- Participants --> <xsl:template match="/"> <html> <body> <table> <xsl:for-each select="Package/Participants/Participant"> <tr> <td><xsl:value-of select="ParticipantType" /></td> <td><xsl:value-of select="Description" /></td> </tr> </xsl:for-each> </table> </body> </html> </xsl:template> </xsl:stylesheet> Here's the contents of the xml file: <?xml version="1.0" encoding="utf-8"?> <?xml-stylesheet type="text/xsl" href="xpdl2bpmn.xsl"?> <Package xmlns="http://www.wfmc.org/2008/XPDL2.1" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xmlns:xsd="http://www.w3.org/2001/XMLSchema" Id="25ffcb89-a9bf-40bc-8f50-e5afe58abda0" Name="1 price setting" OnlyOneProcess="false"> <PackageHeader> <XPDLVersion>2.1</XPDLVersion> <Vendor>BizAgi Process Modeler.</Vendor> <Created>2010-04-24T10:49:45.3442528+02:00</Created> <Description>1 price setting</Description> <Documentation /> </PackageHeader> <RedefinableHeader> <Author /> <Version /> <Countrykey>CO</Countrykey> </RedefinableHeader> <ExternalPackages /> <Participants> <Participant Id="008af9a6-fdc0-45e6-af3f-984c3e220e03" Name="customer"> <ParticipantType Type="RESOURCE" /> <Description /> </Participant> <Participant Id="1d2fd8b4-eb88-479b-9c1d-7fe6c45b910e" Name="clerk"> <ParticipantType Type="ROLE" /> <Description /> </Participant> </Participants> </Package> Despite, the simple pattern, the foreach doesn't work. What is wrong with Package/Participants/Participant ? What do I miss here? Thanks a lot!
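
    For reference, the usual cause of an empty result with this input is the default namespace on <Package> (http://www.wfmc.org/2008/XPDL2.1): unprefixed names in XPath 1.0 select elements in no namespace, so Package/Participants/Participant matches nothing. A sketch of the stylesheet with the XPDL namespace bound to a prefix:

        <!-- Sketch: bind the document's default namespace to a prefix and use it
             in every step of the select expressions. -->
        <xsl:stylesheet version="1.0"
            xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
            xmlns:xpdl="http://www.wfmc.org/2008/XPDL2.1">
          <xsl:template match="/">
            <html>
              <body>
                <table>
                  <xsl:for-each select="xpdl:Package/xpdl:Participants/xpdl:Participant">
                    <tr>
                      <!-- ParticipantType carries its value in the Type attribute -->
                      <td><xsl:value-of select="xpdl:ParticipantType/@Type"/></td>
                      <td><xsl:value-of select="xpdl:Description"/></td>
                    </tr>
                  </xsl:for-each>
                </table>
              </body>
            </html>
          </xsl:template>
        </xsl:stylesheet>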

  • How to find the data key on CheckedChanged event of checkbox in ListView in ASP.NET?

    - by subodh
    I am using a ListView with a Label and a CheckBox in its ItemTemplate. Whenever the user clicks the checkbox, the value should be updated in a table; the ListView uses DataKeys, and the update should be keyed on the data key value. The query is: string updateQuery = "UPDATE [TABLE] SET [COLUMN] = " + Convert.ToInt32(chk.Checked) + " WHERE PK_ID =" + dataKey + " "; I also want some help displaying the result as it is inside the table: if the value of the column for a particular PK_ID is 1, then the checkbox should be checked. Here is the code snippet: <asp:ListView ID="lvFocusArea" runat="server" DataKeyNames="PK_ID" OnItemDataBound="lvFocusArea_ItemDataBound"> <LayoutTemplate> <table border="0" cellpadding="1" width="400px"> <tr style="background-color: #E5E5FE"> <th align="left"> Focus Area </th> <th> Is Current Focused </th> </tr> <tr id="itemPlaceholder" runat="server"> </tr> </table> </LayoutTemplate> <ItemTemplate> <tr> <td width="80%"> <asp:Label ID="lblFocusArea" runat="server" Text=""><%#Eval("FOCUS_AREA_NAME") %></asp:Label> </td> <td align="center" width="20%"> <asp:CheckBox ID="chkFocusArea" runat="server" OnCheckedChanged="chkFocusArea_CheckedChanged" AutoPostBack="true" /> </td> </tr> </ItemTemplate> <AlternatingItemTemplate> <tr style="background-color: #EFEFEF"> <td> <asp:Label ID="lblFocusArea" runat="server" Text=""><%#Eval("FOCUS_AREA_NAME") %></asp:Label> </td> <td align="center"> <asp:CheckBox ID="chkFocusArea" runat="server" OnCheckedChanged="chkFocusArea_CheckedChanged" AutoPostBack="true" /> </td> </tr> </AlternatingItemTemplate> <SelectedItemTemplate> <td> item selected </td> </SelectedItemTemplate> </asp:ListView> Help me.
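
    A common pattern here — a sketch only, assuming a page-level connectionString field and the markup above — is to walk from the CheckBox up to its ListViewDataItem, read the key from the ListView's DataKeys collection, and run the UPDATE with parameters instead of string concatenation:

        // Sketch: recover the data key for the row whose CheckBox fired, then
        // run a parameterised UPDATE. Assumes: using System.Data.SqlClient;
        // and a connectionString field defined elsewhere on the page.
        protected void chkFocusArea_CheckedChanged(object sender, EventArgs e)
        {
            CheckBox chk = (CheckBox)sender;
            ListViewDataItem item = (ListViewDataItem)chk.NamingContainer;
            int pkId = (int)lvFocusArea.DataKeys[item.DisplayIndex].Value;

            using (SqlConnection con = new SqlConnection(connectionString))
            using (SqlCommand cmd = new SqlCommand(
                "UPDATE [TABLE] SET [COLUMN] = @checked WHERE PK_ID = @id", con))
            {
                cmd.Parameters.AddWithValue("@checked", Convert.ToInt32(chk.Checked));
                cmd.Parameters.AddWithValue("@id", pkId);
                con.Open();
                cmd.ExecuteNonQuery();
            }
        }

        // For showing the saved value, the checkbox can be pre-set while binding;
        // "COLUMN" stands in for the real field name in the data source.
        protected void lvFocusArea_ItemDataBound(object sender, ListViewItemEventArgs e)
        {
            if (e.Item.ItemType != ListViewItemType.DataItem) return;
            CheckBox chk = (CheckBox)e.Item.FindControl("chkFocusArea");
            chk.Checked = Convert.ToBoolean(
                DataBinder.Eval(((ListViewDataItem)e.Item).DataItem, "COLUMN"));
        }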

  • While loops within while loops and output php?

    - by NovacTownCode
    I have a while loop to show the replies for a post on my website. The value for parentID used in the query is $post['postID'] which is an array of details for the post being viewed. As seen below it outputs the following (each subject is a link to view the full post) $q = $dbc -> prepare("SELECT * FROM boardposts WHERE parentID = ?"); $q -> execute(array($post['postID'])); while ($postReply = $q -> fetch(PDO::FETCH_ASSOC)) { echo '<p><a href="http://www.example.com/boards?topic=' . $_GET['topic'] . '&amp;view=' . $postReply['postID'] . '">' . $postReply['subject'] . '</a>'; } This currently outputs something along the lines of, Replies To This Message: subject 1 subject 2 subject 3 subject 4 Is there a way in which I can also in the list include replies to the replies, something along the lines of, Replies To This Message: subject 1          subject 1 reply          subject 1 reply                  subject 1 reply reply subject 2 subject 3          subject 3 reply          subject 3 reply                  subject 3 reply reply subject 4          subject 4 reply subject 5 subject 6          subject 6 reply                  subject 4 reply reply I understand all the indenting can be with css, but am stuck as to how to pull the data from the mysql database and in the correct order, I tried while loops within while loops, but that involved queries inside while loops, which is bad! Thanks for your input!
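
    One way to avoid running queries inside loops is to fetch every reply for the thread in a single query, group the rows by parent in PHP, and render the tree recursively. A rough sketch — assuming the table has some column (called topicID here, which is an assumption) that identifies which thread a post belongs to:

        <?php
        // Sketch: one query for the whole thread, then a parent => children map.
        $q = $dbc->prepare("SELECT * FROM boardposts WHERE topicID = ?"); // topicID is hypothetical
        $q->execute(array($_GET['topic']));

        $children = array();
        while ($row = $q->fetch(PDO::FETCH_ASSOC)) {
            $children[$row['parentID']][] = $row;
        }

        function showReplies($parentId, $children, $depth = 0) {
            if (empty($children[$parentId])) {
                return;
            }
            foreach ($children[$parentId] as $reply) {
                echo '<p style="margin-left:' . (20 * $depth) . 'px">'
                   . '<a href="http://www.example.com/boards?topic=' . $_GET['topic']
                   . '&amp;view=' . $reply['postID'] . '">'
                   . htmlspecialchars($reply['subject']) . '</a></p>';
                showReplies($reply['postID'], $children, $depth + 1);
            }
        }

        showReplies($post['postID'], $children);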

  • Retrieve EF4 POCOs using WCF REST services starter kit

    - by muruge
    I am using WCF REST service (GET method) to retrieve my EF4 POCOs. The service seem to work just fine. When I query the uri in my browser I get the results as expected. In my client application I am trying to use WCF REST Starter Kit's HTTPExtension method - ReadAsDataContract() to convert the result back into my POCO. This works fine when the POCO's navigation property is a single object of related POCO. The problem is when the navigation property is a collection of related POCOs. The ReadAsDataContract() method throws an exception with message "Object reference not set to an instance of an object." Below are my POCOs. [DataContract(Namespace = "", Name = "Trip")] public class Trip { [DataMember(Order = 1)] public virtual int TripID { get; set; } [DataMember(Order = 2)] public virtual int RegionID { get; set; } [DataMember(Order = 3)] public virtual System.DateTime BookingDate { get; set; } [DataMember(Order = 4)] public virtual Region Region { // removed for brevity } } [DataContract(Namespace = "", Name = "Region")] public class Region { [DataMember(Order = 1)] public virtual int RegionID { get; set; } [DataMember(Order = 2)] public virtual string RegionCode { get; set; } [DataMember(Order = 3)] public virtual FixupCollection<Trip> Trips { // removed for brevity } } [CollectionDataContract(Namespace = "", Name = "{0}s", ItemName = "{0}")] [Serializable] public class FixupCollection<T> : ObservableCollection<T> { protected override void ClearItems() { new List<T>(this).ForEach(t => Remove(t)); } protected override void InsertItem(int index, T item) { if (!this.Contains(item)) { base.InsertItem(index, item); } } } And this is how I am trying retrieve a Region POCO. static void GetRegion() { string uri = "http://localhost:8080/TripService/Regions?id=1"; HttpClient client = new HttpClient(uri); using (HttpResponseMessage response = client.Get(uri)) { Region region; response.EnsureStatusIsSuccessful(); try { region = response.Content.ReadAsDataContract<Region>(); // this line throws exception because Region returns a collection of related trips Console.WriteLine(region.RegionName); } catch (Exception ex) { Console.WriteLine(ex.Message); } } } Would appreciate any pointers.

  • Paypal IPN Handler for Cart

    - by kimmothy16
    Hey everyone! I'm using a Paypal add to cart button for a simple ecommerce website. I had a Paypal IPN handler written in PHP that was successfully working for 'Buy it Now' buttons and purchasing one item at a time. I would run a database query each time to update the store's inventory to reflect the purchase. Now I'm upgrading this store to a 'cart' version so people could check out with multiple items at a time. Would anyone be able to tell me, in general, how my IPN handler would need to be altered to accommodate this? I'm unsure of what a response from Paypal looks like for a cart purchase as opposed to a buy it now purchase. Thanks, any help or examples of working IPN cart scripts would be very appreciated! My current code is below.. // Paypal POSTs HTML FORM variables to this page // we must post all the variables back to paypal exactly unchanged and add an extra parameter cmd with value _notify-validate // initialise a variable with the requried cmd parameter $req = 'cmd=_notify-validate'; // go through each of the POSTed vars and add them to the variable foreach ($_POST as $key => $value) { $value = urlencode(stripslashes($value)); $req .= "&$key=$value"; } // post back to PayPal system to validate $header .= "POST /cgi-bin/webscr HTTP/1.0\r\n"; $header .= "Content-Type: application/x-www-form-urlencoded\r\n"; $header .= "Content-Length: " . strlen($req) . "\r\n\r\n"; // In a live application send it back to www.paypal.com // but during development you will want to uswe the paypal sandbox // comment out one of the following lines $fp = fsockopen ('www.sandbox.paypal.com', 80, $errno, $errstr, 30); //$fp = fsockopen ('www.paypal.com', 80, $errno, $errstr, 30); // or use port 443 for an SSL connection //$fp = fsockopen ('ssl://www.paypal.com', 443, $errno, $errstr, 30); if (!$fp) { // HTTP ERROR } else { fputs ($fp, $header . $req); while (!feof($fp)) { $res = fgets ($fp, 1024); if (strcmp ($res, "VERIFIED") == 0) { $item_name = stripslashes($_POST['item_name']); $item_number = $_POST['item_number']; $item_id = $_POST['custom']; $payment_status = $_POST['payment_status']; $payment_amount = $_POST['mc_gross']; //full amount of payment. payment_gross in US $payment_currency = $_POST['mc_currency']; $txn_id = $_POST['txn_id']; //unique transaction id $receiver_email = $_POST['receiver_email']; $payer_email = $_POST['payer_email']; $size = $_POST['option_selection1']; $item_id = $_POST['item_id']; $business = $_POST['business']; if ($payment_status == 'Completed') { // UPDATE THE DATABASE }
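
    For a cart payment, PayPal posts one set of item variables per cart row plus a count; a hedged sketch of the loop that would replace the single-item block is below (the exact names — num_cart_items, item_name1, item_number1, quantity1 and so on — follow PayPal's cart IPN convention and should be verified against the current IPN variable reference):

        // Sketch only: iterate over the cart items reported in the IPN post.
        if (strcmp($res, "VERIFIED") == 0 && $_POST['payment_status'] == 'Completed') {
            $numItems = isset($_POST['num_cart_items']) ? (int)$_POST['num_cart_items'] : 1;
            for ($i = 1; $i <= $numItems; $i++) {
                $itemName   = isset($_POST["item_name$i"])   ? $_POST["item_name$i"]   : null;
                $itemNumber = isset($_POST["item_number$i"]) ? $_POST["item_number$i"] : null;
                $quantity   = isset($_POST["quantity$i"])    ? (int)$_POST["quantity$i"] : 1;
                // UPDATE THE DATABASE: decrement inventory for $itemNumber by $quantity
            }
        }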

  • Add objects to association in OnPreInsert, OnPreUpdate

    - by Dmitriy Nagirnyak
    Hi, I have an event listener (for Audit Logs) which needs to append audit log entries to the association of the object: public Company : IAuditable { // Other stuff removed for bravety IAuditLog IAuditable.CreateEntry() { var entry = new CompanyAudit(); this.auditLogs.Add(entry); return entry; } public virtual IEnumerable<CompanyAudit> AuditLogs { get { return this.auditLogs } } } The AuditLogs collection is mapped with cascading: public class CompanyMap : ClassMap<Company> { public CompanyMap() { // Id and others removed fro bravety HasMany(x => x.AuditLogs).AsSet() .LazyLoad() .Access.ReadOnlyPropertyThroughCamelCaseField() .Cascade.All(); } } And the listener just asks the auditable object to create log entries so it can update them: internal class AuditEventListener : IPreInsertEventListener, IPreUpdateEventListener { public bool OnPreUpdate(PreUpdateEvent ev) { var audit = ev.Entity as IAuditable; if (audit == null) return false; Log(audit); return false; } public bool OnPreInsert(PreInsertEvent ev) { var audit = ev.Entity as IAuditable; if (audit == null) return false; Log(audit); return false; } private static void LogProperty(IAuditable auditable) { var entry = auditable.CreateAuditEntry(); entry.CreatedAt = DateTime.Now; entry.Who = GetCurrentUser(); // Might potentially execute a query. // Also other information is set for entry here } } The problem with it though is that it throws TransientObjectException when commiting the transaction: NHibernate.TransientObjectException : object references an unsaved transient instance - save the transient instance before flushing. Type: CompanyAudit, Entity: CompanyAudit at NHibernate.Engine.ForeignKeys.GetEntityIdentifierIfNotUnsaved(String entityName, Object entity, ISessionImplementor session) at NHibernate.Type.EntityType.GetIdentifier(Object value, ISessionImplementor session) at NHibernate.Type.ManyToOneType.NullSafeSet(IDbCommand st, Object value, Int32 index, Boolean[] settable, ISessionImplementor session) at NHibernate.Persister.Collection.AbstractCollectionPersister.WriteElement(IDbCommand st, Object elt, Int32 i, ISessionImplementor session) at NHibernate.Persister.Collection.AbstractCollectionPersister.PerformInsert(Object ownerId, IPersistentCollection collection, IExpectation expectation, Object entry, Int32 index, Boolean useBatch, Boolean callable, ISessionImplementor session) at NHibernate.Persister.Collection.AbstractCollectionPersister.Recreate(IPersistentCollection collection, Object id, ISessionImplementor session) at NHibernate.Action.CollectionRecreateAction.Execute() at NHibernate.Engine.ActionQueue.Execute(IExecutable executable) at NHibernate.Engine.ActionQueue.ExecuteActions(IList list) at NHibernate.Engine.ActionQueue.ExecuteActions() at NHibernate.Event.Default.AbstractFlushingEventListener.PerformExecutions(IEventSource session) at NHibernate.Event.Default.DefaultFlushEventListener.OnFlush(FlushEvent event) at NHibernate.Impl.SessionImpl.Flush() at NHibernate.Transaction.AdoTransaction.Commit() As the cascading is set to All I expected NH to handle this. I also tried to modify the collection using state but pretty much the same happens. So the question is what is the last chance to modify object's associations before it gets saved? Thanks, Dmitriy.

  • Stuck on the logic of creating tags for posts (like SO tags)? (PHP)

    - by ggfan
    I am stuck on how to create tags for each post on my site. I am not sure how to add the tags into database. Currently... I have 3 tables: +---------------------+ +--------------------+ +---------------------+ | Tags | | Posting | | PostingTags | +---------------------+ +--------------------+ +---------------------+ | + TagID | | + posting_id | | + posting_id | +---------------------+ +--------------------+ +---------------------+ | + TagName | | + title | | + tagid | +---------------------+ +--------------------+ +---------------------+ The Tags table is just the name of the tags(ex: 1 PHP, 2 MySQL,3 HTML) The posting (ex: 1 What is PHP?, 2 What is CSS?, 3 What is HTML?) The postingtags shows the relation between posting and tags. When users type a posting, I insert the data into the "posting" table. It automatically inserts the posting_id for each post(posting_id is a primary key). $title = mysqli_real_escape_string($dbc, trim($_POST['title'])); $query4 = "INSERT INTO posting (title) VALUES ('$title')"; mysqli_query($dbc, $query4); HOWEVER, how do I insert the tags for each post? When users are filling out the form, there is a checkbox area for all the tags available and they check off whatever tags they want. (I am not doing where users type in the tags they want just yet) This shows each tag with a checkbox. When users check off each tag, it gets stored in an array called "postingtag[]". <label class="styled">Select Tags:</label> <?php $dbc = mysqli_connect(DB_HOST, DB_USER, DB_PASSWORD, DB_NAME); $query5 = "SELECT * FROM tags ORDER BY tagname"; $data5 = mysqli_query($dbc, $query5); while ($row5 = mysqli_fetch_array($data5)) { echo '<li><input type="checkbox" name="postingtag[]" value="'.$row5['tagname'].'" ">'.$row5['tagname'].'</li>'; } ?> My question is how do I insert the tags in the array ("postingtag") into my "postingtags" table? Should I... $postingtag = $_POST["postingtag"]; foreach($postingtag as $value){ $query5 = "INSERT INTO postingtags (posting_id, tagID) VALUES (____, $value)"; mysqli_query($dbc, $query5); } 1.In this query, how do I get the posting_id value of the post? I am stuck on the logic here, so if someone can help me explain the next step, I would appreciate it! Is there an easier way to insert tags?
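
    The missing piece is the id generated by the posting insert: with mysqli it can be read back with mysqli_insert_id() straight after the INSERT, and prepared statements keep the tag inserts safe. A sketch, assuming the checkbox value is changed to hold the tag id (TagID) rather than the tag name:

        <?php
        // Sketch: insert the posting, capture its auto-increment id, then insert
        // one postingtags row per checked tag. Assumes the checkbox value holds
        // the tag id instead of the tag name.
        $stmt = mysqli_prepare($dbc, "INSERT INTO posting (title) VALUES (?)");
        mysqli_stmt_bind_param($stmt, 's', $title);
        mysqli_stmt_execute($stmt);

        $posting_id = mysqli_insert_id($dbc);   // id of the posting just inserted

        $tagStmt = mysqli_prepare($dbc, "INSERT INTO postingtags (posting_id, tagID) VALUES (?, ?)");
        foreach ($_POST['postingtag'] as $tagID) {
            $tagID = (int)$tagID;
            mysqli_stmt_bind_param($tagStmt, 'ii', $posting_id, $tagID);
            mysqli_stmt_execute($tagStmt);
        }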

  • C# Confusing Results from Performance Test

    - by aip.cd.aish
    I am currently working on an image processing application. The application captures images from a webcam and then does some processing on it. The app needs to be real time responsive (ideally < 50ms to process each request). I have been doing some timing tests on the code I have and I found something very interesting (see below). clearLog(); log("Log cleared"); camera.QueryFrame(); camera.QueryFrame(); log("Camera buffer cleared"); Sensor s = t.val; log("Sx: " + S.X + " Sy: " + S.Y); Image<Bgr, Byte> cameraImage = camera.QueryFrame(); log("Camera output acuired for processing"); Each time the log is called the time since the beginning of the processing is displayed. Here is my log output: [3 ms]Log cleared [41 ms]Camera buffer cleared [41 ms]Sx: 589 Sy: 414 [112 ms]Camera output acuired for processing The timings are computed using a StopWatch from System.Diagonostics. QUESTION 1 I find this slightly interesting, since when the same method is called twice it executes in ~40ms and when it is called once the next time it took longer (~70ms). Assigning the value can't really be taking that long right? QUESTION 2 Also the timing for each step recorded above varies from time to time. The values for some steps are sometimes as low as 0ms and sometimes as high as 100ms. Though most of the numbers seem to be relatively consistent. I guess this may be because the CPU was used by some other process in the mean time? (If this is for some other reason, please let me know) Is there some way to ensure that when this function runs, it gets the highest priority? So that the speed test results will be consistently low (in terms of time). EDIT I change the code to remove the two blank query frames from above, so the code is now: clearLog(); log("Log cleared"); Sensor s = t.val; log("Sx: " + S.X + " Sy: " + S.Y); Image<Bgr, Byte> cameraImage = camera.QueryFrame(); log("Camera output acuired for processing"); The timing results are now: [2 ms]Log cleared [3 ms]Sx: 589 Sy: 414 [5 ms]Camera output acuired for processing The next steps now take longer (sometimes, the next step jumps to after 20-30ms, while the next step was previously almost instantaneous). I am guessing this is due to the CPU scheduling. Is there someway I can ensure the CPU does not get scheduled to do something else while it is running through this code?
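
    On the scheduling question: the process and thread priority can be raised around the timed section, with the caveat that this only makes preemption less likely rather than guaranteeing exclusive CPU time. A sketch:

        // Sketch: raise priorities for the duration of a timing-sensitive run.
        using System;
        using System.Diagnostics;
        using System.Threading;

        static class TimingHarness
        {
            public static void RunTimed(Action work)
            {
                Process proc = Process.GetCurrentProcess();
                proc.PriorityClass = ProcessPriorityClass.High;
                Thread.CurrentThread.Priority = ThreadPriority.Highest;
                try
                {
                    work();   // the image-processing code being measured
                }
                finally
                {
                    Thread.CurrentThread.Priority = ThreadPriority.Normal;
                    proc.PriorityClass = ProcessPriorityClass.Normal;
                }
            }
        }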

  • JBOSS 7.1 started hanging after 6 months of deployment

    - by PVR
    My application is been live from 6 months. The application is host on jboss 7.1 server. From last few days I am finding numerous problem of hanging of jboss server. Though I restart the jboss server again, it does not invoke. I need to restart the server machine itself. Can anyone please let me know what could be the cause of these problems and the workable resolutions or any suggestion ? Kindly dont degrade the question as I am facing a lot problems due to this hanging issue. Also for the information, the application is based on Java, GWT, Hibernate 3. Please find the standalone.xml file in case if it helps. <extensions> <extension module="org.jboss.as.clustering.infinispan"/> <extension module="org.jboss.as.configadmin"/> <extension module="org.jboss.as.connector"/> <extension module="org.jboss.as.deployment-scanner"/> <extension module="org.jboss.as.ee"/> <extension module="org.jboss.as.ejb3"/> <extension module="org.jboss.as.jaxrs"/> <extension module="org.jboss.as.jdr"/> <extension module="org.jboss.as.jmx"/> <extension module="org.jboss.as.jpa"/> <extension module="org.jboss.as.logging"/> <extension module="org.jboss.as.mail"/> <extension module="org.jboss.as.naming"/> <extension module="org.jboss.as.osgi"/> <extension module="org.jboss.as.pojo"/> <extension module="org.jboss.as.remoting"/> <extension module="org.jboss.as.sar"/> <extension module="org.jboss.as.security"/> <extension module="org.jboss.as.threads"/> <extension module="org.jboss.as.transactions"/> <extension module="org.jboss.as.web"/> <extension module="org.jboss.as.webservices"/> <extension module="org.jboss.as.weld"/> </extensions> <system-properties> <property name="org.apache.coyote.http11.Http11Protocol.COMPRESSION" value="on"/> <property name="org.apache.coyote.http11.Http11Protocol.COMPRESSION_MIME_TYPES" value="text/javascript,text/css,text/html,text/xml,text/json"/> </system-properties> <management> <security-realms> <security-realm name="ManagementRealm"> <authentication> <properties path="mgmt-users.properties" relative-to="jboss.server.config.dir"/> </authentication> </security-realm> <security-realm name="ApplicationRealm"> <authentication> <properties path="application-users.properties" relative-to="jboss.server.config.dir"/> </authentication> </security-realm> </security-realms> <management-interfaces> <native-interface security-realm="ManagementRealm"> <socket-binding native="management-native"/> </native-interface> <http-interface security-realm="ManagementRealm"> <socket-binding http="management-http"/> </http-interface> </management-interfaces> </management> <profile> <subsystem xmlns="urn:jboss:domain:logging:1.1"> <console-handler name="CONSOLE"> <level name="INFO"/> <formatter> <pattern-formatter pattern="%d{HH:mm:ss,SSS} %-5p [%c] (%t) %s%E%n"/> </formatter> </console-handler> <periodic-rotating-file-handler name="FILE"> <formatter> <pattern-formatter pattern="%d{HH:mm:ss,SSS} %-5p [%c] (%t) %s%E%n"/> </formatter> <file relative-to="jboss.server.log.dir" path="server.log"/> <suffix value=".yyyy-MM-dd"/> <append value="true"/> </periodic-rotating-file-handler> <logger category="com.arjuna"> <level name="WARN"/> </logger> <logger category="org.apache.tomcat.util.modeler"> <level name="WARN"/> </logger> <logger category="sun.rmi"> <level name="WARN"/> </logger> <logger category="jacorb"> <level name="WARN"/> </logger> <logger category="jacorb.config"> <level name="ERROR"/> </logger> <root-logger> <level name="INFO"/> <handlers> <handler name="CONSOLE"/> <handler name="FILE"/> </handlers> 
</root-logger> </subsystem> <subsystem xmlns="urn:jboss:domain:configadmin:1.0"/> <subsystem xmlns="urn:jboss:domain:datasources:1.0"> <datasources> <datasource jndi-name="java:jboss/datasources/ExampleDS" pool-name="ExampleDS" enabled="true" use-java-context="true"> <connection-url>jdbc:h2:mem:test;DB_CLOSE_DELAY=-1</connection-url> <driver>h2</driver> <security> <user-name>sa</user-name> <password>sa</password> </security> </datasource> <drivers> <driver name="h2" module="com.h2database.h2"> <xa-datasource-class>org.h2.jdbcx.JdbcDataSource</xa-datasource-class> </driver> </drivers> </datasources> </subsystem> <subsystem xmlns="urn:jboss:domain:deployment-scanner:1.1"> <deployment-scanner path="deployments" relative-to="jboss.server.base.dir" scan-interval="5000"/> </subsystem> <subsystem xmlns="urn:jboss:domain:ee:1.0"/> <subsystem xmlns="urn:jboss:domain:ejb3:1.2"> <session-bean> <stateless> <bean-instance-pool-ref pool-name="slsb-strict-max-pool"/> </stateless> <stateful default-access-timeout="5000" cache-ref="simple"/> <singleton default-access-timeout="5000"/> </session-bean> <pools> <bean-instance-pools> <strict-max-pool name="slsb-strict-max-pool" max-pool-size="20" instance-acquisition-timeout="5" instance-acquisition-timeout-unit="MINUTES"/> <strict-max-pool name="mdb-strict-max-pool" max-pool-size="20" instance-acquisition-timeout="5" instance-acquisition-timeout-unit="MINUTES"/> </bean-instance-pools> </pools> <caches> <cache name="simple" aliases="NoPassivationCache"/> <cache name="passivating" passivation-store-ref="file" aliases="SimpleStatefulCache"/> </caches> <passivation-stores> <file-passivation-store name="file"/> </passivation-stores> <async thread-pool-name="default"/> <timer-service thread-pool-name="default"> <data-store path="timer-service-data" relative-to="jboss.server.data.dir"/> </timer-service> <remote connector-ref="remoting-connector" thread-pool-name="default"/> <thread-pools> <thread-pool name="default"> <max-threads count="10"/> <keepalive-time time="100" unit="milliseconds"/> </thread-pool> </thread-pools> </subsystem> <subsystem xmlns="urn:jboss:domain:infinispan:1.2" default-cache-container="hibernate"> <cache-container name="hibernate" default-cache="local-query"> <local-cache name="entity"> <transaction mode="NON_XA"/> <eviction strategy="LRU" max-entries="10000"/> <expiration max-idle="100000"/> </local-cache> <local-cache name="local-query"> <transaction mode="NONE"/> <eviction strategy="LRU" max-entries="10000"/> <expiration max-idle="100000"/> </local-cache> <local-cache name="timestamps"> <transaction mode="NONE"/> <eviction strategy="NONE"/> </local-cache> </cache-container> </subsystem> <subsystem xmlns="urn:jboss:domain:jaxrs:1.0"/> <subsystem xmlns="urn:jboss:domain:jca:1.1"> <archive-validation enabled="true" fail-on-error="true" fail-on-warn="false"/> <bean-validation enabled="true"/> <default-workmanager> <short-running-threads> <core-threads count="50"/> <queue-length count="50"/> <max-threads count="50"/> <keepalive-time time="10" unit="seconds"/> </short-running-threads> <long-running-threads> <core-threads count="50"/> <queue-length count="50"/> <max-threads count="50"/> <keepalive-time time="100" unit="seconds"/> </long-running-threads> </default-workmanager> <cached-connection-manager/> </subsystem> <subsystem xmlns="urn:jboss:domain:jdr:1.0"/> <subsystem xmlns="urn:jboss:domain:jmx:1.1"> <show-model value="true"/> <remoting-connector/> </subsystem> <subsystem xmlns="urn:jboss:domain:jpa:1.0"> <jpa default-datasource=""/> 
</subsystem> <subsystem xmlns="urn:jboss:domain:mail:1.0"> <mail-session jndi-name="java:jboss/mail/Default"> <smtp-server outbound-socket-binding-ref="mail-smtp"/> </mail-session> </subsystem> <subsystem xmlns="urn:jboss:domain:naming:1.1"/> <subsystem xmlns="urn:jboss:domain:osgi:1.2" activation="lazy"> <properties> <property name="org.osgi.framework.startlevel.beginning"> 1 </property> </properties> <capabilities> <capability name="javax.servlet.api:v25"/> <capability name="javax.transaction.api"/> <capability name="org.apache.felix.log" startlevel="1"/> <capability name="org.jboss.osgi.logging" startlevel="1"/> <capability name="org.apache.felix.configadmin" startlevel="1"/> <capability name="org.jboss.as.osgi.configadmin" startlevel="1"/> </capabilities> </subsystem> <subsystem xmlns="urn:jboss:domain:pojo:1.0"/> <subsystem xmlns="urn:jboss:domain:remoting:1.1"> <connector name="remoting-connector" socket-binding="remoting" security-realm="ApplicationRealm"/> </subsystem> <subsystem xmlns="urn:jboss:domain:resource-adapters:1.0"/> <subsystem xmlns="urn:jboss:domain:sar:1.0"/> <subsystem xmlns="urn:jboss:domain:security:1.1"> <security-domains> <security-domain name="other" cache-type="default"> <authentication> <login-module code="Remoting" flag="optional"> <module-option name="password-stacking" value="useFirstPass"/> </login-module> <login-module code="RealmUsersRoles" flag="required"> <module-option name="usersProperties" value="${jboss.server.config.dir}/application-users.properties"/> <module-option name="rolesProperties" value="${jboss.server.config.dir}/application-roles.properties"/> <module-option name="realm" value="ApplicationRealm"/> <module-option name="password-stacking" value="useFirstPass"/> </login-module> </authentication> </security-domain> <security-domain name="jboss-web-policy" cache-type="default"> <authorization> <policy-module code="Delegating" flag="required"/> </authorization> </security-domain> <security-domain name="jboss-ejb-policy" cache-type="default"> <authorization> <policy-module code="Delegating" flag="required"/> </authorization> </security-domain> </security-domains> </subsystem> <subsystem xmlns="urn:jboss:domain:threads:1.1"/> <subsystem xmlns="urn:jboss:domain:transactions:1.1"> <core-environment> <process-id> <uuid/> </process-id> </core-environment> <recovery-environment socket-binding="txn-recovery-environment" status-socket-binding="txn-status-manager"/> <coordinator-environment default-timeout="300"/> </subsystem> <subsystem xmlns="urn:jboss:domain:web:1.1" default-virtual-server="default-host" native="false"> <connector name="http" protocol="HTTP/1.1" scheme="http" socket-binding="http"/> <virtual-server name="default-host" enable-welcome-root="false"> <alias name="localhost"/> <alias name="nextenders.com"/> </virtual-server> </subsystem> <subsystem xmlns="urn:jboss:domain:webservices:1.1"> <modify-wsdl-address>true</modify-wsdl-address> <wsdl-host>${jboss.bind.address:127.0.0.1}</wsdl-host> <endpoint-config name="Standard-Endpoint-Config"/> <endpoint-config name="Recording-Endpoint-Config"> <pre-handler-chain name="recording-handlers" protocol-bindings="##SOAP11_HTTP ##SOAP11_HTTP_MTOM ##SOAP12_HTTP ##SOAP12_HTTP_MTOM"> <handler name="RecordingHandler" class="org.jboss.ws.common.invocation.RecordingServerHandler"/> </pre-handler-chain> </endpoint-config> </subsystem> <subsystem xmlns="urn:jboss:domain:weld:1.0"/> </profile> <interfaces> <interface name="management"> <inet-address value="${jboss.bind.address.management:127.0.0.1}"/> 
</interface> <interface name="public"> <inet-address value="${jboss.bind.address:127.0.0.1}"/> </interface> <interface name="unsecure"> <inet-address value="${jboss.bind.address.unsecure:127.0.0.1}"/> </interface> </interfaces> <socket-binding-group name="standard-sockets" default-interface="public" port-offset="${jboss.socket.binding.port-offset:0}"> <socket-binding name="management-native" interface="management" port="${jboss.management.native.port:9999}"/> <socket-binding name="management-http" interface="management" port="${jboss.management.http.port:9990}"/> <socket-binding name="management-https" interface="management" port="${jboss.management.https.port:9443}"/> <socket-binding name="ajp" port="8009"/> <socket-binding name="http" port="80"/> <socket-binding name="https" port="443"/> <socket-binding name="osgi-http" interface="management" port="8090"/> <socket-binding name="remoting" port="4447"/> <socket-binding name="txn-recovery-environment" port="4712"/> <socket-binding name="txn-status-manager" port="4713"/> <outbound-socket-binding name="mail-smtp"> <remote-destination host="localhost" port="25"/> </outbound-socket-binding> </socket-binding-group>

  • Please Explain Drupal schema and drupal_write_record

    - by Aaron
    Hi. A few questions. 1) Where is the best place to populate a new database table when a module is first installed, enabled? I need to go and get some data from an external source and want to do it transparently when the user installs/enables my custom module. I create the schema in {mymodule}_schema(), do drupal_install_schema({tablename}); in hook_install. Then I try to populate the table in hook_enable using drupal_write_record. I confirmed the table was created, I get no errors when hook_enable executes, but when I query the new table, I get no rows back--it's empty. Here's one variation of the code I've tried: /** * Implementation of hook_schema() */ function ncbi_subsites_schema() { // we know it's MYSQL, so no need to check $schema['ncbi_subsites_sites'] = array( 'description' => 'The base table for subsites', 'fields' => array( 'site_id' => array( 'description' => 'Primary id for site', 'type' => 'serial', 'unsigned' => TRUE, 'not null' => TRUE, ), // end site_id 'title' => array( 'description' => 'The title of the subsite', 'type' => 'varchar', 'length' => 255, 'not null' => TRUE, 'default' => '', ), //end title field 'url' => array( 'description' => 'The URL of the subsite in Production', 'type' => 'varchar', 'length' => 255, 'default' => '', ), //end url field ), //end fields 'unique keys' => array( 'site_id'=> array('site_id'), 'title' => array('title'), ), //end unique keys 'primary_key' => array('site_id'), ); // end schema return $schema; } Here's hook_install: function ncbi_subsites_install() { drupal_install_schema('ncbi_subsites'); } Here's hook_enable: function ncbi_subsites_enable() { drupal_get_schema('ncbi_subsites_site'); // my helper function to get data for table (not shown) $subsites = ncbi_subsites_get_subsites(); foreach( $subsites as $name=>$attrs ) { $record = new stdClass(); $record->title = $name; $record->url = $attrs['homepage']; drupal_write_record( 'ncbi_subsites_sites', $record ); } } Can someone tell me what I'm missing?

  • C# and NpgsqlDataAdapter returning a single string instead of a data table

    - by tme321
    I have a postgresql db and a C# application to access it. I'm having a strange error with values I return from a NpgsqlDataAdapter.Fill command into a DataSet. I've got this code: NpgsqlCommand n = new NpgsqlCommand(); n.Connection = connector; // a class member NpgsqlConnection DataSet ds = new DataSet(); DataTable dt = new DataTable(); // DBTablesRef are just constants declared for // the db table names and columns ArrayList cols = new ArrayList(); cols.Add(DBTablesRef.all); //all is just * ArrayList idCol = new ArrayList(); idCol.Add(DBTablesRef.revIssID); ArrayList idVal = new ArrayList(); idVal.Add(idNum); // a function parameter // Select builder and Where builder are just small // functions that return an sql statement based // on the parameters. n is passed to the where // builder because the builder uses named // parameters and sets them in the NpgsqlCommand // passed in String select = SelectBuilder(DBTablesRef.revTableName, cols) + WhereBuilder(n,idCol, idVal); n.CommandText = select; try { NpgsqlDataAdapter da = new NpgsqlDataAdapter(n); ds.Reset(); // filling DataSet with result from NpgsqlDataAdapter da.Fill(ds); // C# DataSet takes multiple tables, but only the first is used here dt = ds.Tables[0]; } catch (Exception e) { Console.WriteLine(e.ToString()); } So my problem is this: the above code works perfectly, just like I want it to. However, if instead of doing a select on all (*) if I try to name individual columns to return from the query I get the information I asked for, but rather than being split up into seperate entries in the data table I get a string in the first index of the data table that looked something like: "(0,5,false,Bob Smith,7)" And the data is correct, I would be expecting 0, then 5, then a boolean, then some text etc. But I would (obviously) prefer it to not be returned as just one big string. Anyone know why if I do a select on * I get a datatable as expected, but if I do a select on specific columns I get a data table with one entry that is the string of the values I'm asking for?

  • Fulltext search for django : Mysql not so bad ? (vs sphinx, xapian)

    - by Eric
    I am studying fulltext search engines for django. It must be simple to install, fast indexing, fast index update, not blocking while indexing, fast search. After reading many web pages, I put in short list : Mysql MYISAM fulltext, djapian/python-xapian, and django-sphinx I did not choose lucene because it seems complex, nor haystack as it has less features than djapian/django-sphinx (like fields weighting). Then I made some benchmarks, to do so, I collected many free books on the net to generate a database table with 1 485 000 records (id,title,body), each record is about 600 bytes long. From the database, I also generated a list of 100 000 existing words and shuffled them to create a search list. For the tests, I made 2 runs on my laptop (4Go RAM, Dual core 2.0Ghz): the first one, just after a server reboot to clear all caches, the second is done juste after in order to test how good are cached results. Here are the "home made" benchmark results : 1485000 records with Title (150 bytes) and body (450 bytes) Mysql 5.0.75/Ubuntu 9.04 Fulltext : ========================================================================== Full indexing : 7m14.146s 1 thread, 1000 searchs with single word randomly taken from database : First run : 0:01:11.553524 next run : 0:00:00.168508 Mysql 5.5.4 m3/Ubuntu 9.04 Fulltext : ========================================================================== Full indexing : 6m08.154s 1 thread, 1000 searchs with single word randomly taken from database : First run : 0:01:11.553524 next run : 0:00:00.168508 1 thread, 100000 searchs with single word randomly taken from database : First run : 9m09s next run : 5m38s 1 thread, 10000 random strings (random strings should not be found in database) : just after the 100000 search test : 0:00:15.007353 1 thread, boolean search : 1000 x (+word1 +word2) First run : 0:00:21.205404 next run : 0:00:00.145098 Djapian Fulltext : ========================================================================== Full indexing : 84m7.601s 1 thread, 1000 searchs with single word randomly taken from database with prefetch : First run : 0:02:28.085680 next run : 0:00:14.300236 python-xapian Fulltext : ========================================================================== 1 thread, 1000 searchs with single word randomly taken from database : First run : 0:01:26.402084 next run : 0:00:00.695092 django-sphinx Fulltext : ========================================================================== Full indexing : 1m25.957s 1 thread, 1000 searchs with single word randomly taken from database : First run : 0:01:30.073001 next run : 0:00:05.203294 1 thread, 100000 searchs with single word randomly taken from database : First run : 12m48s next run : 9m45s 1 thread, 10000 random strings (random strings should not be found in database) : just after the 100000 search test : 0:00:23.535319 1 thread, boolean search : 1000 x (word1 word2) First run : 0:00:20.856486 next run : 0:00:03.005416 As you can see, Mysql is not so bad at all for fulltext search. In addition, its query cache is very efficient. Mysql seems to me a good choice as there is nothing to install (I need just to write a small script to synchronize an Innodb production table to a MyISAM search table) and as I do not really need advanced search feature like stemming etc... Here is the question : What do you think about Mysql fulltext search engine vs sphinx and xapian ?
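
    For readers who have not used the engine being benchmarked here, the single-word and boolean runs above correspond to queries of roughly this shape (table and column names are illustrative only):

        -- Illustrative only: a MyISAM table with a FULLTEXT index, a single-word
        -- search, and a boolean-mode search like the "+word1 +word2" test above.
        CREATE TABLE book_search (
            id    INT UNSIGNED NOT NULL PRIMARY KEY,
            title VARCHAR(255) NOT NULL,
            body  TEXT NOT NULL,
            FULLTEXT KEY ft_title_body (title, body)
        ) ENGINE=MyISAM;

        SELECT id, title FROM book_search
        WHERE MATCH(title, body) AGAINST ('word1');

        SELECT id, title FROM book_search
        WHERE MATCH(title, body) AGAINST ('+word1 +word2' IN BOOLEAN MODE);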

  • DevExpress Reporting Session Issue

    - by LeeHull
    This is pretty complicated so I will explain what is going on. I have created an ASP.NET website for displaying images, the way we use our images is, we have a database table that contains the URL where the images are located, however we recently started moving them from the filesystem, and storing them directly into the database as binary, but since we don't want to break older applications, we currently store them both places till they are all updated. Alright, the website I'm working on, is just a reporting website to display images by a date range, I have created an HTTPHandler to display the image, depending if they exist in the database as binary, if the row is DBNull, I just grab the URL from the other table instead, I just read the image, convert it into a byte[] and write it to the response, so it is interpreted as an image. I have created a page to display the report using DevExpress Reporting, I just query the report, save the dataset into a session, and the report reads the session to bind the report, i also have a picturebox in the report bound to "Handler.ashx?id=ImageID", this is because I am not binding a URL to the image, since it is a byte[] Also since there could be a report with 10000+ images, I am reading the dataset from the session inside the handler and just pulling out the image from the passed in ImageID, I am doing this to prevent connecting to the database each and every row. Now the strange thing is, Report loads fine.. data is loading, I have paging on the devexpress report viewer, it is loading the images fine, however here is the issue. When I print or export, I am not getting any images, I am debugged and found out the Session.Count is 0 when it tries to print or export, which is strange since there is more than just the dataset in the session. I have also added IRequiresSessionState to allow sessions in the handler, but the session count is still 0, I have changing the Handler to an aspx page, same issue. Any ideas or suggestions on logic changes, I'm all ears.. this is a very difficult situation since I have to display the images from database as a string AND byte[] since one can be URL and other be the image bytes, but I also can't slam the database on connection calls each row either. I have also reported the issue to DevExpress and they are clueless as well, since they don't dispose the session in the exporting process.

  • Global Entity Framework Context in WPF Application

    - by OffApps Cory
    Good day, I am in the middle of development of a WPF application that is using Entity Framework (.NET 3.5). It accesses the entities in several places throughout. I am worried about consistency throughout the application in regard to the entities. Should I be instancing separate contexts in my different views, or should I (and is a a good way to do this) instance a single context that can be accessed globally? For instance, my entity model has three sections, Shipments (with child packages and further child contents), Companies/Contacts (with child addresses and telephones), and disk specs. The Shipments and EditShipment views access the DiskSpecs, and the OptionsView manages the DiskSpecs (Create, Edit, Delete). If I edit a DiskSpec, I have to have something in the ShipmentsView to retrieve the latest specs if I have separate contexts right? If it is safe to have one overall context from which the rest of the app retrieves it's objects, then I imagine that is the way to go. If so, where would that instance be put? I am using VB.NET, but I can translate from C# pretty good. Any help would be appreciated. I just don't want one of those applications where the user has to hit reload a dozen times in different parts of the app to get the new data. Update: OK so I have changed my app as follows: All contexts are created in Using Blocks to dispose of them after they are no longer needed. When loaded, all entities are detatched from context before it is disposed. A new property in the MainViewModel (ContextUpdated) raises an event that all of the other ViewModels subscribe to which runs that ViewModels RefreshEntities method. After implementing this, I started getting errors saying that an entity can only be referenced by one ChangeTracker at a time. Since I could not figure out which context was still referencing the entity (shouldn't be any context right?) I cast the object as IEntityWithChangeTracker, and set SetChangeTracker to nothing (Null). This has let to the current problem: When I Null the changeTracker on the Entity, and then attach it to a context, it loses it's changed state and does not get updated to the database. However if I do not null the change tracker, I can't attach. I have my own change tracking code, so that is not a problem. My new question is, how are you supposed to do this. A good example Entity query and entity save code snipped would go a long way, cause I am beating my head in trying to get what I once thought was a simple transaction to work. Any help would elevate you to near god-hood.

  • ISQLQuery using AddJoin not loading related instance

    - by Remi Despres-Smyth
    I've got a problem using an ISQLQuery with an AddJoin. The entity I'm trying to return is RegionalFees, which has a composite-id which includes a Province instance. (This is the the instance being improperly loaded.) Here's the mapping: <class name="Project.RegionalFees, Project" table="tblRegionalFees"> <composite-id name="Id" class="Project.RegionalFeesId, project" unsaved-value="any" access="property"> <key-many-to-one class="Project.Province, Project" name="Region" access="property" column="provinceId" not-found="exception" /> <key-property name="StartDate" access="property" column="startDate" type="DateTime" /> </composite-id> <property name="SomeFee" column="someFee" type="Decimal" /> <property name="SomeOtherFee" column="someOtherFee" type="Decimal" /> <!-- Other unrelated stuff --> </class> <class name="Project.Province, Project" table="trefProvince" mutable="false"> <id name="Id" column="provinceId" type="Int64" unsaved-value="0"> <generator class="identity" /> </id> <property name="Code" column="code" access="nosetter.pascalcase-m-underscore" /> <property name="Label" column="label" access="nosetter.pascalcase-m-underscore" /> </class> Here's my query method: public IEnumerable<RegionalFees> GetRegionalFees() { // Using an ISQLQuery cause there doesn't appear to be an equivalent of // the SQL HAVING clause, which would be optimal for loading this set const String qryStr = "SELECT * " + "FROM tblRegionalFees INNER JOIN trefProvince " + "ON tblRegionalFees.provinceId=trefProvince.provinceId " + "WHERE EXISTS ( " + "SELECT provinceId, MAX(startDate) AS MostRecentFeesDate " + "FROM tblRegionalFees InnerRF " + "WHERE tblRegionalFees.provinceId=InnerRF.provinceId " + "AND startDate <= ? " + "GROUP BY provinceId " + "HAVING tblRegionalFees.startDate=MAX(startDate))"; var qry = NHibernateSessionManager.Instance.GetSession().CreateSQLQuery(qryStr); qry.SetDateTime(0, DateTime.Now); qry.AddEntity("RegFees", typeof(RegionalFees)); qry.AddJoin("Region", "RegFees.Id.Region"); return qry.List<RegionalFees>(); } The odd behavior here is that when I call GetRegionalFees (whose goal is to load just the most recent fee instances per region), it all loads fine if the Province instance is a proxy. If, however, Province is not loaded as a transparent proxy, the Province instance which is part of RegionalFees' RegionalFeesId property has null Code and Region values, although the Id value is set correctly. It looks to me like I have a problem in how I'm joining the Province class - since if it's lazy loaded the id is set from tblRegionalFees, and it gets loaded independently afterwards - but I haven't been able to figure out the solution.

  • Executing procedural queries in perl

    - by KalpaG
    I have a query like this: set @valid_total:=0; set @invalid_total:=0; select week as weekno, measured_week,project_id as project, role_category_id as role_category, valid_count,valid_tickets, (@valid_total := @valid_total + valid_count) as valid_total, invalid_count,invalid_tickets, (@invalid_total := @invalid_total + invalid_count) as invalid_total from metric_fault_bug_project where measured_week = yearweek(curdate()) and role_category_id = 1 and project_id = 11; It executes fine in HeidiSQL (a MySQL client), but when it comes to Perl it gives me this error: DBD::mysql::st execute failed: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near ':=0; set :=0; select week as weekno, measured_week,project_id as project' at line 1 at D:\Mx\scripts\test.pl line 35. Can't execute SQL statement: You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near ':=0; set :=0; select week as weekno, measured_week,project_id as project' at line 1 The problem seems to be in the set @valid_total:=0; line. I am fairly new to Perl. Can anyone help? This is the complete Perl code: #!/usr/bin/perl #use lib '/x01/home/kalpag/libs'; use DBI; use strict; use warnings; my $sid = 'issues'; my $user = 'root'; my $passwd = 'kalpa'; my $connection = "DBI:mysql:database=$sid;host=localhost"; my $dbhh = DBI->connect( $connection, $user, $passwd) || die "Database connection not made: $DBI::errstr"; my $sql_query = 'set @valid_total:=0; set @invalid_total:=0; select week as weekno, measured_week,project_id, role_category_id as role_category, valid_count,valid_tickets, (@valid_total := @valid_total + valid_count) as valid_total, invalid_count,invalid_tickets, (@invalid_total := @invalid_total + invalid_count) as invalid_total from metric_fault_bug_project where measured_week = yearweek(curdate()) and role_category_id = 1 and project_id = 11'; my $sth = $dbhh->prepare($sql_query) or die "Can't prepare SQL statement: $DBI::errstr\n"; $sth->execute() or die "Can't execute SQL statement: $DBI::errstr\n"; while ( my @memory = $sth->fetchrow() ) { print "@memory \n"; }
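
    DBD::mysql prepares a single statement per call by default, so the two SET statements and the SELECT have to be issued separately (or the user variables dropped altogether). A sketch of the separated version of the script's database section:

        # Sketch: run each statement on its own instead of one multi-statement string.
        use strict;
        use warnings;
        use DBI;

        my $dbh = DBI->connect("DBI:mysql:database=issues;host=localhost",
                               'root', 'kalpa', { RaiseError => 1 })
            or die "Database connection not made: $DBI::errstr";

        $dbh->do('SET @valid_total := 0');
        $dbh->do('SET @invalid_total := 0');

        my $sth = $dbh->prepare(q{
            SELECT week AS weekno, measured_week, project_id,
                   role_category_id AS role_category,
                   valid_count, valid_tickets,
                   (@valid_total := @valid_total + valid_count) AS valid_total,
                   invalid_count, invalid_tickets,
                   (@invalid_total := @invalid_total + invalid_count) AS invalid_total
            FROM metric_fault_bug_project
            WHERE measured_week = YEARWEEK(CURDATE())
              AND role_category_id = 1
              AND project_id = 11
        });
        $sth->execute();

        while ( my @row = $sth->fetchrow_array() ) {
            print "@row\n";
        }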

    Read the article

  • UTF8 encoding not working when using ajax.

    - by Ronedog
I recently changed some of my pages to be displayed via ajax, and I am having some confusion as to why the utf8 encoding is now displaying a question mark inside of a box, whereas before it wasn't.

For example, the original page was index.php. The charset was explicitly set to utf8 in the <head>, and I then used PHP to query the database. Here is the original index.php page:

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd">
<html lang="en">
<head>
  <meta http-equiv="content-type" content="text/html; charset=UTF-8">
  <title>Title here</title>
</head>
<body class='body_bgcolor' >
  <div id="main_container">
    <?php
      // Data displayed via PHP was simply a select statement that output the HTML.
    ?>
  </div>

However, when I made the change to add a menu that populates "main_container" via ajax, all the utf8 encoding stopped working. Here's the new code:

<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd">
<html lang="en">
<head>
  <meta http-equiv="content-type" content="text/html; charset=UTF-8">
  <title>Title here</title>
</head>
<body class='body_bgcolor' >
  <a href="#" onclick="display_html('about_us');"> About Us </a>
  <div id="main_container"></div>

The display_html() function calls the javascript page, which uses a jQuery ajax call to retrieve the html stored inside a php page and then places the html inside the div with an id of "main_container". I'm setting the charset in jQuery to be utf8 like:

$.ajax({
    async: false,
    type: "GET",
    url: url,
    contentType: "charset=utf-8",
    success: function(data) {
        $("#main_container").html(data);
    }
});

What am I doing wrong?
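Two things may be worth checking here. First, the contentType option in $.ajax describes the request being sent, not the response coming back; the fragment the PHP page returns has to declare its own UTF-8 charset (for example through its Content-Type response header), because the parent page's <meta> tag does not apply to content fetched separately. Second, if the characters are already mangled when they leave MySQL, the connection character set is the usual suspect; a minimal check - an assumption on my part, since the post does not show the database connection code - is to force the connection to UTF-8 before running the SELECTs that build the fragment:

-- Issued once per database connection, before the queries that produce the ajax fragment;
-- it tells MySQL to return result sets to the client encoded as UTF-8.
SET NAMES 'utf8';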

    Read the article

  • VBA/SQL recordsets

    - by intruesiive
The project I'm asking about is for sending an email to teachers asking what books they're using for the classes they're teaching next semester, so that the books can be ordered. I have a query that compares the course number of this upcoming semester's classes to the course numbers of historical textbook orders, pulling out only those classes that are being taught this semester. That's where I get lost.

I have a table that contains the following:

- Professor
- Course Number
- Year
- Book
- Title

The data looks like this:

professor   year   course number   title
smith       13     1111            Pride and Prejudice
smith       13     1111            The Fountainhead
smith       13     1222            The Alchemist
smith       12     1111            Pride and Prejudice
smith       11     1222            Infinite Jest
smith       10     1333            The Bible
smith       13     1333            The Bible
smith       12     1222            The Alchemist
smith       10     1111            Moby Dick
johnson     12     1222            The Tipping Point
johnson     11     1333            Anna Kerenina
johnson     10     1333            Everything is Illuminated
johnson     12     1222            The Savage Detectives
johnson     11     1333            In Search of Lost Time
johnson     10     1333            Great Expectations
johnson     9      1222            Proust on the Shore

Here's what I need the code to do "on paper": Group the records by professor. Determine every unique course number in that group, and group records by course number. For each unique course number, determine the highest year associated. Then spit out every record with that professor + course number + year combination.

With the sample data, the results would be:

professor   year   course number   title
smith       13     1111            Pride and Prejudice
smith       13     1111            The Fountainhead
smith       13     1222            The Alchemist
smith       13     1333            The Bible
johnson     12     1222            The Tipping Point
johnson     11     1333            Anna Kerenina
johnson     12     1222            The Savage Detectives
johnson     11     1333            In Search of Lost Time

I'm thinking I should make a recordset for each teacher, and within that, another recordset for each course number. Within the course number recordset, I need the system to determine what the highest year number is - maybe store that in a variable? Then pull out every associated record so that if the teacher ordered 3 books the last time they taught that class (whether it was in 2013 or 2012 and so on), all three books display. I'm not sure I'm thinking of recordsets in the right way, though.

My SQL so far is basic and clearly doesn't work:

SELECT [All].Professor, [All].Course, Max([All].Year)
FROM [All]
GROUP BY [All].Professor, [All].Course;

Thanks for your help.
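For what it's worth, this can usually be done in a single query, without nesting one recordset per teacher: join the table back onto a grouped subquery that picks the highest year for each professor/course pair. The statement below is my sketch, not something from the post, and it assumes the table really is named [All] with the Professor, Course, Year and Title fields used in the posted SQL (Access SQL does not allow comments, so the explanation stays up here):

SELECT a.Professor, a.[Year], a.Course, a.Title
FROM [All] AS a
INNER JOIN (
    SELECT Professor, Course, MAX([Year]) AS MaxYear
    FROM [All]
    GROUP BY Professor, Course
) AS latest
    ON  a.Professor = latest.Professor
    AND a.Course = latest.Course
    AND a.[Year] = latest.MaxYear;

If this version of Access balks at the derived table, the inner SELECT can be saved as its own query and joined to [All] by name instead. In VBA the result can then be opened as a single recordset (for example with CurrentDb.OpenRecordset) and walked once while building the email, rather than looping nested recordsets per teacher and course.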

    Read the article

  • Calculate year for end date: PostgreSQL

    - by Dave Jarvis
Background

Users can pick dates using month/day selectors (shown in a screen shot that is not reproduced here). Any starting month/day and ending month/day combinations are valid, such as:

Mar 22 to Jun 22
Dec 1 to Feb 28

The second combination is difficult (I call it the "tricky date scenario") because the ending month/day falls earlier in the calendar than the starting month/day, so the end date belongs to the year after the start date. That is to say, for the year 1900 (also selected in the screen shot), the full dates would be:

Dec 22, 1900 to Feb 28, 1901
Dec 22, 1901 to Feb 28, 1902
...
Dec 22, 2007 to Feb 28, 2008
Dec 22, 2008 to Feb 28, 2009

Problem

Writing a SQL statement that selects values from a table with dates that fall between the start month/day and end month/day, regardless of how the start and end days are selected. In other words, this is a year-wrapping problem.

Inputs

The query receives as parameters:

Year1, Year2: The full range of years, independent of month/day combination.
Month1, Day1: The starting day within the year to gather data.
Month2, Day2: The ending day within the year (or the next year) to gather data.

Previous Attempt

Consider the following MySQL code (that worked):

end_year = start_year + greatest( -1 *
  sign(
    datediff(
      date( concat_ws('-', year, end_month, end_day ) ),
      date( concat_ws('-', year, start_month, start_day ) )
    )
  ), 0 )

How it works, with respect to the tricky date scenario:

1. Create two dates in the current year. The first date is Dec 22, 1900 and the second date is Feb 28, 1900.
2. Count the difference, in days, between the two dates.
3. If the result is negative, the year for the second date must be incremented by 1. In this case: add 1 to the current year, create a new end date (Feb 28, 1901), and check whether the date range for the data falls between the start and the calculated end date.
4. If the result is positive, the dates have been provided in chronological order and nothing special needs to be done.

This worked in MySQL because the difference in dates could be positive or negative. In PostgreSQL, the equivalent functionality always returns a positive number, regardless of the dates' relative chronological order.

Question

How should the following (broken) code be rewritten for PostgreSQL to take into consideration the relative chronological order of the starting and ending month/day pairs (with respect to an annual temporal displacement)?

SELECT m.amount
FROM measurement m
WHERE (extract(MONTH FROM m.taken) >= month1 AND extract(DAY FROM m.taken) >= day1)
  AND (extract(MONTH FROM m.taken) <= month2 AND extract(DAY FROM m.taken) <= day2)

Any thoughts, comments, or questions?

(The dates are pre-parsed into MM/DD format in PHP. My preference is for a pure PostgreSQL solution, but I am open to suggestions on what might make the problem simpler using PHP.)

Versions

PostgreSQL 8.4.4 and PHP 5.2.10
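One way to make the wrap explicit - this is my sketch, not from the post, and it assumes Month1/Day1/Month2/Day2 arrive as integer parameters as described under Inputs - is to compare (month, day) pairs with PostgreSQL's row-wise comparison and switch between a normal range test and an OR test depending on whether the requested range crosses the year boundary:

SELECT m.amount
FROM measurement m
WHERE CASE
        WHEN (month1, day1) <= (month2, day2) THEN
             -- normal range: the start month/day precedes the end month/day within one year
             (extract(MONTH FROM m.taken), extract(DAY FROM m.taken)) >= (month1, day1)
         AND (extract(MONTH FROM m.taken), extract(DAY FROM m.taken)) <= (month2, day2)
        ELSE
             -- wrapped range (e.g. Dec 22 to Feb 28): keep rows on/after the start OR on/before the end
             (extract(MONTH FROM m.taken), extract(DAY FROM m.taken)) >= (month1, day1)
          OR (extract(MONTH FROM m.taken), extract(DAY FROM m.taken)) <= (month2, day2)
      END;

The Year1/Year2 bounds still need their own predicate; note that in the wrapped branch the matching January/February rows logically belong to the season that started the previous December, which matters when that year filter is added.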

    Read the article

  • algorithm optimization: returning object and sub-objects from single SQL statement in PHP

    - by pocketfullofcheese
I have an object oriented PHP application. I have a simple hierarchy stored in an SQL table (Chapters and Authors that can be assigned to a chapter). I wrote the following method to fetch the chapters and the authors in a single query and then loop through the result, figuring out which rows belong to the same chapter and creating both Chapter objects and arrays of Author objects. However, I feel like this can be made a lot neater. Can someone help?

function getChaptersWithAuthors($monographId, $rangeInfo = null) {
    $result =& $this->retrieveRange(
        'SELECT mc.chapter_id, mc.monograph_id, mc.chapter_seq,
                ma.author_id, ma.monograph_id, mca.primary_contact, mca.seq,
                ma.first_name, ma.middle_name, ma.last_name,
                ma.affiliation, ma.country, ma.email, ma.url
         FROM monograph_chapters mc
         LEFT JOIN monograph_chapter_authors mca ON mc.chapter_id = mca.chapter_id
         LEFT JOIN monograph_authors ma ON ma.author_id = mca.author_id
         WHERE mc.monograph_id = ?
         ORDER BY mc.chapter_seq, mca.seq',
        $monographId, $rangeInfo
    );

    $chapterAuthorDao =& DAORegistry::getDAO('ChapterAuthorDAO');

    $chapters = array();
    $authors = array();

    while (!$result->EOF) {
        $row = $result->GetRowAssoc(false);

        // initialize $currentChapterId for the first row
        if ( !isset($currentChapterId) ) $currentChapterId = $row['chapter_id'];

        if ( $row['chapter_id'] != $currentChapterId ) {
            // we're on a new chapter's row: create a chapter from the previous one
            $chapter =& $this->_returnFromRow($prevRow);
            // set the authors with all the authors found so far
            $chapter->setAuthors($authors);
            // clear the authors array
            unset($authors);
            $authors = array();
            // add the chapter to the returner
            $chapters[$currentChapterId] =& $chapter;
            // set the current id for this row
            $currentChapterId = $row['chapter_id'];
        }

        // add every author to the authors array
        if ( $row['author_id'] ) $authors[$row['author_id']] =& $chapterAuthorDao->_returnFromRow($row);

        // keep a copy of the previous row for creating the chapter once we're on a new chapter row
        $prevRow = $row;

        $result->MoveNext();

        if ( $result->EOF ) {
            // The result set is at the end
            $chapter =& $this->_returnFromRow($row);
            // set the authors with all the authors found so far
            $chapter->setAuthors($authors);
            unset($authors);
            // add the chapter to the returner
            $chapters[$currentChapterId] =& $chapter;
        }
    }

    $result->Close();
    unset($result);

    return $chapters;
}

PS: the _returnFromRow methods simply construct a Chapter or Author object given the SQL row. If needed, I can post those methods here.
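If the awkward part is mainly the previous-row bookkeeping, one alternative - my suggestion, not something from the post - is to keep the object construction as it is but read the data as two ordered result sets instead of one joined one: chapters first, then all chapter/author rows for the same monograph. The second loop then only has to append each author to $chapters[$row['chapter_id']], and the chapter-boundary detection disappears. The two statements might look like this (same tables and placeholder convention as the original query):

-- 1) One row per chapter of the monograph
SELECT mc.chapter_id, mc.monograph_id, mc.chapter_seq
FROM monograph_chapters mc
WHERE mc.monograph_id = ?
ORDER BY mc.chapter_seq;

-- 2) All authors of those chapters, keyed by chapter_id
SELECT mca.chapter_id, ma.author_id, mca.primary_contact, mca.seq,
       ma.first_name, ma.middle_name, ma.last_name,
       ma.affiliation, ma.country, ma.email, ma.url
FROM monograph_chapter_authors mca
INNER JOIN monograph_authors ma ON ma.author_id = mca.author_id
INNER JOIN monograph_chapters mc ON mc.chapter_id = mca.chapter_id
WHERE mc.monograph_id = ?
ORDER BY mca.chapter_id, mca.seq;

Whether two round trips are acceptable depends on call volume; for a single monograph page the difference is usually negligible.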

    Read the article

  • Class Problem (c++ and prolog)

    - by Joshua Green
I am using the C++ interface to Prolog (the classes and methods of SWI-cpp.h). For working out a simple backtracking example where john likes mary, emma, and ashley:

likes(john, mary).
likes(john, emma).
likes(john, ashley).

I can just do:

{
    PlFrame fr;
    PlTermv av(2);
    av[0] = PlCompound("john");
    PlQuery q("likes", av);
    while (q.next_solution()) {
        cout << (char*)av[1] << endl;
    }
}

This works in a separate program, so the syntax is correct. But I am also trying to get this simple backtracking to work within a class:

class UserTaskProlog {
public:
    UserTaskProlog(ArRobot* r);
    ~UserTaskProlog();
protected:
    int cycles;
    char* argv[1];
    ArRobot* robot;
    void logTask();
};

This class works fine, with my cycles variable incrementing every robot cycle. However, when I run my main code, I get an Unhandled Exception error message:

UserTaskProlog::UserTaskProlog(ArRobot* r) : robotTaskFunc(this, &UserTaskProlog::logTask) {
    cycles = 0;
    PlEngine e(argv[0]);
    PlCall("consult('myFile.pl')");
    robot->addSensorInterpTask("UserTaskProlog", 50, &robotTaskFunc);
}

UserTaskProlog::~UserTaskProlog() {
    robot->remSensorInterpTask(&robotTaskFunc);  // Do I need a destructor here for pl?
}

void UserTaskProlog::logTask() {
    cycles++;
    cout << cycles;
    {
        PlFrame fr;
        PlTermv av(2);
        av[0] = PlCompound("john");
        PlQuery q("likes", av);
        while (q.next_solution()) {
            cout << (char*)av[1] << endl;
        }
    }
}

I have my opening and closing brackets for PlFrame. I have my frame, my query, etc. It's the exact same code that backtracks and prints out mary, emma, and ashley. What am I missing here that I get an error message?

Here is what I think the code should do: I expect mary, emma, and ashley to be printed out once every time cycles increments. However, the debugger opens the SWI-cpp.h file automatically and points to class PlFrame. What is it trying to tell me? I don't see anything wrong with my PlFrame declaration.

Thanks,

    Read the article

< Previous Page | 666 667 668 669 670 671 672 673 674 675 676 677  | Next Page >