Search Results

Search found 17537 results on 702 pages for 'doctrine query'.


  • Linq-to-XML explicit casting in a generic method

    - by vlad
    I've looked for a similar question, but the only one that was close didn't help me in the end. I have an XML file that looks like this: <Fields> <Field name="abc" value="2011-01-01" /> <Field name="xyz" value="" /> <Field name="tuv" value="123.456" /> </Fields> I'm trying to use Linq-to-XML to get the values from these fields. The values can be of type Decimal, DateTime, String and Int32. I was able to get the fields one by one using a relatively simple query. For example, I'm getting the 'value' from the field with the name 'abc' using the following: private DateTime GetValueFromAttribute(IEnumerable<XElement> fields, String attName) { return (from field in fields where field.Attribute("name").Value == "abc" select (DateTime)field.Attribute("value")).FirstOrDefault(); } This is placed in a separate function that simply returns this value, and everything works fine (since I know that there is only one element with the name attribute set to 'abc'). However, since I have to do this for decimals and integers and dates, I was wondering if I can make a generic function that works in all cases. This is where I got stuck. Here's what I have so far: private T GetValueFromAttribute<T>(IEnumerable<XElement> fields, String attName) { return (from field in fields where field.Attribute("name").Value == attName select (T)field.Attribute("value").Value).FirstOrDefault(); } This doesn't compile because it doesn't know how to convert from String to T. I tried boxing and unboxing (i.e. select (T) (Object) field.Attribute("value").Value), but that throws a runtime "Specified cast is not valid" exception as it's trying to convert the String to a DateTime, for instance. Is this possible in a generic function? Can I put a constraint on the generic function to make it work? Or do I have to have separate functions to take advantage of Linq-to-XML's explicit cast operators? (A sketch of one possible approach follows this entry.)

    Read the article
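
    A minimal sketch of one possible approach (not from the post; the class name is illustrative and the code assumes the attribute strings use invariant formatting, as in the sample XML above): read the attribute as a string and let Convert.ChangeType produce the Decimal, DateTime, Int32 or String, instead of relying on XAttribute's explicit cast operators.

        using System;
        using System.Collections.Generic;
        using System.Globalization;
        using System.Linq;
        using System.Xml.Linq;

        static class FieldReader
        {
            // Generic version: pull the attribute out as a string, then convert it
            // to the requested type via IConvertible.
            public static T GetValueFromAttribute<T>(IEnumerable<XElement> fields, string attName)
            {
                var raw = (from field in fields
                           where (string)field.Attribute("name") == attName
                           select (string)field.Attribute("value")).FirstOrDefault();

                if (string.IsNullOrEmpty(raw))
                    return default(T);   // empty values (like the "xyz" field) fall back to the default

                return (T)Convert.ChangeType(raw, typeof(T), CultureInfo.InvariantCulture);
            }
        }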

  • Best way to track the stages of a form across different controllers - $_GET or routing

    - by chrisj
    Hi, I am in a bit of a dilemma about how best to handle the following situation. I have a long registration process on a site, where there are around 10 form sections to fill in. Some of these forms relate specifically to the user and their own personal data, while most of them relate to the user's pets - my current set up handles user-specific forms in a User_Controller (e.g. via methods like user/profile, user/household etc.), and similarly the pet-related forms are handled in a Pet_Controller (e.g. pet/health). Whether or not all of these methods should be combined into a single Registration_Controller, I'm not sure - I'm open to any advice on that. Anyway, my main issue is that I want to generate a progress bar which shows how far along in the registration process each user is. As the urls in each form section can potentially be mapping to different controllers, I'm trying to find a clean way to extract which stage a person is at in the overall process. I could just use the query string to pass a stage parameter with each request, e.g. user/profile?stage=1. Another way to do it potentially is to use routing - e.g. the urls for each section of the form could be set up to be registration/stage/1, registration/stage/2 - then I could just map these urls to the appropriate controller/method behind the scenes. If this makes any sense at all, does anyone have any advice for me?

    Read the article

  • Why does the WCF 3.5 REST Starter Kit do this?

    - by Brandon
    I am setting up a REST endpoint that looks like the following: [WebInvoke(Method = "POST", UriTemplate = "?format=json", BodyStyle = WebMessageBodyStyle.WrappedRequest, ResponseFormat = WebMessageFormat.Json)] and [WebInvoke(Method = "DELETE", UriTemplate = "?token={token}&format=json", ResponseFormat = WebMessageFormat.Json)] The above throws the following error: UriTemplateTable does not support '?format=json' and '?token={token}&format=json' since they are not equivalent, but cannot be disambiguated because they have equivalent paths and the same common literal values for the query string. See the documentation for UriTemplateTable for more detail. I am not an expert at WCF, but I would imagine that it should map first by the HTTP method and then by the URI template. It appears to be backwards. If both of my URI templates are: ?token={token}&format=json this works, because they are equivalent and it then appears to look at the HTTP method, where one is POST and the other is DELETE. Is REST supposed to work this way? Why are the URI template tables not being sorted first by HTTP method and then by URI template? This can cause some serious frustration when one HTTP method requires a parameter and another does not, or if I want to do optional parameters (e.g. if the 'format' parameter is not passed, default to XML). (A sketch of one workaround follows this entry.)

    Read the article
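
    A sketch of one workaround that mirrors what the poster already observed (equivalent templates dispatch by HTTP method): give both operations the same path-only template and read the optional query-string values at runtime. The service shape and parameter names here are assumptions, not the poster's code.

        using System.ServiceModel;
        using System.ServiceModel.Web;

        [ServiceContract]
        public class TokenService
        {
            [OperationContract]
            [WebInvoke(Method = "POST", UriTemplate = "items", ResponseFormat = WebMessageFormat.Json)]
            public void Create()
            {
                // Query parameters not named in the template are still available here,
                // so 'format' becomes optional for this operation.
                var query = WebOperationContext.Current.IncomingRequest.UriTemplateMatch.QueryParameters;
                string format = query["format"] ?? "xml";   // default to XML when not supplied
                // ... create the resource ...
            }

            [OperationContract]
            [WebInvoke(Method = "DELETE", UriTemplate = "items", ResponseFormat = WebMessageFormat.Json)]
            public void Delete()
            {
                var query = WebOperationContext.Current.IncomingRequest.UriTemplateMatch.QueryParameters;
                string token = query["token"];              // required by this operation's own logic
                string format = query["format"] ?? "xml";
                // ... delete the resource identified by token ...
            }
        }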

  • logic before dispatcher + controller?

    - by Spoonface
    I believe for a typical MVC web application the router / dispatcher routine is used to decide which controller is loaded, based primarily on the area requested in the url by the user. However, in addition to checking the url query string, I also like to use the dispatcher to check whether the user is currently logged in or not to decide which controller is loaded. For example, if they are logged in and request the login page, the dispatcher would load their account instead. But is this a fairly non-standard design? Would it violate MVC in any way? I only ask as the examples I've read through this weekend have had no major calculations performed before the dispatcher routine, and commonly check whether the user is logged in or not per controller, and then redirect where necessary. But to me it seems odd to redirect a logged-in user from the login area to the account area if you could just load the account controller in the first place. I hope I've explained my consternation well enough, but could anyone offer some details on how they handle logged-in users and similar session data?

    Read the article

  • IIS to SQL Server kerberos auth issues

    - by crosan
    We have a 3rd party product that allows some of our users to manipulate data in a database (on what we'll call SvrSQL) via a website on a separate server (SvrWeb). On SvrWeb, we have a specific, non-default website set up for this application, so instead of going to http://SvrWeb.company.com to get to the website we use http://application.company.com, which resolves to SvrWeb, and the host headers resolve to the correct website. There is also a specific application pool set up for this site which uses an Active Directory account identity we'll call "company\SrvWeb_iis". We're set up to allow delegation on this account and to allow it to impersonate another login, which we want it to do (we want this account to pass along the AD credentials of the person signed into the website to SQL Server instead of a service account). We also set up the SPNs for the SrvWeb_iis account via the following command: setspn -A HTTP/SrvWeb.company.com SrvWeb_iis The website pulls up, but the section of the website that makes the call to the database returns the message: Cannot execute database query. Login failed for user 'NT AUTHORITY\ANONYMOUS LOGON'. I thought we had the SPN information set up correctly, but when I check the security event log on SrvWeb I see entries for my logging in, and it seems to be using NTLM and not Kerberos: Logon Type: 3 Logon Process: NtLmSsp Authentication Package: NTLM Any ideas or articles that cover this setup in detail would be extremely appreciated! If it helps, we are using SQL Server 2005, and both the web and SQL servers are Windows 2003.

    Read the article

  • Hosting an Access DB

    - by Mitciv
    Hey, so I'm inexperienced in hosting DBs and I've always had the luxury of someone else getting the DB set up. I was going to help a friend out with getting a webpage set up; I've got experience in Asp.Net MVC, so I'm going with that. They want to set up a search page to query a DB and display the results. The question I have is about getting the DB set up and hosted. They currently just have the Access DB on a local computer. There is basically only one table that would need to be queried for the search. What is the best approach to getting this table/DB accessible? They would like to keep the main copy of the DB on the local machine, so copying the entire DB over to the hosted site would be time consuming; could just the one table needed be copied to the host? Should I try to convince them to make changes on the hosted DB and just make copies of that for their local machines? Any suggestions are welcome. Again, I'm a total noob when it comes to hosting databases. Thanks

    Read the article

  • Reporting Services 2005 - Parameter reliant on cascading parameters

    - by sHr0oMaN
    Good day. I have the following: In a SSRS 2005 report I have three report parameters: FinancialPeriodType ("Month" or "Week" in a DropDownList), FinancialPeriod (cascading DropDownList populated depending on first selection) and another parameter, OpeningBalance, of type float. The first two parameters are cascading, i.e. the first parameter is used by the query populating the second's available values. This works fine. What I'm attempting to do is default the value of OpeningBalance to a value from a dataset populated by a stored procedure which takes in the first two parameters. However, as soon as I select a value for the first parameter, I get the following error: An error occurred during report processing. The value for the report parameter 'OpeningBalance' is not valid for its type. I've tried setting the default of the second parameter to be a meaningful default (something like 200901) as well as defaulting the second parameter in the SQL stored procedure, with no effect. Using SQL Profiler I've noticed that selecting a value for the first parameter doesn't even execute the SQL used to obtain available values for the second parameter.

    Read the article

  • Php JSON Response Array

    - by Nick Kl
    I have this PHP code. As you can see, I query a MySQL database through a function, showallevents. I return the $result to the $event variable. I try to return all rows of the data that I take with mysql_fetch_assoc. I don't get a response even when I encode the $response variable. It returns null to all fields. Can anyone help me with what I am doing wrong? I had valid code but it was returning only 1 row of data, so I tried to make an associative array, but it seems I am failing. if ($tag == 'showallevents') { // Request type is show all events // show all events $event = $db->showallevents(); if ($event != false) { $data = array(); while($row = mysql_fetch_assoc($event)) { $data[] = array( $response["success"] = 1, $response["uid"] = $event["uid"], $response["event"]["date"] = $event["date"], $response["event"]["hours"] = $event["hours"], $response["event"]["store_name"] = $event["store_name"], $response["event"]["event_information"] = $event["event_information"], $response["event"]["event_type"] = $event["event_type"], $response["event"]["Phone"] = $event["Phone"], $response["event"]["address"] = $event["address"], $response["event"]["created_at"] = $event["created_at"], $response["event"]["updated_at"] = $event["updated_at"]); } echo json_encode($data); } else { // event not found // echo json with error = 1 $response["error"] = 1; $response["error_msg"] = "Events not found"; echo json_encode($response); } } else { echo "Access Denied"; } } ?>

    Read the article

  • How would you organize this in asp.net mvc?

    - by chobo
    I have an asp.net mvc 2.0 application that contains Areas/Modules like calendar, admin, etc... There may be cases where more than one area needs to access the same Repo, so I am not sure where to put the Data Access Layers and Repositories. First Option: Should I create Data Access Layer files (Linq to SQL in my case) with their accompanying Repositories for each area, so that each area only contains the tables and repositories needed by that area? The benefit is that everything needed to run that module is in one place, so it is more encapsulated (in my mind anyway). The downside is that I may have duplicate queries, because other modules may use the same query. Second Option: Would it be better to place the DAL and Repositories outside the Areas and treat them as global? The advantage is I won't have any duplicate queries, but I may be loading a lot of unnecessary queries and DAL tables up for certain modules. It is also more work to reuse or modify these modules for future projects (though the chance of reusing them is slim to none :)) Which option makes more sense? If someone has a better way I'd love to hear it. Thanks! (A sketch of the second option follows this entry.)

    Read the article
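
    A sketch of the second option under the stated assumption that the shared plumbing sits in its own project; the entity and repository names are invented. Each area's controllers depend on a small interface, the LINQ to SQL details live in one place, and a query is written once and reused by the Calendar and Admin areas alike.

        using System;
        using System.Collections.Generic;
        using System.Data.Linq;
        using System.Data.Linq.Mapping;
        using System.Linq;

        // Stand-in for a generated LINQ to SQL entity.
        [Table(Name = "Events")]
        public class CalendarEvent
        {
            [Column(IsPrimaryKey = true)] public int Id { get; set; }
            [Column] public DateTime Start { get; set; }
        }

        public interface ICalendarEventRepository
        {
            IList<CalendarEvent> GetUpcoming(DateTime from);
        }

        // Lives in the shared data project, outside any single area.
        public class CalendarEventRepository : ICalendarEventRepository
        {
            private readonly DataContext _db;
            public CalendarEventRepository(DataContext db) { _db = db; }

            public IList<CalendarEvent> GetUpcoming(DateTime from)
            {
                return _db.GetTable<CalendarEvent>()
                          .Where(e => e.Start >= from)
                          .ToList();
            }
        }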

  • Conversion failed when converting datetime from character string

    - by salvationishere
    I am developing a C# VS 2008 / SQL Server 2005 Express website application. I have tried some of the fixes for this problem, but my call stack differs from others', and those fixes did not solve my problem. What steps can I take to troubleshoot this? Here is my error: System.Data.SqlClient.SqlException was caught Message="Conversion failed when converting datetime from character string." Source=".Net SqlClient Data Provider" ErrorCode=-2146232060 LineNumber=10 Number=241 Procedure="AppendDataCT" Server="\\\\.\\pipe\\772EF469-84F1-43\\tsql\\query" State=1 StackTrace: at System.Data.SqlClient.SqlConnection.OnError(SqlException exception, Boolean breakConnection) at System.Data.SqlClient.SqlInternalConnection.OnError(SqlException exception, Boolean breakConnection) at System.Data.SqlClient.TdsParser.ThrowExceptionAndWarning(TdsParserStateObject stateObj) at System.Data.SqlClient.TdsParser.Run(RunBehavior runBehavior, SqlCommand cmdHandler, SqlDataReader dataStream, BulkCopySimpleResultSet bulkCopyHandler, TdsParserStateObject stateObj) at System.Data.SqlClient.SqlCommand.FinishExecuteReader(SqlDataReader ds, RunBehavior runBehavior, String resetOptionsString) at System.Data.SqlClient.SqlCommand.RunExecuteReaderTds(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, Boolean async) at System.Data.SqlClient.SqlCommand.RunExecuteReader(CommandBehavior cmdBehavior, RunBehavior runBehavior, Boolean returnStream, String method, DbAsyncResult result) at System.Data.SqlClient.SqlCommand.InternalExecuteNonQuery(DbAsyncResult result, String methodName, Boolean sendToPipe) at System.Data.SqlClient.SqlCommand.ExecuteNonQuery() at ADONET_namespace.ADONET_methods.AppendDataCT(DataTable dt, Dictionary`2 dic) in c:\Documents and Settings\Admin\My Documents\Visual Studio 2008\WebSites\Jerry\App_Code\ADONET methods.cs:line 102 And here is the related code. When I debugged this code, "dic" only looped through the 3 column names, but did not look into row values which are stored in "dt", the DataTable. (A sketch of one thing to check follows this entry.) public static string AppendDataCT(DataTable dt, Dictionary<string, string> dic) { if (dic.Count != 3) throw new ArgumentOutOfRangeException("dic can only have 3 parameters"); string connString = ConfigurationManager.ConnectionStrings["AW3_string"].ConnectionString; string errorMsg; try { using (SqlConnection conn2 = new SqlConnection(connString)) { using (SqlCommand cmd = conn2.CreateCommand()) { cmd.CommandText = "dbo.AppendDataCT"; cmd.CommandType = CommandType.StoredProcedure; cmd.Connection = conn2; foreach (string s in dic.Keys) { SqlParameter p = cmd.Parameters.AddWithValue(s, dic[s]); p.SqlDbType = SqlDbType.VarChar; } conn2.Open(); cmd.ExecuteNonQuery(); conn2.Close(); errorMsg = "The Person.ContactType table was successfully updated!"; } } }

    Read the article
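
    A hedged sketch of one thing to check (not the poster's code; the date detection is an assumption): sending every value as VarChar forces the stored procedure to convert a culture-specific string to datetime, which is a common source of this exact message. Passing date values as typed parameters sidesteps that conversion.

        using System;
        using System.Collections.Generic;
        using System.Data;
        using System.Data.SqlClient;
        using System.Globalization;

        static class AppendDataHelper
        {
            // Adds each dictionary entry as a parameter, using a typed DateTime
            // parameter whenever the value parses as an invariant-culture date.
            public static void AddParameters(SqlCommand cmd, Dictionary<string, string> dic)
            {
                foreach (var pair in dic)
                {
                    DateTime parsed;
                    if (DateTime.TryParse(pair.Value, CultureInfo.InvariantCulture,
                                          DateTimeStyles.None, out parsed))
                    {
                        cmd.Parameters.Add(pair.Key, SqlDbType.DateTime).Value = parsed;
                    }
                    else
                    {
                        cmd.Parameters.Add(pair.Key, SqlDbType.VarChar).Value = pair.Value;
                    }
                }
            }
        }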

  • Why do I get null objects in a many-to-many bag?

    - by Jim Geurts
    I have a bag defined for a many-to-many list: <class name="Author" table="Authors"> <id name="Id" column="AuthorId"> <generator class="identity" /> </id> <property name="Name" /> <bag name="Books" table="Author_Book_Map" where="IsDeleted=0" fetch="join"> <key column="AuthorId" /> <many-to-many class="Book" column="BookId" where="IsDeleted=0" /> </bag> </class> If I return all author objects using something like the following, I get what initially appear to be duplicate Author records: Session.Query<Author>().List<Author>() The extra author objects are created when an author is mapped to Book objects that have IsDeleted = 1 and IsDeleted = 0. Rather than creating one Author object with an enumerable that contains only the books with IsDeleted = 0, it will create two author objects. The first author object has a Books enumerable that contains books with IsDeleted = 0. The second author object will contain an enumerable of null book objects. Similarly, if an object only has one book map, and that map points to a book with IsDeleted = 1, then an author object is returned with a Books collection having one null object. I'm thinking part of the problem stems from the map table objects linking to rows that satisfy the where condition on the bag object but do not meet the many-to-many where condition. This is happening with NHibernate version 3.0.0.4980. Is this a configuration issue or something else? (A consumer-side workaround sketch follows this entry.)

    Read the article
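
    Not a fix for the mapping itself, just a consumer-side guard offered as an assumption: within one session NHibernate hands back the same Author instance for the same id, so the duplicate rows can be collapsed with Distinct(), and the null placeholders left by the filtered many-to-many rows can be skipped while enumerating Books. The type names are the ones from the question.

        // Materialize first, then collapse duplicates and ignore null book slots.
        var authors = Session.Query<Author>()
                             .ToList()      // one Author instance per id within the session
                             .Distinct()    // reference equality is enough here
                             .ToList();

        foreach (var author in authors)
        {
            var books = author.Books.Where(b => b != null).ToList();
            // ... use the filtered books ...
        }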

  • unexpected behaviour of object stored in web service Session

    - by draconis
    Hi. I'm using Session variables inside a web service to maintain state between successive method calls by an external application called QBWC. I set this up by decorating my web service methods with this attribute: [WebMethod(EnableSession = true)] I'm using the Session variable to store an instance of a custom object called QueueManager. The QueueManager has a property called ChangeQueue which looks like this: [Serializable] public class QueueManager { ... public Queue<QBChange> ChangeQueue { get; set; } ... where QBChange is a custom business object belonging to my web service. Now, every time I get a call to a method in my web service, I use this code to retrieve my QueueManager object and access my queue: QueueManager qm = (QueueManager)Session[ticket]; Then I remove an object from the queue using qm.dequeue(), and then I save the modified queue manager object (modified because it contains one less object in the queue) back to the Session variable, like so: Session[ticket] = qm; ready for the next web service method call using the same ticket. Now here's the thing: if I comment out this last line (Session[ticket] = qm;), then the web service behaves exactly the same way, reducing the size of the queue between method calls. Now why is that? The web service seems to be updating a class contained in serialized form in a Session variable without being asked to. Why would it do that? When I deserialize my QueueManager object, does the qm variable hold a reference to the serialized object inside the Session[ticket] variable? This seems very unlikely. (A short note with a snippet follows this entry.)

    Read the article
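
    A short note with an illustrative snippet, assuming the default session configuration: with in-process session state (mode="InProc"), Session[key] holds a live object reference rather than a serialized copy, so the [Serializable] attribute is never exercised and mutating the retrieved object mutates what the session "contains". The method below is hypothetical and reuses the QueueManager type from the question.

        [WebMethod(EnableSession = true)]
        public void NextItem(string ticket)
        {
            // With InProc session state this is the same instance that was stored
            // earlier, not a deserialized copy...
            QueueManager qm = (QueueManager)Session[ticket];

            // ...so this change is already visible to the next request, even if the
            // line below never runs again.
            qm.ChangeQueue.Dequeue();

            // Session[ticket] = qm;   // harmless and explicit, but not what persists
                                       // the change under InProc
        }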

  • XPath: limit scope of result set

    - by Laramie
    Given the XML <a> <c> <b id="1" value="noob"/> </c> <b id="2" value="tube"/> <a> <c> <b id="3" value="foo"/> </c> <b id="4" value="goo"/> <b id="5" value="noob"/> <a> <b id="6" value="near"/> <b id="7" value="bar"/> </a> </a> </a> and the XPath 1.0 query //b[@id=2]/ancestor::a[1]//b[@value="noob"] is there some way to limit the result set to the <b> elements that are ONLY the children of the immediate <a> element of the start node (//b[@id=2])? For example, the XPath above returns both node ids 1 and 5. The goal is to limit the result to just node id=1 since it is the only @value="noob" element in the same <c> group as our start node (//b[@id=2]). In English, "Starting at a node whose id is equal to 2, find all the elements whose value is "noob" that are descendants of the immediate parent c element without passing through another c element". (A sketch of one candidate technique follows this entry.)

    Read the article
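
    A sketch of one XPath 1.0 technique that appears to fit, offered as an assumption to verify against the real document: since XPath 1.0 has no variables, "this candidate's nearest <a> is the same node as the start node's nearest <a>" can be tested by checking that the union of the two ancestors contains exactly one node. Shown here through LINQ to XML's XPathSelectElements; the file name is a placeholder.

        using System;
        using System.Xml.Linq;
        using System.Xml.XPath;

        class XPathScopeDemo
        {
            static void Main()
            {
                var doc = XDocument.Load("sample.xml");   // the XML from the question

                // Keep only the "noob" elements whose nearest <a> ancestor is the
                // same node as the start node's nearest <a> ancestor.
                var matches = doc.XPathSelectElements(
                    "//b[@id=2]/ancestor::a[1]//b[@value='noob']" +
                    "[count(ancestor::a[1] | //b[@id=2]/ancestor::a[1]) = 1]");

                foreach (var b in matches)
                    Console.WriteLine((string)b.Attribute("id"));   // prints 1 for the sample above
            }
        }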

  • How do I implement Hibernate Pagination using a cursor (so the results stay consistent, despite new records being added)?

    - by hunterae
    Hey all, Is there any way to maintain a database cursor using Hibernate between web requests? Basically, I'm trying to implement pagination, but the data that is being paged is constantly changing (i.e. new records are added into the database). We are trying to set it up such that when you do your initial search (returning a maximum of 5000 results), and you page through the results, those same records always appear on the same page (i.e. we're not continuously running the query each time the next and previous page buttons are clicked). The way we're currently implementing this is by merely selecting 5000 (at most) primary keys from the table we're paging, storing those keys in memory, and then just using 20 primary keys at a time to fetch their details from the database. However, we want to get away from having to store these keys in memory and would much prefer a database cursor that we just keep going back to, moving backwards and forwards over the cursor to generate pages. I tried doing this with Hibernate's ScrollableResults but found that calling methods like next() and previous() causes an exception if you are within a different web request / Hibernate session (no surprise there). Is there any way to reattach a ScrollableResults object to a Session, much the same way you would reattach a detached database object to make it persistent? Are there any other approaches to implement this data paging with consistent paging results without caching the primary keys?

    Read the article

  • How can I make hundreds of simultaneously running processes communicate with a database through one

    - by Olfan
    Long story short: How can I make hundreds of simultaneously running processes communicate with a database through one or few permanent sessions? The whole story: I once built a number crunching engine that handles vast amounts of large data files by forking off one child after another, giving each a small number of files to work on. File locking, progress monitoring and result propagation happen in an Oracle database which all (sub-)processes access at various times using an application-specific module which encapsulates DBI. This worked well at first, but now with higher volumes of input data, the number of database sessions (one per child, and they can be very short-lived) constantly being opened and closed is becoming an issue. I now want to centralise database access so that there are only one or few fixed database sessions which handle all database access for all the (sub-)processes. The presence of the database abstraction module should make the changes easy because the function calls in the worker instances can stay the same. My problem is that I cannot think of a suitable way to enhance said module in order to establish communication between all the processes and the database connector(s). I thought of message queueing, but couldn't come up with a way of connecting a large herd of requestors with one or few database connectors in a way so that bidirectional communication is possible (for collecting the query result). An asynchronous approach could help here in that all requests are written to the same queue and the database connector servicing the request will "call back" to submit the result. But my mind fails to generate an image clear enough that I can paint it into code. Threading instead of forking might have given me an easier start, but this would now require massive changes to the code base that I'm not prepared to do to a live system. The more I think of it, the more the base idea looks like a pre-forked web server to me, only that it serves database queries instead of web pages. Any ideas on what to dig into, and where? Sample (pseudo) code to inspire me, links to possibly related articles, ready solutions on CPAN maybe?

    Read the article

  • How do I work with a CTE? There is an error related to the anchor.

    - by Shantanu Gupta
    I am creating a hierarchy representation of a column, but an error occurs. The details are: Msg 240, Level 16, State 1, Line 1 - Types don't match between the anchor and the recursive part in column "DISPLAY" of recursive query "CTE". I know there is some typecasting error, but I don't know how to remove the error. Please don't just sort out my error; I need an explanation of why and when this error occurs. I am trying to sort the table on the basis of a sort column that I am introducing. I want to add '-' at every level and sort accordingly. Please help. WITH CTE (PK_CATEGORY_ID, [DESCRIPTION], FK_CATEGORY_ID, DISPLAY, SORT, DEPTH) AS ( SELECT PK_CATEGORY_ID, [DESCRIPTION], FK_CATEGORY_ID, '-' AS DISPLAY, '--' AS SORT, 0 AS DEPTH FROM dbo.L_CATEGORY_TYPE WHERE FK_CATEGORY_ID IS NULL UNION ALL SELECT T.PK_CATEGORY_ID, T.[DESCRIPTION], T.FK_CATEGORY_ID, CAST(DISPLAY+T.[DESCRIPTION] AS VARCHAR(1000)), '--' AS SORT, C.DEPTH +1 FROM dbo.L_CATEGORY_TYPE T JOIN CTE C ON C.PK_CATEGORY_ID = T.FK_CATEGORY_ID --SELECT T.PK_CATEGORY_ID, C.SORT+T.[DESCRIPTION], T.FK_CATEGORY_ID --, CAST('--' + C.SORT AS VARCHAR(1000)) AS SORT, CAST(DEPTH +1 AS INT) AS DEPTH --FROM dbo.L_CATEGORY_TYPE T JOIN CTE C ON C.FK_CATEGORY_ID = T.PK_CATEGORY_ID ) SELECT PK_CATEGORY_ID, [DESCRIPTION], FK_CATEGORY_ID, DISPLAY, SORT, DEPTH FROM CTE ORDER BY SORT

    Read the article

  • managing classes when everything is relative to a user in nhibernate (orm)

    - by Schotime
    Firstly, I have three entities: Users, Roles, and Items. A user can have multiple Roles. An item gets assigned to one or more roles. Therefore a user will have access to a distinct set of items. Now there are a few ways I can see this working. (1) There is a Collection on Users which has Roles via a many-to-many assoc. Then each Role in this collection will have its own collection of Items. So for each user I would have to get the User (using nhib and fetch the roles and items with it), then either do a SelectMany on the Items in each Role to get all the Items for the user or do a couple of foreach's to port the data to a view or dto model. (2) Create a db trigger to automatically insert into another table that just has the relationship between user and items, so that on my User entity I only have an Items collection which has all the items assigned to me. (3) Some other way that I can't think of yet, because I'm new to NHibernate. Now I know that the trigger doesn't feel right, but I'm not sure how to do this. We also have some hierarchy later where a user may be in charge of a group of users. If anyone could shed some light on how they go about these scenarios in NHibernate or another ORM that would be great, or point me in a direction. I know that in the past you would have to enter all combinations into a table so that the query worked, but when you know SQL it's not too bad. If you need any other info then let me know. Cheers. (A sketch of the first option follows this entry.)

    Read the article
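
    A sketch of the first option (derive the distinct item set from the Roles collections instead of maintaining a trigger-filled table); the class shapes are assumptions based on the description, not real mappings:

        using System.Collections.Generic;
        using System.Linq;

        public class Item { public virtual int Id { get; set; } }

        public class Role
        {
            public virtual int Id { get; set; }
            public virtual IList<Item> Items { get; set; }
        }

        public class User
        {
            public virtual int Id { get; set; }
            public virtual IList<Role> Roles { get; set; }

            // Distinct items reachable through any of the user's roles. Within one
            // NHibernate session the same Item id resolves to the same instance,
            // so Distinct() with reference equality is sufficient here.
            public virtual IEnumerable<Item> Items
            {
                get { return Roles.SelectMany(r => r.Items).Distinct(); }
            }
        }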

  • Can I add properties to a TYPO3 Extbase domain model object?

    - by The Newbie Qs
    I want to store a username in a coupon object; each coupon object already has the uid of the user who created it. I can loop over the coupon objects and read the associated usernames from fe_users, but how will I then save those usernames into the coupons so that, when they are passed to the template, the usernames can be read like coupon.username (or in some other easy way), so each username will appear on the page with the right coupon as they are all printed out in a table? If I were doing basic PHP instead of TYPO3 I would just define a query, but what is the TYPO3 v4.5 way? My code so far, which dies on the line where I try to assign the new property, creatorname, to the $coup object: public function listAction() { $coupons = $this->couponRepository->findAll(); // @var Tx_Extbase_Domain_Repository_FrontendUserRepository $userRepository */ $userRepository = $this->objectManager->get("Tx_Extbase_Domain_Repository_FrontendUserRepository"); foreach ($coupons as $coup) { echo '<br />test '.$coup->getCreator(); echo '<br />count = '.$userRepository->countAll().'<br />'; $newObject = $userRepository->findByUid( intval($coup->getCreator())); //var_dump($newObject); var_dump($coup); echo '<br />getUsername '.$newObject->getUsername() ; $coup['creatorname'] = $newObject->getUsername(); echo '<br />creatorname '.$coup['creatorname'] ; } $this->view->assign('coupons', $coupons); }

    Read the article

  • Having a problem inserting into database

    - by neo skosana
    I have a stored procedure: CREATE PROCEDURE busi_reg(IN instruc VARCHAR(10), IN tble VARCHAR(20), IN busName VARCHAR(50), IN busCateg VARCHAR(100), IN busType VARCHAR(50), IN phne VARCHAR(20), IN addrs VARCHAR(200), IN cty VARCHAR(50), IN prvnce VARCHAR(50), IN pstCde VARCHAR(10), IN nm VARCHAR(20), IN surname VARCHAR(20), IN eml VARCHAR(50), IN pswd VARCHAR(20), IN srce VARCHAR(50), IN refr VARCHAR(50), IN sess VARCHAR(50)) BEGIN INSERT INTO b_register SET business_name = busName, business_category = busCateg, business_type = busType, phone = phne, address = addrs, city = cty, province = prvnce, postal_code = pstCde, firstname = nm, lastname = surname, email = eml, password = pswd, source = srce, ref_no = refr; END; This is my php script: $busName = $_POST['bus_name']; $busCateg = $_POST['bus_cat']; $busType = $_POST['bus_type']; $phne = $_POST['phone']; $addrs = $_POST['address']; $cty = $_POST['city']; $prvnce = $_POST['province']; $pstCde = $_POST['postCode']; $nm = $_POST['name']; $surname = $_POST['lname']; $eml = $_POST['email']; $srce = $_POST['source']; $ref = $_POST['ref_no']; $result2 = $db->query("CALL busi_reg('$instruc', '$tble', '$busName', '$busCateg', '$busType', '$phne', '$addrs', '$cty', '$prvnce', '$pstCde', '$nm', '$surname', '$eml', '$pswd', '$srce', '$refr', '')"); if($result) { echo "Data has been saved"; } else { printf("Error: %s\n",$db->error); } Now the error that I get: Commands out of sync; you can't run this command now

    Read the article

  • Silverlight 4 webclient authentication - anyone have this working yet?

    - by Toran Billups
    So one of the best parts about the new Silverlight 4 beta is that they finally implemented the big missing feature of the networking stack - Network Credentials! In the below I have a working request setup, but for some reason I get a "security error" when the request comes back - is this because twitter.com rejected my API call, or something that I'm missing in code? It might be good to point out that when I watch this code execute via Fiddler, it shows that the cross-domain XML file is pulled down successfully, but that is the last request shown by Fiddler ... public void RequestTimelineFromTwitterAPI() { WebRequest.RegisterPrefix("https://", System.Net.Browser.WebRequestCreator.ClientHttp); WebClient myService = new WebClient(); myService.AllowReadStreamBuffering = true; myService.UseDefaultCredentials = false; myService.Credentials = new NetworkCredential("username", "password"); myService.UseDefaultCredentials = false; myService.OpenReadCompleted += new OpenReadCompletedEventHandler(TimelineRequestCompleted); myService.OpenReadAsync(new Uri("https://twitter.com/statuses/friends_timeline.xml")); } public void TimelineRequestCompleted(object sender, System.Net.OpenReadCompletedEventArgs e) { //anytime I query for e.Result I get a security error }

    Read the article

  • Approach for caching data from data logger

    - by filip-fku
    Greetings, I've been working on a C#.NET app that interacts with a data logger. The user can query and obtain logs for a specified time period, and view plots of the data. Typically a new data log is created every minute and stores a measurement for a few parameters. To get meaningful information out of the logger, a reasonable number of logs need to be acquired - data for at least a few days. The hardware interface is a UART to USB module on the device, which restricts transfers to a maximum of about 30 logs/second. This becomes quite slow when reading in the data acquired over a number of days/weeks. What I would like to do is improve the perceived performance for the user. I realize that with the hardware speed limitation the user will have to wait for the full download cycle at least the first time they acquire a larger set of data. My goal is to cache all data seen by the app, so that it can be obtained faster if ever requested again. The approach I have been considering is to use a light database, like SqlServerCe, that can store the data logs as they are received. I am then hoping to first search the cache prior to querying the device for logs. The cache would be updated with any logs obtained by the request that were not already cached. Finally, my question: would you consider this to be a good approach? Are there any better alternatives you can think of? I've tried to search SO and Google for reinforcement of the idea, but I mostly run into discussions of web request/content caching. Thanks for any feedback! (A sketch of the cache-first flow follows this entry.)

    Read the article
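
    A sketch of the cache-first flow described above; every type and member name is an assumption standing in for the app's real device and SQL Server CE access code. The point is only the shape of the logic: serve what the local store already holds, pull just the missing tail of the window from the logger, and add it to the cache for next time.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        public class LogEntry
        {
            public DateTime Timestamp { get; set; }
            public double Value { get; set; }
        }

        public interface ILogCache                 // e.g. backed by a SQL Server CE table
        {
            IList<LogEntry> GetRange(DateTime from, DateTime to);
            void InsertRange(IEnumerable<LogEntry> entries);
        }

        public interface ILoggerDevice             // the slow UART/USB interface
        {
            IList<LogEntry> ReadLogs(DateTime from, DateTime to);
        }

        public class LogProvider
        {
            private readonly ILogCache _cache;
            private readonly ILoggerDevice _device;

            public LogProvider(ILogCache cache, ILoggerDevice device)
            {
                _cache = cache;
                _device = device;
            }

            public IList<LogEntry> GetLogs(DateTime from, DateTime to)
            {
                var cached = _cache.GetRange(from, to);
                var result = cached.ToList();

                // Simplification for the sketch: the cache is assumed contiguous,
                // so only the newest part of the requested window can be missing.
                DateTime? newestCached = cached.Count > 0
                    ? cached.Max(l => (DateTime?)l.Timestamp)
                    : null;

                if (newestCached == null || newestCached < to)
                {
                    var fresh = _device.ReadLogs(newestCached ?? from, to)    // slow transfer
                                       .Where(l => newestCached == null || l.Timestamp > newestCached)
                                       .ToList();
                    _cache.InsertRange(fresh);
                    result.AddRange(fresh);
                }

                return result.OrderBy(l => l.Timestamp).ToList();
            }
        }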

  • Oracle user-defined aggregate function for varray of varchar

    - by baju
    I am trying to write an aggregate function for the varray and I get this error code when I'm trying to use it with data from the DB: ORA-00600 internal error code, arguments: [kodpunp1], [], [], [], [], [], [], [], [], [], [], [] [koxsihread1], [0], [3989], [45778], [], [], [], [], [], [], [], [] The code of the function is really simple (in fact it does nothing): create or replace TYPE "TEST_VECTOR" as varray(10) of varchar(20) ALTER TYPE "TEST_VECTOR" MODIFY LIMIT 4000 CASCADE create or replace type Test as object( lastVector TEST_VECTOR, STATIC FUNCTION ODCIAggregateInitialize(sctx in out Test) return number, MEMBER FUNCTION ODCIAggregateIterate(self in out Test, value in TEST_VECTOR) return number, MEMBER FUNCTION ODCIAggregateMerge(self IN OUT Test, ctx2 IN Test) return number, MEMBER FUNCTION ODCIAggregateTerminate(self IN Test, returnValue OUT TEST_VECTOR, flags IN number) return number ); create or replace type body Test is STATIC FUNCTION ODCIAggregateInitialize(sctx in out Test) return number is begin sctx := Test(TEST_VECTOR()); return ODCIConst.Success; end; MEMBER FUNCTION ODCIAggregateIterate(self in out Test, value in TEST_VECTOR) return number is begin self.lastVector := value; return ODCIConst.Success; end; MEMBER FUNCTION ODCIAggregateMerge(self IN OUT Test, ctx2 IN Test) return number is begin return ODCIConst.Success; end; MEMBER FUNCTION ODCIAggregateTerminate(self IN Test, returnValue OUT TEST_VECTOR, flags IN number) return number is begin returnValue := self.lastVector; return ODCIConst.Success; end; end; create or replace FUNCTION test_fn (input TEST_VECTOR) RETURN TEST_VECTOR PARALLEL_ENABLE AGGREGATE USING Test; Next I create some test data: create table t1_test_table( t1_id number not null, t1_value TEST_VECTOR not null, Constraint PRIMARY_KEY_1 PRIMARY KEY (t1_id) ) The next step is to put some data into the table: insert into t1_test_table (t1_id,t1_value) values (1,TEST_VECTOR('x','y','z')) Now everything is prepared to perform queries: Select test_fn(TEST_VECTOR('y','x')) from dual The query above works well. Select test_fn(t1_value) from t1_test_table where t1_id = 1 The Oracle DBMS version I use is 11.2.0.3.0. Has anyone tried to do such a thing? What could be the reason that it does not work, and how can I solve it? Thanks in advance for help.

    Read the article

  • Is it a bad idea for the same object to have different side effects after a method call?

    - by Nathan W
    Hi all, I'm having a bit of a design issue (again). Say I have this ButtonPad object: now this object is a wrapper over one in a COM object. At the moment it has a method on it called CreateInto(IComObject). Now, to make a new button pad in the COM object, you do: ButtonPad pad = new ButtonPad(); pad.Title = "Hello"; // Set some more properties. pad.CreateInto(Cominstance); The CreateInto method will execute the right commands to build the button pad in the COM object. After it has been created, any calls against it are forwarded to the underlying object, so: pad.Title = "New title"; will call the COM object to set the title and also set the internal title variable. Basically, any calls before the CreateInto method only affect the .NET object; anything after has the side effect of calling the COM object as well. I'm not very good at sequence diagrams, but here is my attempt to explain what's going on: This doesn't feel good to me; it feels like I'm lying to the user about what the button pad does. I was going to have an object called WrappedButtonPad, which is returned from CreateInto, and the user could make calls against that to make changes to the COM object, but I feel having two objects that almost do the same thing but only differ by name might be even worse. Are these valid designs, or am I right to be worried? How else would you handle an object that can create and query a COM object? (A sketch of the WrappedButtonPad idea follows this entry.)

    Read the article
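
    A sketch of the WrappedButtonPad alternative mentioned above, with the second type framed as the "attached" pad rather than a near-duplicate, so each object keeps a single behaviour for its whole lifetime. IComObject's members are invented for illustration; treat the whole thing as an assumption about the real API.

        // Detached pad: plain .NET state, no COM side effects.
        public class ButtonPad
        {
            public string Title { get; set; }

            // Builds the pad in the COM object and hands back an attached wrapper;
            // this instance stays a simple, local object.
            public AttachedButtonPad CreateInto(IComObject com)
            {
                int handle = com.CreateButtonPad(Title);   // invented COM call
                return new AttachedButtonPad(com, handle, Title);
            }
        }

        // Attached pad: every change is forwarded to the COM object, always.
        public class AttachedButtonPad
        {
            private readonly IComObject _com;
            private readonly int _handle;
            private string _title;

            public AttachedButtonPad(IComObject com, int handle, string title)
            {
                _com = com;
                _handle = handle;
                _title = title;
            }

            public string Title
            {
                get { return _title; }
                set { _title = value; _com.SetButtonPadTitle(_handle, value); }
            }
        }

        // Invented for the sketch; the real COM interface will differ.
        public interface IComObject
        {
            int CreateButtonPad(string title);
            void SetButtonPadTitle(int handle, string title);
        }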

  • How can I make Access think there is a primary key

    - by user3692757
    I have a table and I'm trying to join it with another table, but it doesn't have a distinct primary key. The two tables do share similarities, “Acct” and “Location”. If I could concatenate “Acct&Location” it would become a primary key, but Access won’t let me make a primary key from a calculation. I provided a small sample below. Each hospital has an “Acct”, but the “Acct” will show up once for each “Location”. How can I join these in a relationship? I connected the two in a relationship and tried to “Enforce Referential Integrity”, but it indicated “No unique index found for the referenced field of the primary key”. Also, if I run a “Find UnMatched Query” it doesn't find anything. I think it's because I can't make it realize that, in combination, “Acct” and “Location” can be perceived as primary keys when used in conjunction with each other. Sample rows (Acct, Location): (1, ABI), (2, ABI), (3, ABI), (1, NHO), (2, NHO), (3, NHO), (1, NTX), (2, NTX), (3, NTX). I tried to load an image to illustrate it better, but I haven't made enough posts.

    Read the article

  • Advice on Linq to SQL mapping object design

    - by fearofawhackplanet
    I hope the title and following text are clear; I'm not very familiar with the correct terms, so please correct me if I get anything wrong. I'm using the Linq ORM for the first time and am wondering how to address the following. Say I have two DB tables: User (Id, Name) and Phone (Id, UserId, Model). The Linq code generator produces a bunch of entity classes. I then write my own classes and interfaces which wrap these Linq classes: class DatabaseUser : IUser { public DatabaseUser(User user) { _user = user; } public Guid Id { get { return _user.Id; } } ... etc } So far so good. Now it's easy enough to find a user's phones from Phones.Where(p => p.User = user), but surely consumers of the API shouldn't need to be writing their own Linq queries to get at data, so I should wrap this query in a function or property somewhere. So the question is, in this example, would you add a Phones property to IUser or not? In other words, should my interface specifically be modelling my database objects (in which case Phones doesn't belong in IUser), or is it actually simply providing a set of functions and properties which are conceptually associated with a User (in which case it does)? There seem to be drawbacks to both views, but I'm wondering if there is a standard approach to the problem. Or just any general words of wisdom you could share. My first thought was to use extension methods, but in fact that doesn't work in this case. (A sketch of the second reading follows this entry.)

    Read the article
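
    A sketch of the second reading (IUser as a conceptual API rather than a table mirror), in which the wrapper exposes Phones and keeps the LINQ query out of consumers' hands. The entity stand-ins and the UserId property are assumptions based on the table layout in the question.

        using System;
        using System.Collections.Generic;
        using System.Linq;

        // Stand-ins for the generated LINQ to SQL entity classes.
        public class User  { public Guid Id { get; set; } public string Name { get; set; } }
        public class Phone { public Guid Id { get; set; } public Guid UserId { get; set; } public string Model { get; set; } }

        public interface IUser
        {
            Guid Id { get; }
            string Name { get; }
            IEnumerable<Phone> Phones { get; }
        }

        class DatabaseUser : IUser
        {
            private readonly User _user;
            private readonly IQueryable<Phone> _phones;   // e.g. dataContext.Phones

            public DatabaseUser(User user, IQueryable<Phone> phones)
            {
                _user = user;
                _phones = phones;
            }

            public Guid Id { get { return _user.Id; } }
            public string Name { get { return _user.Name; } }

            // The query lives here, so consumers never write it themselves.
            public IEnumerable<Phone> Phones
            {
                get { return _phones.Where(p => p.UserId == _user.Id); }
            }
        }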
