Search Results

Search found 18695 results on 748 pages for 'query manipulation'.


  • Oracle Date Format Convert Hour-Minute to Interval and Disregard Year-Month-Day

    - by dlite922
    I need to compare an event's halfway midpoint against a start and stop time of day. Right now I'm converting the dates you see on the right to HH:MM, and the comparison works until midnight. The query says: WHERE half BETWEEN pStart AND pStop. As you can see below, pStart and pStop carry January 1st 2000 dates, because the year, month and day are not important to me.

    Valid data:

        +-------+--------+-------+---------------------+---------------------+---------------------+
        | half  | pStart | pStop | half2               | pStart2             | pStop2              |
        +-------+--------+-------+---------------------+---------------------+---------------------+
        | 19:00 | 19:00  | 23:00 | 2012-11-04 19:00:00 | 2000-01-01 19:00:00 | 2000-01-01 23:00:00 |
        | 20:00 | 19:00  | 23:00 | 2012-11-04 20:00:00 | 2000-01-01 19:00:00 | 2000-01-01 23:00:00 |
        | 21:00 | 19:00  | 23:00 | 2012-11-04 21:00:00 | 2000-01-01 19:00:00 | 2000-01-01 23:00:00 |
        | 23:00 | 20:00  | 23:00 | 2012-11-05 23:00:00 | 2000-01-01 20:00:00 | 2000-01-01 23:00:00 |
        +-------+--------+-------+---------------------+---------------------+---------------------+

    Now observe what happens when pStop is midnight or later. Valid data that breaks it:

        +-------+--------+-------+---------------------+---------------------+---------------------+
        | half  | pStart | pStop | half2               | pStart2             | pStop2              |
        +-------+--------+-------+---------------------+---------------------+---------------------+
        | 23:00 | 22:00  | 00:00 | 2012-11-04 23:00:00 | 2000-01-01 22:00:00 | 2000-01-01 00:00:00 |
        | 23:30 | 23:00  | 02:00 | 2012-11-05 23:30:00 | 2000-01-01 23:00:00 | 2000-01-01 02:00:00 |
        +-------+--------+-------+---------------------+---------------------+---------------------+

    Thus my WHERE clause translates to WHERE 19:00 BETWEEN 22:00 AND 00:00, which returns false, so I miss those two correct rows above.

    Question: is there a way to express those dates as an integer or interval so that half BETWEEN pStart AND pStop is correct? I thought about adding 24 when pStop is less than pStart, to turn 00:00 into 24:00, but I don't know an easy way to do that without long string concatenations and number conversions. This would solve the problem, because the difference between pStart and pStop will never be longer than 6 hours.

    Note: the query is much more complex and has other irrelevant date calculations, but the results are shown above. DATE_FORMAT(%H:%i) is applied to the first three columns and no formatting to the last three.

    Thanks for your help.
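
    One way to implement the "add 24 when the window wraps" idea without string juggling is to compare minutes since midnight and shift both the stop time and the midpoint by 1440 when pStop is earlier than pStart. A minimal sketch under the assumption that half, pStart and pStop are available as DATE values on a hypothetical events table (this only illustrates the wrap handling, not the full query):

        SELECT *
          FROM events e
         WHERE
               -- midpoint in minutes since midnight, pushed past 24:00 when the
               -- window wraps and the midpoint falls after midnight
               (CASE WHEN e.pStop < e.pStart
                      AND (e.half - TRUNC(e.half)) * 1440 < (e.pStart - TRUNC(e.pStart)) * 1440
                     THEN (e.half - TRUNC(e.half)) * 1440 + 1440
                     ELSE (e.half - TRUNC(e.half)) * 1440
                END)
               BETWEEN (e.pStart - TRUNC(e.pStart)) * 1440
                   AND (CASE WHEN e.pStop < e.pStart
                             THEN (e.pStop - TRUNC(e.pStop)) * 1440 + 1440   -- 00:00 becomes 24:00
                             ELSE (e.pStop - TRUNC(e.pStop)) * 1440
                        END);

    In Oracle, date - TRUNC(date) yields a fraction of a day, so multiplying by 1440 gives minutes since midnight; because the window is never longer than 6 hours, shifting by one day is enough.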

    Read the article

  • How to mix Grammar (Rules) & Dictation (Free speech) with SpeechRecognizer in C#

    - by Lee Englestone
    I really like Microsoft's latest speech recognition (and SpeechSynthesis) offerings.

    http://msdn.microsoft.com/en-us/library/ms554855.aspx
    http://estellasays.blogspot.com/2009/04/speech-recognition-in-cnet.html

    However, I feel somewhat limited when using grammars. Don't get me wrong, grammars are great for telling the speech recognition exactly which words / phrases to look out for, but what if I want it to recognise something I've not given it a heads-up about? Or I want to parse a phrase which is half pre-determined command name and half random words? For example:

    Scenario A - I say "Google [Oil Spill]" and I want it to open Google with search results for the term in brackets, which could be anything.
    Scenario B - I say "Locate [Manchester]" and I want it to search for Manchester in Google Maps, or anything else not pre-determined.

    I want it to know that 'Google' and 'Locate' are commands and that what comes after them are parameters (and could be anything).

    Question: does anyone know how to mix the use of pre-determined grammars (words the speech recognition should recognise) with words not in its pre-determined grammar?

    Code fragments:

        using System.Speech.Recognition;
        ...
        SpeechRecognizer rec = new SpeechRecognizer();
        rec.SpeechRecognized += rec_SpeechRecognized;

        var c = new Choices();
        c.Add("search");
        var gb = new GrammarBuilder(c);
        var g = new Grammar(gb);

        rec.LoadGrammar(g);
        rec.Enabled = true;
        ...
        void rec_SpeechRecognized(object sender, SpeechRecognizedEventArgs e)
        {
            if (e.Result.Text == "search")
            {
                string query = "How can I get a word not defined in Grammar recognised and passed into here!";
                launchGoogle(query);
            }
        }
        ...
        private void launchGoogle(string term)
        {
            Process.Start("IEXPLORE", "google.com?q=" + term);
        }
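
    One pattern worth trying (a sketch, not taken from the article and untested against this exact setup) is to append a dictation element after the fixed command word, so the recognizer matches "search" followed by free-form speech; the free part can then be recovered by stripping the command word off e.Result.Text:

        using System;
        using System.Speech.Recognition;

        class CommandPlusDictation
        {
            static void Main()
            {
                var rec = new SpeechRecognizer();

                // fixed command word followed by a free-form dictation segment
                var gb = new GrammarBuilder("search");
                gb.AppendDictation();
                rec.LoadGrammar(new Grammar(gb) { Name = "searchCommand" });

                rec.SpeechRecognized += (s, e) =>
                {
                    // everything after the command word is treated as the query text
                    string text = e.Result.Text;
                    if (text.StartsWith("search ", StringComparison.OrdinalIgnoreCase))
                    {
                        string query = text.Substring("search ".Length);
                        Console.WriteLine("Search query: " + query);
                    }
                };

                rec.Enabled = true;
                Console.ReadLine(); // keep the process alive while the shared recognizer runs
            }
        }

    The same GrammarBuilder could be seeded with a Choices("Google", "Locate") element instead of the single word, so each command prefix routes to its own handler.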

    Read the article

  • Need to get 3 records from the database for the current date using SQLite

    - by Umaid
        SELECT rowid, Day, Advice from MainCategory
        where ((Day = ((cast(strftime('%d',date('now','-1 day')) as Integer)))) and (Month = (strftime('%m',date('now')))))
          and ((Day = ((cast(strftime('%d',date('now')) as Integer)))) and (Month = (strftime('%m',date('now')))))
            , ((Day = ((cast(strftime('%d',date('now','+1 day')) as Integer)))) and (Month = (strftime('%m',date('now',+1 month)))));

    What if I make my Month column an Integer data type? Then it would be:

        SELECT rowid, Month, Day, Advice from MainCategory
        where ((Day = ((cast(strftime('%d',date('now','-1 day')) as Integer)))) and (Month = (strftime('%m',date('now')))))
          and ((Day = ((cast(strftime('%d',date('now')) as Integer)))) and (Month = (strftime('%m',date('now')))))
            , ((Day = ((cast(strftime('%d',date('now','+1 day')) as Integer)))) and (Month = (strftime('%m',date('now',+1 month)))));

    Please note that I have already handled the case where the current date is in the middle of the month, but the running query below matches those day numbers in all the other months as well (February excluded), so it returns 33 records when I need exactly 3 from the table, incremented on the Next button.

    Please write 3 queries: one which returns all three records for the current date; one where all 3 records are incremented by 1 on every Next button click; and one where all 3 records are decremented by 1 on every Previous button click. Keep the last day and the first day of the month in mind; I have already covered the middle of the month.

    Running query, but it returns 33 records instead of 3:

        SELECT rowid, Month, Day, Advice from MainCategory
        where Day in ((cast(strftime('%d',date('now','-1 day')) as Integer)),
                      (cast(strftime('%d',date('now')) as Integer)),
                      (cast(strftime('%d',date('now','+1 day')) as Integer)));
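
    A sketch of one way to get exactly the three rows (yesterday, today, tomorrow): filter on month and day together rather than on the day number alone, and drive the window from an offset that the Next/Previous buttons adjust. The :offset parameter name is made up for illustration; 0 loads the current date, +1 per Next click, -1 per Previous click, and month boundaries are handled by letting date() roll over:

        SELECT rowid, Month, Day, Advice
        FROM MainCategory
        WHERE (Month = cast(strftime('%m', date('now', (:offset - 1) || ' days')) as integer)
               AND Day = cast(strftime('%d', date('now', (:offset - 1) || ' days')) as integer))
           OR (Month = cast(strftime('%m', date('now', (:offset    ) || ' days')) as integer)
               AND Day = cast(strftime('%d', date('now', (:offset    ) || ' days')) as integer))
           OR (Month = cast(strftime('%m', date('now', (:offset + 1) || ' days')) as integer)
               AND Day = cast(strftime('%d', date('now', (:offset + 1) || ' days')) as integer));

    This assumes Month is stored as an integer, as in the second query above; if it stays textual, drop the cast on the Month comparisons.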

    Read the article

  • Updating Data through Objects

    - by Chacha102
    So, let's say I have a record:

        $record = new Record();

    and let's say I assign some data to that record:

        $record->setName("SomeBobJoePerson");

    How do I get that into the database? Do I...

    A) Have the module do it:

        class Record
        {
            public function __construct(DatabaseConnection $database)
            {
                $this->database = $database;
            }

            public function setName($name)
            {
                $this->database->query("query stuff here");
                $this->name = $name;
            }
        }

    B) Run through the modules at the end of the script:

        class Record
        {
            private $changed = false;

            public function __construct(array $data = array())
            {
                $this->data = $data;
            }

            public function setName($name)
            {
                $this->data['name'] = $name;
                $this->changed = true;
            }

            public function isChanged()
            {
                return $this->changed;
            }

            public function toArray()
            {
                return $this->data;
            }
        }

        class Updater
        {
            public function update(array $records)
            {
                foreach ($records as $record) {
                    if ($record->isChanged()) {
                        $this->updateRecord($record->toArray());
                    }
                }
            }

            public function updateRecord()
            {
                // updates stuff
            }
        }

    Read the article

  • Very weird C file-handling anomaly

    - by KáGé
    Hello, I have a very weird issue that I can't figure out in my school project, which is the simulation of a simple filesystem in a human-readable text file. Unfortunately I don't yet have enough time to translate the comments in my code or make it less gibberish, so if that bothers you, you don't have to help; I understand. See the code HERE.

    Now in drive.h, at line 574, is this part:

        i = getline();
        #ifdef DEBUG
            printf("Free space in all found at %d.\n\n", i);
            if(drive.disk != NULL){
                printf("Disk OK\n\n");
            }
        #endif
        //write in data
        state = seekline(i);

    Before this it finds a place for the allocation database entry in the ALL sector (see the "image files" in the mounts folder; this issue was tested on mount_30.efs-dbf), then gets the line with i = getline() fine (getline is in lglobal.h, line 39), but after that any file manipulation (in this case seekline's fseek, or, if I comment that out, the first fprintf after it) crashes the program straight away. I think the file somehow gets corrupted (though the "Disk OK" message appears), but I can't figure out how. I've tried commenting out i = getline();, but it didn't make any difference. I've also asked on local programming forums, but they didn't really help either.

    The last few lines of the output before it crashes:

        Dir written.                    (drive.h line 562)
        Seekline entered: 268           (called at drive.h line 564)
        Getline entered.                (called at drive.h line 574)
        Line got: 268.
        Free space in all found at 268. (drive.h line 576)
        Seekline entered: 268           (called at drive.h line 582; note that this exact call ran
                                         successfully less than 20 lines back. This one should set the
                                         pointer to the beginning of the line it is currently in)

    After this it crashes. Does anyone have any idea what causes this and how I could fix it? Thank you.

    Read the article

  • Selecting data in clustered index order without ORDER BY

    - by kcrumley
    I know there is no guarantee without an ORDER BY clause, but are there any techniques to tune SQL Server tables so they're more likely to return rows in clustered index order, without having to specify ORDER BY every single time I want to run a super-quick ad hoc query? For example, would rebuilding my clustered index or updating statistics help?

    I'm aware that I can't count on a query like

        select * from AuditLog where UserId = 992

    to return records in the order of the clustered index, so I would never build code into an application based on this assumption. But for simple ad hoc queries, on almost all of my tables, the data consistently comes out in clustered index order, and I've gotten used to being able to expect the most recent results to be at the bottom. Out of all the many tables we use, I've only noticed two ever giving me results in an unpredicted order. This is really just an annoyance, but it would be nice to be able to minimize it.

    In case this is relevant because of page boundary issues or something like that, I should mention that one of the tables that has inconsistent ordering, the AuditLog table, is the longest table we have that has a clustered index on an identity column. Also, this database has recently been moved from SQL 2005 to SQL 2008, and we've seen no noticeable change in this behavior.

    Read the article

  • Performance impact when using XML columns in a table with MS SQL 2008

    - by Sam Dahan
    I am using a simple table with 6 columns, 3 of which are of XML type, not schema-constrained. When the table reaches a size around 120,000 or 150,000 rows, I see a dramatic performance cost in doing any query against the table. For comparison, I have another table which grows at about the same rate but contains only scalar types (int, datetime, a few float columns). That table performs perfectly fine even after 200,000 rows. And by the way, I am not using XQuery on the XML columns; I am only using regular SQL query statements.

    Some specifics: both tables contain a DateTime field called SampleTime. A statement like (it's in a stored procedure, but I show you the actual statement)

        SELECT MAX(sampleTime) SampleTime FROM dbo.MyRecords WHERE PlacementID = @somenumber

    takes 0 seconds on the table without XML columns, and anything from 13 to 20 seconds on the table with XML columns. That depends on which drive I set my database on; at the moment it sits on a different spindle (not C:) and it takes 13 seconds.

    Has anyone seen this behavior before, or have any hint at what I am doing wrong? I tried this with SQL Server 2008 Express and the full-blown SQL Server 2008; that made no difference. Oh, one last detail: I am doing this from a C# application, .NET 3.5, using SqlConnection, SqlReader, etc. I'd appreciate some insight into that, thanks! Sam
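
    Not part of the original question, but one thing commonly checked under these symptoms is whether the filtered and aggregated columns are covered by a narrow index; without one, the query has to scan rows that drag the XML LOB data along with them. A sketch using the table and column names from the statement above:

        -- narrow nonclustered index so MAX(SampleTime) ... WHERE PlacementID = @n
        -- can be answered without touching the XML columns
        CREATE NONCLUSTERED INDEX IX_MyRecords_PlacementID_SampleTime
            ON dbo.MyRecords (PlacementID, SampleTime);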

    Read the article

  • What can cause my code to run slower when the server JIT is activated?

    - by durandai
    I am doing some optimizations on an MPEG decoder. To ensure my optimizations aren't breaking anything, I have a test suite that benchmarks the entire codebase (both optimized and original) as well as verifying that they both produce identical results (basically just feeding a couple of different streams through the decoder and CRC32-ing the outputs).

    When using the "-server" option with Sun 1.6.0_18, the test suite runs about 12% slower on the optimized version after warmup (in comparison to the default "-client" setting), while the original codebase gains a good boost, running about twice as fast as in client mode. While at first this seemed to be simply a warmup issue to me, I added a loop to repeat the entire test suite multiple times. Execution times then become constant for each pass starting at the 3rd iteration of the test, yet the optimized version stays 12% slower than in client mode.

    I am also pretty sure it's not a garbage collection issue, since the code involves absolutely no object allocations after startup. The code consists mainly of some bit manipulation operations (stream decoding) and lots of basic floating-point math (generating PCM audio). The only JDK classes involved are ByteArrayInputStream (feeds the stream to the test and excludes disk IO from the tests) and CRC32 (to verify the result). I also observed the same behaviour with Sun JDK 1.7.0_b98 (only that it's 15% instead of 12% there). Oh, and the tests were all done on the same machine (single core) with no other applications running (WinXP). While there is some inevitable variation in the measured execution times (using System.nanoTime, btw), the variation between different test runs with the same settings never exceeded 2%, usually less than 1% (after warmup), so I conclude the effect is real and not purely induced by the measuring mechanism/machine.

    Are there any known coding patterns that perform worse on the server JIT? Failing that, what options are available to "peek" under the hood and observe what the JIT is doing there?

    Read the article

  • FreeTDS runs out of memory from DBD::Sybase

    - by skiphoppy
    When I add client charset = UTF-8 to my freetds.conf file, my DBD::Sybase program emits:

        Out of memory!

    and terminates. This happens when I call execute() on an SQL query statement that returns any ntext fields. I can return numeric data, datetimes, and nvarchars just fine, but whenever one of the output fields is ntext, I get this error. All these queries work perfectly fine without the UTF-8 setting, but I do need to handle some characters that throw warnings under the default character set. (See related question.)

    The error message is not formatted the same way other DBD::Sybase error messages seem to be formatted. I do get a message that a rollback() is being issued, though. (My false AutoCommit flag is being honored.) I think I read somewhere that FreeTDS uses iconv to convert between character sets; is it possible that this message is being emitted from iconv?

    If I execute the same query with the same freetds.conf settings in tsql (FreeTDS's command-line SQL shell), I don't get the error. I'm connecting to SQL Server. What do I need to do to get these queries to return successfully?
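
    For reference, a sketch of freetds.conf settings that are often adjusted together with the charset when large ntext columns are involved (these are standard freetds.conf options, but whether they cure this particular "Out of memory!" would need testing against the server in question):

        [global]
            # native protocol for SQL Server, instead of the old TDS 4.2 default
            tds version = 8.0
            client charset = UTF-8
            # cap how much text/ntext data is requested per column, so the
            # UTF-8 conversion buffers stay bounded (value in bytes)
            text size = 262144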

    Read the article

  • Group / User based security. Table / SQL question

    - by Brett
    Hi, I'm setting up a group / user based security system. I have 4 tables as follows:

        user
        groups
        group_user_mappings
        acl

    where acl is the mapping between an item_id and either a group or a user. The acl table has 3 columns of note (actually a 4th one as an auto-id, but that is irrelevant):

        col 1: item_id  (item to access)
        col 2: user_id  (user that is allowed to access)
        col 3: group_id (group that is allowed to access)

    So for example:

        item1, peter,
        item2,      , group1
        item3, jane ,

    so any one row in the acl table gives access either to a user or to a group; it has either an item-user mapping or an item-group mapping.

    If I want a query that returns all objects a user has access to, I think I need a SQL query with a UNION, because I need 2 separate queries that join like:

        item - acl - group - user
        item - acl - user

    This I guess will work OK. Is this how it's normally done? Am I doing this the right way? Seems a little messy. I was thinking I could get around it by creating a single user group for each person, so I only ever deal with groups in my SQL, but this seems a little messy as well.
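
    A sketch of the UNION approach described above (the column names of group_user_mappings are guessed from the description; everything else comes from the tables listed). The first branch covers direct user grants, the second covers grants inherited through group membership:

        -- items the user can access directly
        SELECT a.item_id
        FROM acl a
        WHERE a.user_id = :user_id

        UNION

        -- items the user can access through one of their groups
        SELECT a.item_id
        FROM acl a
        JOIN group_user_mappings m ON m.group_id = a.group_id
        WHERE m.user_id = :user_id;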

    Read the article

  • SSIS Transaction with Sql Transaction

    - by Mike
    I started with a package to make sure transactions are working correctly. The package-level transaction is set to Required. I have two Execute SQL Tasks: one deletes rows from a table and one does 1/0, to throw an error. Both tasks are set to the Supported transaction level and the Serializable IsolationLevel. That works.

    Now when I replace my two SQL tasks with two separate procedure calls, the first one, ChargeInterest, runs successfully but the second one, PaymentProcess, always fails saying:

        [Execute SQL Task] Error: Executing the query "Exec [proc_xx_NotesReceivable_PaymentProcess] ..." failed with the following error: "Uncommittable transaction is detected at the end of the batch. The transaction is rolled back.". Possible failure reasons: Problems with the query, "ResultSet" property not set correctly, parameters not set correctly, or connection not established correctly.

    PaymentProcess is the second stored procedure. Both procedures have their own BEGIN, COMMIT and ROLLBACK inside the SP. I believe that the transactions are being handled successfully in ChargeInterest, because I can run the following without issues or the dreaded "you started with 0 and now have 1" transaction error:

        EXEC [proc_XX_NotesReceivable_ChargeInterest] 'NR', 'M', 186, 300
        EXEC [proc_XX_NotesReceivable_PaymentProcess] 'NR', 186, 300
        --OR
        GO
        BEGIN TRAN
        EXEC [proc_XX_NotesReceivable_ChargeInterest] 'NR', 'M', 186, 300
        EXEC [proc_XX_NotesReceivable_PaymentProcess] 'NR', 186, 300
        ROLLBACK TRAN

    Now I have noticed that DTC gets kicked off in both instances. Why, I am not sure, because it is using the same connection. In the live example I can see the transaction get started, but it disappears if I put a breakpoint on the PreExecute event of the second stored procedure. What is the correct way to mingle SP transactions with SSIS transactions?

    Read the article

  • Fast, easy, and secure method to perform DB actions with GET

    - by rob - not a robber
    Hey All, sort of a methods/best-practices question here that I am sure has been addressed, yet I can't find a solution based on the vague search terms I enter. I know starting off the question with "fast and easy" will probably draw out a few sighs, so my apologies.

    Here is the deal. I have a logged-in area where an ADMIN can do a whole host of POST operations to input data relating to their profile. The way I have the data structured is pretty distinct and well segmented in most tables, as it relates to the ID of the admin. Now, I have a table where I dump one type of data and differentiate it by assigning the ADMIN's unique ID to each record. In other words, all ADMINs have this one type of data writing to this table; I just differentiate by the ADMIN ID on each record.

    I was planning on letting the ADMIN remove these records by clicking on a link with a query string - obviously using GET. Obviously, the query structure is in the link, so any logged-in admin could exploit the URL and delete a competitor's records. Is the only way to safely do this through POST, or should I pass through the session info that includes the password and validate it against the ADMIN ID that is requesting the delete? This is obviously much more work for me.

    As they said in the auto repair biz I used to work in... there are 3 ways to do a job: fast, good, and cheap. You can only have two at a time. Fast and cheap will not be good. Good and cheap will not have fast turnaround. Fast and good will NOT be cheap. haha I guess that applies here... can never have fast, easy and secure all at once ;) Thanks in advance...
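
    A sketch of the "validate against the session" idea using PDO (the table and column names here are invented for illustration, and there is no need to pass the password around - the admin ID stored in the session at login is enough). Scoping the DELETE by both the record ID and the logged-in admin's ID means a forged ID in the query string simply matches zero rows:

        <?php
        session_start();

        // the login step is assumed to have stored the authenticated admin's id
        $adminId  = (int) $_SESSION['admin_id'];
        $recordId = isset($_GET['record_id']) ? (int) $_GET['record_id'] : 0;

        $pdo = new PDO('mysql:host=localhost;dbname=app', 'user', 'pass');

        // the WHERE clause ties the row to the current admin, so another admin's
        // record id in the URL deletes nothing
        $stmt = $pdo->prepare('DELETE FROM admin_records WHERE id = :id AND admin_id = :admin_id');
        $stmt->execute(array(':id' => $recordId, ':admin_id' => $adminId));

    A state-changing action like this is still better behind POST with a CSRF token, but the ownership check above is what stops one admin from deleting another's records.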

    Read the article

  • Low-Hanging Fruit: Obfuscating non-critical values in JavaScript

    - by Piskvor
    I'm making an in-browser game of the type "guess what place/monument/etc. is in this satellite/aerial view", using the Google Maps JS API v3. However, I need to protect against cheaters - you have to pass a google.maps.LatLng and a zoom level to the map constructor, which means a cheating user only needs to view source to get to this data.

    I am already unsetting every value I possibly can without breaking the map (such as center and the manipulation functions like setZoom()), and initializing the map in an anonymous function (so the object is not visible in the global namespace). Now, this is of course in-browser, client-side, untrusted JavaScript; I've read much of the obfuscation tag and I'm not trying to make the script bullet-proof (it's just a game, after all). I only need to make the obfuscation reasonably hard against the 1337 Java5kryp7 haxz0rz - "kid sister encryption", as Bruce Schneier puts it. Anything harder than base64 encoding would deter most cheaters by eliminating the lowest-hanging fruit - if the cheater is smart and determined enough to use a JS debugger, he can bypass anything I can do (as I need to pass the value to the Google Maps API in plaintext), but that's unlikely to happen on a mass scale (there will also be other, not-code-related ways to prevent cheating). I've tried various minimizers and obfuscators, but those mostly deal with code - the values are still shown verbatim.

    TL;DR: I need to obfuscate three values in JavaScript. I'm not looking for bullet-proof armor, just a sneeze-guard. What should I use?
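
    As a concrete example of the sneeze-guard level being asked for, here is a sketch that keeps the three values as a single base64 string and only decodes them inside the anonymous initialisation function (atob() is the browser's built-in decoder; the encoded coordinates and the element id are illustrative only):

        (function () {
            // btoa("51.178844,-1.826189,15") - lat, lng, zoom packed into one string
            var blob = "NTEuMTc4ODQ0LC0xLjgyNjE4OSwxNQ==";

            var parts = atob(blob).split(",");
            var map = new google.maps.Map(document.getElementById("map"), {
                center: new google.maps.LatLng(parseFloat(parts[0]), parseFloat(parts[1])),
                zoom: parseInt(parts[2], 10),
                mapTypeId: google.maps.MapTypeId.SATELLITE,
                disableDefaultUI: true
            });
            // deliberately no global reference to `map` is kept
        })();

    Swapping atob() for a tiny XOR-plus-base64 routine raises the bar slightly above "paste it into a decoder" while staying trivially cheap to run.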

    Read the article

  • Need help fixing unique key in Rails; Rails is adding id, causing duplicate key

    - by railsnew
    I need some help in fixing the issue below. I had transaction blocks in my Rails code like this:

        @sqlcontact = "INSERT INTO contacts (id,\"cid\", \"hphone\", mphone, provider, cemail, email, sms , mail, phone) VALUES ('"+@id1+"','" + @id1 + "', '"+ params[:hphone] + "', '"+params[:mphone]+ "', '" + params[:provider] + "', '" + params[:cemail]+ "', '" + @varemail+ "', '"+@varsms+ "', '"+ @varmail+"', '"+@varphone+"')"

    My app was deployed to Heroku, so I was advised by them to remove the transaction blocks. So I changed the above to:

        @cont = Contact.new(:id => @id1, :cid => @id1, :hphone => params[:hphone],
                            :mphone => params[:mphone], :provider => params[:provider],
                            :cemail => params[:cemail], :email => @varemail, :sms => @varsms,
                            :mail => @varmail, :phone => @varphone)
        @cont.save

    My app also already had data stored. Now the problem is that when I try to save a record, I keep getting the error:

        duplicate key value violates unique constraint "contacts_pkey"

    The error also shows the SQL query trying to insert the data; however, in that SQL query I do not see the id value. As you can see from my code, I am passing the id, so why is Rails not accepting it? Does it always use its own sequential id? Can I not override the default Rails magic? And if it does that, does it not look at the data that is already in the DB? I am really stuck here. What should I do? Should I just go back to my transaction block?
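
    Not from the original thread, but this symptom (rows inserted earlier with explicit ids, then "duplicate key value violates unique constraint ... pkey" on new saves) is typically a PostgreSQL sequence that lags behind the data, since explicit ids bypass the sequence. A sketch of resyncing it, assuming the default sequence name for contacts.id:

        -- point the sequence past the highest id already in the table
        SELECT setval('contacts_id_seq', (SELECT MAX(id) FROM contacts));

    After that, letting Rails assign the id (dropping :id => @id1 and keeping :cid) avoids reintroducing the drift.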

    Read the article

  • Nhibernate - stuck with detached criteria (asp.net mvc 1 with nhibernate 2) c#

    - by Jen
    OK, so I can't find a good example of this that would help me better understand how to use detached criteria (assuming that's what I want to use in the first place). I have 2 tables: Placement and PlacementSupervisor. My PlacementSupervisor table has an FK of PlacementID which relates to Placement.PlacementID, though my NHibernate model class has PlacementSupervisor.Placement rather than a property specifically for the placement ID (not sure if this is important).

    What I am trying to do is: if values are passed through for the supervisor ID, I want to restrict placements to those with that supervisor ID. I have tried:

        ICriteria query = m_PlacementRepository.QueryAlias("p")
        ....
        if (criteria.SupervisorId > 0 && !string.IsNullOrEmpty(criteria.SupervisorTypeId))
        {
            DetachedCriteria entityQuery = DetachedCriteria.For<PlacementSupervisor>("sup")
                .Add(Restrictions.And(
                    Restrictions.Eq("sup.supervisorId", criteria.SupervisorId),
                    Restrictions.Eq("sup.supervisorTypeId", criteria.SupervisorTypeId)
                ))
                .SetProjection(Projections.ProjectionList()
                    .AddPropertyAlias("Placement.PlacementId", "PlacementId")
                );

            query.Add(Subqueries.PropertyIn("p.PlacementId", entityQuery));
        }

    which just gives me the error:

        Could not find a matching criteria info provider to: (sup.supervisorId = 5 and sup.supervisorTypeId = U)

    Firstly, supervisorTypeId is a string. Secondly, I don't understand how to achieve what I'm trying to do, so I have just been trying various combinations of projections, property aliases and subquery options, as I don't get how I'm supposed to join to another table/entity when the FK sits in the second table. Can someone point me in the right direction? It seems like such an easy thing to do from a data perspective that hopefully I'm just missing something obvious!!
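
    A sketch of one way this subquery is often written (untested against this mapping): create an alias for the many-to-one Placement property so the projection can reference its id directly, instead of going through AddPropertyAlias:

        // subquery: placement ids that have the given supervisor
        DetachedCriteria entityQuery = DetachedCriteria.For<PlacementSupervisor>("sup")
            .CreateAlias("sup.Placement", "pl")
            .Add(Restrictions.Eq("sup.supervisorId", criteria.SupervisorId))
            .Add(Restrictions.Eq("sup.supervisorTypeId", criteria.SupervisorTypeId))
            .SetProjection(Projections.Property("pl.PlacementId"));

        // outer query: placements whose id appears in the subquery
        query.Add(Subqueries.PropertyIn("p.PlacementId", entityQuery));

    The property names ("Placement", "PlacementId", "supervisorId", "supervisorTypeId") are taken from the question and would need to match the actual mapping's casing.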

    Read the article

  • Analyzing Web Application Speed

    - by Amy
    I'm a bit confused, because the logical/programmer brain in me says that if all things are constant, the speed of a function must be constant.

    I am working on a PHP web application with jqGrid as a front end for showing the data. I am testing on my personal computer, so network traffic does not apply. I make an HTTP request to a PHP function, it returns the data, and then jqGrid renders it. What has me befuddled is that Firebug sometimes reports that this takes 300-600 milliseconds, and sometimes 3.68 seconds. I can run the request over and over again with radically different response times. The query is the same. The number of users on the system is the same. No network latency. Same code. I'm not running other applications on the computer while testing. I could understand query caching improving performance on subsequent requests, but the speed is just fluctuating wildly with no rhyme or reason.

    So, my question is: what else can cause such variability in the response time? How can I determine what's causing it? More importantly, is there any way to get things more consistent?

    Read the article

  • How do I get this sql to linq? Multiple groups

    - by Dwight T
    For a db person, LINQ can be frustrating. I need to convert the following SQL into LINQ:

        SELECT COUNT(o.objectiveid), COUNT(distinct r.ReviewId), l.Abbreviation
        FROM Objective o
        JOIN Review r on r.ReviewId = o.ReviewId and r.ReviewPeriodId = 3 and r.IsDeleted = 0
        JOIN Position p on p.PositionId = r.EmployeePositionId and p.DivisionId = 2
        JOIN Location l on l.LocationId = p.LocationId
        GROUP BY l.Abbreviation

    The nested group-by example might be the way I have to go, but I'm not sure. Doing one group by, I have used the following code:

        var query = from rev in db.Reviews
                        .Where(r => r.IsDeleted == false && r.ReviewPeriodId == reviewPeriodId)
                    from obj in db.Objectives
                        .Where(o => o.ReviewId == rev.ReviewId && o.IsDeleted == false)
                    from pos in db.Positions
                        .Where(p => rev.EmployeePositionId == p.PositionId && p.IsDeleted == false && p.DivisionId == divisionId)
                    from loc in db.Locations
                        .Where(l => pos.LocationId == l.LocationId)
                    group loc by loc.Abbreviation into locgroup
                    select new ReportResults
                    {
                        KeyId = 0,
                        Description = locgroup.Key,
                        Count = locgroup.Count()
                    };
        return query.ToList();

    What is the correct way? Thanks
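
    A sketch of how both aggregates might be carried through a single grouping (untested): group the pair of objective and review by the location abbreviation, then count the group for the objective count and count distinct ReviewIds inside it. An anonymous type is used here because the original ReportResults class only shows a single Count property:

        var query = from rev in db.Reviews
                        .Where(r => r.IsDeleted == false && r.ReviewPeriodId == reviewPeriodId)
                    from obj in db.Objectives
                        .Where(o => o.ReviewId == rev.ReviewId && o.IsDeleted == false)
                    from pos in db.Positions
                        .Where(p => rev.EmployeePositionId == p.PositionId && p.IsDeleted == false && p.DivisionId == divisionId)
                    from loc in db.Locations
                        .Where(l => pos.LocationId == l.LocationId)
                    group new { obj, rev } by loc.Abbreviation into g
                    select new
                    {
                        Abbreviation   = g.Key,                    // l.Abbreviation
                        ObjectiveCount = g.Count(),                // COUNT(o.objectiveid)
                        ReviewCount    = g.Select(x => x.rev.ReviewId).Distinct().Count()  // COUNT(distinct r.ReviewId)
                    };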

    Read the article

  • Is it possible to order by a composite key with JPA and CriteriaBuilder

    - by Kjir
    I would like to create a query using the JPA CriteriaBuilder, and I would like to add an ORDER BY clause. This is my entity:

        @Entity
        @Table(name = "brands")
        public class Brand implements Serializable {

            public enum OwnModeType {
                OWNER, LICENCED
            }

            @EmbeddedId
            private IdBrand id;
            private String code;
            //bunch of other properties
        }

    The embedded class is:

        @Embeddable
        public class IdBrand implements Serializable {
            @ManyToOne
            private Edition edition;
            private String name;
        }

    And this is the way I am building my query:

        CriteriaBuilder cb = em.getCriteriaBuilder();
        CriteriaQuery<Brand> q = cb.createQuery(Brand.class).distinct(true);
        Root<Brand> root = q.from(Brand.class);
        if (f != null) {
            f.addCriteria(cb, q, root);
            f.addOrder(cb, q, root, sortCol, ascending);
        }
        return em.createQuery(q).getResultList();

    And here are the functions called:

        public void addCriteria(CriteriaBuilder cb, CriteriaQuery<?> q, Root<Brand> r) {
        }

        public void addOrder(CriteriaBuilder cb, CriteriaQuery<?> q, Root<Brand> r, String sortCol, boolean ascending) {
            if (ascending) {
                q.orderBy(cb.asc(r.get(sortCol)));
            } else {
                q.orderBy(cb.desc(r.get(sortCol)));
            }
        }

    If I try to set sortCol to something like "id.edition.number", I get the following error:

        javax.ejb.EJBException: java.lang.IllegalArgumentException: Unable to resolve attribute [id.name] against path

    Any idea how I could accomplish that? I tried searching online, but I couldn't find a hint about this... It would also be great if I could do a similar ORDER BY when I have a @ManyToOne relationship (for instance, "id.edition.number").
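
    One approach worth trying (a sketch, not verified against this entity model): a dotted name like "id.edition.number" cannot be passed to Root.get() in one go, but it can be resolved segment by segment into root.get("id").get("edition").get("number"), and the resulting Path handed to cb.asc/cb.desc:

        // walk each segment of a dotted property name to build the Path
        private static Path<?> resolvePath(Root<Brand> root, String dottedName) {
            Path<?> path = root;
            for (String part : dottedName.split("\\.")) {
                path = path.get(part);
            }
            return path;
        }

        public void addOrder(CriteriaBuilder cb, CriteriaQuery<?> q, Root<Brand> r,
                             String sortCol, boolean ascending) {
            Path<?> sortPath = resolvePath(r, sortCol);
            q.orderBy(ascending ? cb.asc(sortPath) : cb.desc(sortPath));
        }

    This covers the @ManyToOne case as well, as long as each segment names a mapped attribute (so "id.edition.number" requires Edition to have a "number" attribute).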

    Read the article

  • LinqToSQL not updating database

    - by codegarten
    Hi. I created a database and a dbml in Visual Studio 2010 using its wizards. Everything was working fine until I checked the table's data (also in Visual Studio Server Explorer) and none of my updates were there.

        using (var context = new CenasDataContext())
        {
            context.Log = Console.Out;
            context.Cenas.InsertOnSubmit(new Cena() { id = 1 });
            context.SubmitChanges();
        }

    This is the code I am using to update my database. At this point my database has one table with one field (PK) named ID.

        INSERT INTO [dbo].Cenas VALUES (@p0)
        -- @p0: Input Int (Size = -1; Prec = 0; Scale = 0) [1]
        -- Context: SqlProvider(Sql2008) Model: AttributedMetaModel Build: 4.0.30319.1

    This is the log from the execution (I printed the context log to the console). The problem I'm having is that these updates are not persisted in the database. I mean that when I query my database (Visual Studio Server Explorer - New Query) I see the table is empty, every time. I am using a SQL Server database file (.mdf).

    Read the article

  • PreparedStatement question in Java against Oracle.

    - by fardon57
    Hi everyone, I'm working on modifying some code to use PreparedStatement instead of a normal Statement, for security and performance reasons. Our application currently stores information in an embedded Derby database, but we are going to move to Oracle soon. I've found two things for which I need your help regarding Oracle and PreparedStatement:

    1. I've found a document saying that Oracle doesn't handle bind parameters in IN clauses, so we cannot supply a query like:

        Select pokemon from pokemonTable where capacity in (?,?,?,?)

    Is that true? Is there any workaround? ... Why?

    2. We have some fields which are of type TIMESTAMP. So with our current Statement, the query looks like this:

        Select raichu from pokemonTable where evolution = TO_TIMESTAMP('2500-12-31 00:00:00.000', 'YYYY-MM-DD HH24:MI:SS.FF')

    What should be done for a PreparedStatement? Should I put into the array of parameters: 2500-12-31, or TO_TIMESTAMP('2500-12-31 00:00:00.000', 'YYYY-MM-DD HH24:MI:SS.FF')?

    Thanks for your help, I hope my questions are clear! Regards,
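
    A sketch covering both points (table and column names kept from the question). For the IN clause, each value gets its own placeholder - what a single bind variable cannot do is stand in for a whole list - so the placeholders are generated to match the list size. For the TIMESTAMP, the value is bound as a java.sql.Timestamp, which removes the need for TO_TIMESTAMP and its format string:

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.ResultSet;
        import java.sql.SQLException;
        import java.sql.Timestamp;
        import java.util.List;

        public class PokemonQueries {

            // capacity IN (?, ?, ..., ?) with one placeholder per value
            static ResultSet findByCapacities(Connection con, List<Integer> capacities) throws SQLException {
                StringBuilder placeholders = new StringBuilder();
                for (int i = 0; i < capacities.size(); i++) {
                    placeholders.append(i == 0 ? "?" : ", ?");
                }
                PreparedStatement ps = con.prepareStatement(
                        "SELECT pokemon FROM pokemonTable WHERE capacity IN (" + placeholders + ")");
                for (int i = 0; i < capacities.size(); i++) {
                    ps.setInt(i + 1, capacities.get(i));   // JDBC parameters are 1-based
                }
                return ps.executeQuery();
            }

            // bind the TIMESTAMP column directly instead of calling TO_TIMESTAMP in SQL
            static ResultSet findByEvolution(Connection con, Timestamp evolution) throws SQLException {
                PreparedStatement ps = con.prepareStatement(
                        "SELECT raichu FROM pokemonTable WHERE evolution = ?");
                ps.setTimestamp(1, evolution);
                return ps.executeQuery();
            }
        }

    Note that Oracle caps a literal IN list at 1000 entries, so very long lists need batching or a temporary table; for a handful of values the generated placeholders are fine.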

    Read the article

  • How do I programmatically insert rows into a Silverlight DataGrid without binding?

    - by j0rd4n
    I am using a common Silverlight DataGrid to display results from a search. The "schema" of the search can vary from query to query. To accommodate this, I am trying to populate the DataGrid dynamically. I can set the columns explicitly, but I am having trouble setting the ItemsSource. All of the MSDN examples set the ItemsSource to a collection with a strong type (e.g. a custom type with public properties matching the schema). The DataGrid then uses reflection to scour the strong type for public properties that match the columns.

    Since my search results are dynamic, I cannot create a strong type to represent what comes back. Can I not just give the DataGrid an arbitrary list of objects, so long as the number of objects in each list matches the number of columns? Does anyone know if this is possible? I would like to do something similar to this:

        List<List<object>> myResults = <voodoo that populates the result list>
        myDataGrid.ItemsSource = myResults;

    Read the article

  • I am looking for an actual functional web browser control for .NET, maybe a C++ library

    - by Joshua
    I am trying to emulate a web browser in order to execute JavaScript code and then parse the DOM. The System.Windows.Forms.WebBrowser object does not give me the functionality I need. It lets me set the headers, but you cannot set the proxy or clear cookies. Well, you can, but it is not ideal and it messes with IE's settings. I've been extending the WebBrowser control by P/Invoking native Windows functions, but it is really one hack on top of another. I can mess with the proxy and also clear cookies and such, but this control has its issues, as I mentioned.

    I found something called WebKit .NET (http://webkitdotnet.sourceforge.net/), but I don't see support for setting proxies or cookie manipulation.

    Can someone recommend a C++/.NET/whatever library to do this? Basically, tell me what I need to do to get an interface in .NET similar to this:

        // this should probably pause the current thread for the max timeout,
        // throw an exception on failure or return null w/e, VAGUELY similar to this
        string WebBrowserEmu::FetchBrowserParsedHtml(Uri url, WebProxy p, int timeoutSeconds, byte[] headers, byte[] postdata);
        void WebBrowserEmu::ClearCookies();

    I am not responsible for my actions.

    Read the article

  • I am not able to update form data to MySQL using PHP and jQuery

    - by Jimson Jose
    My problem is that I am unable to update the values entered in the form. I have attached all the files. I'm using a MySQL database to fetch data. I am able to add and delete records from the form using jQuery and PHP scripts against the MySQL database, but I am not able to update data which was retrieved from the database.

    The file structure is as follows: index.php is a file with jQuery functions; it displays the form for adding new data to MySQL (via save.php), and the list of all records is shown without refreshing the page (calling load-list.php to view all records from index.php works fine, as does save.php to save the form data). Delete is a function called from index.php to delete a record from the MySQL database (the function calling delete.php works fine). Update is a function called from index.php to update data using update-form.php, which retrieves the specific record from the MySQL table (this works fine).

    The problem lies in updating data from update-form.php via update.php (which contains the UPDATE query for MySQL). I have tried many things, and at last I figured out that the data is not being transferred from update-form.php to update.php; there is a small problem in the jQuery AJAX function where it is not sending the data to the update.php page. Something is missing in the call to update.php; it never enters that page.

    I am new to programming. I collected this script from many forums and made this one, so I was limited in solving this problem. I came to know that this is a good platform for me, and one where we get help to create new things. Please find the link below to download all the files, which is 35kb (virus-free assurance): download mysmallform files in ZIPped format, including the MySQL query.
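
    Without the actual files it is hard to say exactly where the data is lost, but a minimal sketch of the AJAX call update-form.php usually needs is below (the form id, the list container id and the response handling are assumptions, not taken from the ZIP): serialize the form, post it to update.php, and only refresh the list when the request comes back:

        $('#updateForm').submit(function () {
            $.post('update.php', $(this).serialize(), function (response) {
                // refresh the record list after a successful update
                $('#recordList').load('load-list.php');
            });
            return false;   // stop the normal, page-refreshing form submit
        });

    On the PHP side, update.php then reads the same field names out of $_POST that update-form.php put into the form.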

    Read the article

  • Access 2007 not allowing user to delete record in subform

    - by Todd McDermid
    Good day... The root of my issue is that there's no context menu allowing the user to delete a row from a form. The "Delete" button on the ribbon is also disabled. In Access 2003 this function was apparently available, but since our recent "upgrade" to 2007 (the file is still in MDB format) it's no longer there. Please keep in mind I'm not an Access dev, nor did I create this app - I inherited support for it. ;)

    Now for the details, and what I've tried. The form in question is a subform on a larger form. I've tried turning "AllowDeletes" on on both forms. I've checked the toolbar and ribbon properties on the forms to see if they loaded some custom stuff, but no. I've tried changing "record locks" to "on edit", no joy. I examined the query to see if it was "too complicated" to permit a delete - as far as I can tell, it's a very simple two (linked) table join. Compared to the form in this app that does permit row deletes, that one has a much more complicated query (multi-join, built on queries).

    Is there a resource that describes the required conditions for allowing deletes? Thanks in advance...

    Read the article

  • Heterogeneous queries require the ANSI_NULLS

    - by Dezigo
    I wrote a trigger:

        USE [TEST]
        GO
        /****** Object:  Trigger [dbo].[TR_POSTGRESQL_UPDATE_YC]    Script Date: 05/26/2010 08:54:03 ******/
        SET ANSI_NULLS ON
        GO
        SET QUOTED_IDENTIFIER ON
        GO
        ALTER TRIGGER [dbo].[TR_POSTGRESQL_UPDATE_YC] ON [dbo].[BCT_CNTR_EVENTS]
        FOR INSERT
        AS
        BEGIN
            DECLARE @MOVE_TIME varchar(14);
            DECLARE @MOVE_TIME_FORMATED varchar(20);
            DECLARE @RELEASE_NOTE varchar(32);
            DECLARE @CMR_NUMBER varchar(15);
            DECLARE @MOVE_TYPE varchar(2);

            SELECT @MOVE_TIME = inserted.move_time
                  ,@MOVE_TYPE = inserted.move_type
                  ,@RELEASE_NOTE = inserted.release_note
                  ,@CMR_NUMBER = inserted.cmr_number
            FROM inserted

            IF(@MOVE_TYPE = 'YC')
            BEGIN
                SET @MOVE_TIME_FORMATED = SUBSTRING(@MOVE_TIME,1,4) + '-' + SUBSTRING(@MOVE_TIME,5,2) + '-' + SUBSTRING(@MOVE_TIME,7,2) + ' 00:00:00'

                --UPDATE OpenQuery(POSTGRESQL_SERV,'SELECT visit_cmr,visit_timestamp,visit_pin FROM VISIT')
                --  SET visit_cmr = @RELEASE_NOTE
                --  WHERE visit_timestamp = @MOVE_TIME_FORMATED
                --    AND visit_pin = right(@CMR_NUMBER,5)
                --    AND visit_cmr IS NULL
            END

            SET NOCOUNT ON;
        END

    When I insert a row, I get the error:

        Heterogeneous queries require the ANSI_NULLS and ANSI_WARNINGS options to be set for the connection. This ensures consistent query semantics. Enable these options and then reissue your query.

    (The trigger targets a linked PostgreSQL server.) I have of course SET ANSI_WARNINGS ON, but it does not work for me. I have restarted the server; it still does not work.
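
    For reference, a sketch of where each setting has to be in effect (not a guaranteed fix for this particular trigger): ANSI_NULLS and QUOTED_IDENTIFIER are stamped onto the trigger when it is created or altered, which the script above already does, but ANSI_WARNINGS is a session-level option, so the connection that performs the INSERT also needs it ON for the linked-server (OPENQUERY) branch to run:

        -- run from the session (or connection string / driver settings) that does the insert
        SET ANSI_WARNINGS ON;
        SET ANSI_NULLS ON;

        -- illustrative insert only: the column list comes from the fields the trigger
        -- reads from "inserted", the values are made up
        INSERT INTO dbo.BCT_CNTR_EVENTS (move_time, move_type, release_note, cmr_number)
        VALUES ('20100526000000', 'YC', 'RN-0001', 'CMR12345');

    Older client libraries (for example DB-Library) leave these options off by default, which is why the same statement can work from SSMS and fail from an application.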

    Read the article
