Search Results

Search found 10602 results on 425 pages for 'media queries'.


  • Suggestion on UPnP presentation

    - by Microkernel
    Hi all, I am working on an embedded device (fairly high-end in terms of system resources, but still an embedded one) which has a lot of media content on it. I am trying to make it UPnP compliant, and I want to be able to control this device using a UPnP compliant control point / companion device like an iPad. The first step towards this is to be able to present the playlist content to the user. We thought of using HTML5 as the format. But as I am a noob in web technologies, I am not sure what's the best way to produce and present rich, dynamic web pages. The content that's presented is a video/audio listing that the device can play, and I want this listing to be generated from the user's input criteria. So, what would be the best way to generate these dynamic pages, rich and rendered as HTML5? (I looked at XML & XSLT, but there seem to be some limitations in how well one can use XSLT, from some reviews I saw.) Thanks, Microkernel PS: This may be silly or very basic, as I am an embedded systems developer and not even a noob in web technologies...

    Read the article

  • GROUP BY ID range?

    - by d0ugal
    Given a data set like this:

        +-----+---------------------+--------+
        | id  | date                | result |
        +-----+---------------------+--------+
        | 121 | 2009-07-11 13:23:24 |     -1 |
        | 122 | 2009-07-11 13:23:24 |     -1 |
        | 123 | 2009-07-11 13:23:24 |     -1 |
        | 124 | 2009-07-11 13:23:24 |     -1 |
        | 125 | 2009-07-11 13:23:24 |     -1 |
        | 126 | 2009-07-11 13:23:24 |     -1 |
        | 127 | 2009-07-11 13:23:24 |     -1 |
        | 128 | 2009-07-11 13:23:24 |     -1 |
        | 129 | 2009-07-11 13:23:24 |     -1 |
        | 130 | 2009-07-11 13:23:24 |     -1 |
        | 131 | 2009-07-11 13:23:24 |     -1 |
        | 132 | 2009-07-11 13:23:24 |     -1 |
        | 133 | 2009-07-11 13:23:24 |     -1 |
        | 134 | 2009-07-11 13:23:24 |     -1 |
        | 135 | 2009-07-11 13:23:24 |     -1 |
        | 136 | 2009-07-11 13:23:24 |     -1 |
        | 137 | 2009-07-11 13:23:24 |     -1 |
        | 138 | 2009-07-11 13:23:24 |      1 |
        | 139 | 2009-07-11 13:23:24 |      0 |
        | 140 | 2009-07-11 13:23:24 |     -1 |
        +-----+---------------------+--------+

    how would I go about grouping the results by day, 5 records at a time? The above is part of the live data; there are over 100,000 result rows in the table and it's growing. Basically I want to measure the change over time, so I want to take a SUM of the result every X records. In the real data I'll be doing it every 100 or 1,000, but for the data above perhaps every 5. If I could sort it by date I would do something like this:

        SELECT DATE_FORMAT(date, '%h%i') ym,
               COUNT(result) 'Total Games',
               SUM(result) AS 'Score'
        FROM nn_log
        GROUP BY ym;

    I can't figure out a way of doing something similar with numbers. The order is sorted by the date, but I hope to split the data up every X results. It's safe to assume there are no blank rows. Doing it above with the data you could do multiple selects like:

        SELECT SUM(result) FROM table LIMIT 0,5;
        SELECT SUM(result) FROM table LIMIT 5,5;
        SELECT SUM(result) FROM table LIMIT 10,5;

    That's obviously not a very good way to scale up to a bigger problem. I could just write a loop, but I'd like to reduce the number of queries.
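
    One possible approach, as a sketch only (not tested against the live table): assign a running row number with a user variable inside a derived table, then group on integer division. The @row variable and the bucket size of 5 are illustrative assumptions; on MySQL 8+ ROW_NUMBER() would be cleaner.

        -- Sketch: SUM(result) for every block of 5 rows, ordered by id.
        -- Adjust the divisor (5) to 100 or 1000 for the real data.
        SET @row := -1;
        SELECT rn DIV 5      AS block,
               COUNT(result) AS total_games,
               SUM(result)   AS score
        FROM (
            SELECT (@row := @row + 1) AS rn, result
            FROM nn_log
            ORDER BY id
        ) AS numbered
        GROUP BY block;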

    Read the article

  • Using memcache together with conventional cache

    - by Industrial
    Hi! Here's the deal. We would have taken the completely static HTML road to solve performance issues, but since the site will be partially dynamic, this won't work out for us. What we have thought of instead is using memcache + eAccelerator to speed up PHP and take care of caching for the most used data. Here are the two approaches we have thought of right now:

        1. Using memcache on all major queries and leaving it alone to do what it does best.
        2. Using memcache for the most commonly retrieved data, and combining it with a standard hard-drive-stored cache for further usage.

    The major advantage of only using memcache is of course the performance, but as the number of users increases, the memory usage gets heavy. Combining the two sounds like a more natural approach to us, even though there is a theoretical compromise in performance. Memcached appears to have some replication features available as well, which may come in handy when it's time to increase the nodes. Which approach should we use? Is it stupid to compromise and combine the two methods? Or should we focus on utilizing memcache alone and upgrade the memory as the load increases with the number of users? Thanks a lot!

    Read the article

  • MySQL - SELECT ALL FROM TABLE if...

    - by hornetbzz
    Hello. I have a (nice) MySQL table built like this:

        Field       | Data
        ------------+------------------------------------------------------
        id (pk)     | 1      2      3       4      5      6
        master_id   | 1000   1000   1000    2000   2000   2000   ...
        master_name | home   home   home    shop   shop   shop   ...
        type_data   | value  common client  value  common client ...
        param_a     | foo_a  1      0       bar_a  0      1      ...
        param_b     | foo_b  1      0       bar_b  1      0      ...
        param_c     | foo_c  0      1       bar_c  0      1      ...
        ...         | ...    ...    ...     ...    ...    ...    ...

    All of this data is embedded in a single table. Each master's data is spread across a set of three records (one holding the values, one flagging which parameters are common values, and one flagging which are client values). It's not the best schema I've got, but many other scripts depend on this structure. I'd need something like this: SELECT the parameter names (e.g. param_a, param_b...) and their values (e.g. foo_a, foo_b...) WHERE master_id = ? AND type_data = (common or client), i.e. for the parameters whose flag is 1 in that record, in order to get a parameters hash like:

        param_a => foo_a
        param_b => foo_b
        param_c => foo_c
        ...

    I could not succeed in self-joining the table so far, but I guess it should be feasible. (I'd like to avoid doing several queries.) Thx in advance
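
    A hedged sketch of one self-join, under my reading of the table above (the table name is a placeholder): join the 'value' record for a master to its 'common' (or 'client') flag record, and use CASE to keep only the flagged parameters.

        -- Sketch: v holds the values record, f the matching flags record.
        SELECT CASE WHEN f.param_a = 1 THEN v.param_a END AS param_a,
               CASE WHEN f.param_b = 1 THEN v.param_b END AS param_b,
               CASE WHEN f.param_c = 1 THEN v.param_c END AS param_c
        FROM mytable v
        JOIN mytable f
          ON f.master_id = v.master_id
         AND f.type_data = 'common'   -- or 'client'
        WHERE v.master_id = 1000
          AND v.type_data = 'value';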

    Read the article

  • Can anyone share a code snippet to Update Google Documents

    - by Sana
    Hi, I am relentlessly trying to update an existing Google doc via the Google Data protocol API, but the contents do not get updated, even though the PUT runs perfectly fine with a return response code of 200. Here is the code that I am using:

        try {
            HttpRequest requestPost = transport.buildPutRequest();
            requestPost.url = DocsUrl.forUploadingFile(editLink);
            ((GoogleHeaders) requestPost.headers).setSlugFromFileName("books1.xml");
            InputStreamContent content = new InputStreamContent();
            File file = new File("//sdcard/books.xml");
            content.setFileInput(file);
            content.type = "text/plain";
            content.length = file.length();
            System.out.println("Length of the file = " + content.length);
            requestPost.content = content;
            HttpResponse responseUpload = requestPost.execute();
            System.out.println("Uploading code = " + responseUpload.statusCode);
        } catch (FileNotFoundException e) {
            e.printStackTrace();
        } catch (ClientProtocolException e) {
            System.out.println("Client Protocol Exception");
        } catch (IOException e) {
            handleException(e);
        }

    where editLink is the edit-media link returned by the Google Docs feed.

    Read the article

  • Advanced: Parsing XML into another XML page using only JavaScript or jQuery; can't use PHP, Java or MySQL

    - by UrBestFriend
    Current site: http://cardwall.tk/ Example of intended outcome: http://www.shockwave.com/downloadWall.jsp I have an embedded Flash object that uses XML/RSS (Picasa) to feed itself pictures. Now I created my own XML/RSS feed so that I can add additional XML tags and values. Now here's my big problem: enabling search. Since I'm not relying on Picasa's API anymore to return custom RSS/XML for the user's search, how can I create XML from another XML document based on the user's search queries, using only JavaScript and jQuery? Here is the current code:

        <script type="text/javascript">
            var flashvars = {
                feed: "http%3A%2F%2Frssfeed.ucoz.com%2Frssfeed.xml",
                backgroundColor: "#FFFFFF",
                metadataFont: "Arial",
                wmode: "opaque",
                iFrameScrolling: "no",
                numRows: "3"
            };
            var params = {
                allowFullScreen: "true",
                allowscriptaccess: "always",
                wmode: "opaque"
            };
            swfobject.embedSWF("http://apps.cooliris.com/embed/cooliris.swf",
                "gamewall", "810", "410", "9.0.0", "", flashvars, params);
            $(document).ready(function() {
                $("#cooliris input").keydown(function(e) {
                    if (e.keyCode == 13) {
                        $("#cooliris a#searchCooliris").click();
                        return false;
                    }
                });
                doCoolIrisSearch = function() {
                    cooliris.embed.setFeedContents(
                        '** JS STRING OF PARSED RSS/XML based on http%3A%2F%2Frssfeed.ucoz.com%2Frssfeed.xml and USER\'S SEARCH INPUT **'
                    );
                };
            });
        </script>

        <form id="searchForm" name="searchForm" class="shockwave">
            <input type="text" name="coolIrisSearch" id="coolIrisSearch" value="Search..." class="field text short" onfocus="this.value='';" />
            <a id="searchCooliris" href="#" onclick="doCoolIrisSearch();return false;" class="clearLink">Search Cooliris</a>
        </form>
        <div id="gamewall"></div>

    So basically, I want to replace cooliris.embed.setFeedContents's value with a JavaScript string built from the parsed RSS/XML and the user's search input. Any code or ideas would be greatly appreciated.

    Read the article

  • Beginner question: for extracting a large subset of a table from MySQL, how does indexing, order of tab

    - by chongman
    Sorry if this is too simple, but thanks in advance for helping. This is for MySQL but might be relevant for other RDBMSs. tblA has five columns: colA, colB, colC, mydata, A_id. It has about 10^9 records, with 10^3 distinct values for colA, colB, colC. tblB has three columns: colA, colB, B_id. It has about 10^4 records. I want all the records from tblA (except the A_id) that have a match in tblB. In other words, I want to use tblB to describe the subset that I want to extract and then extract those records from tblA. Namely:

        SELECT a.colA, a.colB, a.colC, a.mydata
        FROM tblA AS a
        INNER JOIN tblB AS b
            ON a.colA = b.colA
           AND a.colB = b.colB;

    It's taking a really long time (more than an hour) on a newish computer (4GB, Core2Quad, Ubuntu), and I just want to check my understanding of the following optimization steps. Suppose this is the only query I will ever run on these tables, so ignore the need to run other queries. Now my questions:

    1) What indexes should I create to optimize this query? I think I just need a multi-column index on (colA, colB) for both tables; I don't think I need separate indexes for colA and colB. Another Stack Overflow article (that I can't find) mentioned that adding new indexes is slower when there are existing indexes, so that might be a reason to prefer the single multi-column index.
    2) Is INNER JOIN correct? I just want results where a match is found.
    3) Is it faster if I join tblA to tblB, or the other way around (tblB to tblA)? This previous answer says that the optimizer should take care of that.
    4) Does the order of the part after ON matter? This previous answer says the optimizer also takes care of the execution order.
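
    For question 1, a minimal sketch of the composite indexes under discussion (the index names are assumptions); a multi-column index on the join key lets the optimizer drive an indexed lookup from the smaller table into the larger one.

        -- Sketch: one composite index per table on the join columns.
        CREATE INDEX idx_tblB_colA_colB ON tblB (colA, colB);
        CREATE INDEX idx_tblA_colA_colB ON tblA (colA, colB);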

    Read the article

  • Best way to dynamically get column names from Oracle tables

    - by MNC
    Hi, we are using an extractor application that exports data from the database to CSV files. Based on a condition variable it extracts data from different tables, and for some conditions we have to use UNION ALL, as the data has to be extracted from more than one table; to satisfy the UNION ALL we use NULLs to match the number of columns. Right now all the queries in the system are pre-built based on the condition variable. The problem is that whenever there is a change in the table projection (i.e. a new column added, an existing column modified, a column dropped) we have to manually change the code in the application. Can you please give some suggestions on how to extract the column names dynamically, so that changes in the table structure do not require changes in the code? My concern is the condition that decides which table to query. The condition is like: if the condition is A, then load from TableX; if the condition is B, then load from TableA and TableY. We must know from which table we need to get data. Once we know the table, it is straightforward to query the column names from the data dictionary. But there is one more wrinkle: some columns need to be excluded, and these columns are different for each table. I am trying to solve the problem of dynamically generating the column list, but my manager told me to work the solution out at the conceptual level rather than just fixing this case. This is a very big system with providers and consumers constantly loading and consuming data, so he wanted a solution that can be general. So what is the best way of storing the condition, table name, and excluded columns? One way is storing them in the database. Are there any other ways, and if so, which is best? I have to give at least a couple of ideas before finalizing. Thanks,
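
    A sketch of the data-dictionary half, assuming the exclusions live in a hypothetical config table (extract_excluded_cols is an invented name; TABLEX comes from the question):

        -- Sketch: list a table's columns, minus its configured exclusions.
        SELECT column_name
        FROM   all_tab_columns
        WHERE  table_name = 'TABLEX'
        AND    column_name NOT IN (SELECT column_name
                                   FROM   extract_excluded_cols
                                   WHERE  table_name = 'TABLEX')
        ORDER  BY column_id;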

    Read the article

  • Online file storage similar to Amazon S3

    - by Joel G
    I am looking to code a file storage application in Perl similar to Amazon S3. I already have an Amazon S3 clone that I found online called ParkPlace, but it's in Ruby, it's old, and it isn't built for high loads. I am not really sure what modules and programs I should use, so I'd like some help picking them out. My requirements are listed below (yes, I know there are lots, but I could start simple and then add more once I get it going):

        - Easy API implementation for client-side apps (maybe RESTful, but with extras like mkdir and cp?)
        - Centralized database server for the USERDB (maybe PostgreSQL?)
        - Logging of all connections, bandwidth used - well, pretty much everything - to a centralized server (maybe PostgreSQL again?)
        - Easy server-side configuration (config file(s) stored on the servers)
        - Web-based control panel for admin(s) and user(s) to show logs (could work just by running queries against the databases)
        - Fast
        - High uptime
        - Low memory usage
        - Some sort of load distribution/load balancer (maybe DNS-based, or Pound, or Perlbal, or something else?)
        - Maybe a cache of some sort (memcached or Perlbal or something else?)

    Thanks in advance

    Read the article

  • Help with accessing a pre-existing window AFTER opener is refreshed!

    - by Wilhelm Murdoch
    Alright, I'm at my wit's end on this issue. First, backstory: I'm working on a video management system where we're allowing users, when adding new content, to upload and, optionally, transcode a media file. We're using a Java applet for the browser-based FTP client. What I want to do is allow a user to initiate an upload and then hand the FTP connection instance to a popup window. This window will act as a job queue for the FTP transfer process, allowing users to move about the main interface without having to stay on the original page until an individual file transfer is complete. For the most part I have all of this working, but here's the problem: if the window is closed, all connections are dropped and the upload process for all queued files is canceled. So, if Window One opens the Popup Window, adds stuff to the queue, then refreshes the screen or moves to a different page, how will I access the Popup Window? The popup window and its contents must remain persistent while the user navigates through the original window, and the original window must be able to access the popup to add new jobs to the queue. The popup window itself is independent of the opening window, so communication only happens in one direction: parent -> popup, not parent <- popup. Window.open(null, 'WINDOW_NAME'); will not work in this case - I need to check whether a window exists BEFORE using window.open. Help!?!?

    Read the article

  • Alternatives to the Entity Framework for Serving/Consuming an OData Interface

    - by Egahn
    I'm researching how to set up an OData interface to our database. I would like to be able to pull/query data from our DB into Excel, as a start. Eventually I would like to have Excel run queries and pull data over HTTP from a remote client, including authentication, etc. I've set up a working (rickety) prototype so far, using the ADO.NET Entity Data Model wizard in Visual Studio, and VSTO to create a test Excel worksheet with a button to pull from that ADO.NET interface. This works OK so far, and I can query the DB using Linq through the entities/objects that are created by the ADO.NET EDM wizard. However, I have started to run into some problems with this approach. I've been finding the Entity Framework difficult to work with (and in fact, also difficult to research solutions to, as there's a lot of chaff out there regarding it and older versions of it). An example of this is my being unable to figure out how to set the SQL command timeout (as opposed to the HTTP request timeout) on the DataServiceContext object that the wizard generates for my schema, but that's not the point of my question. The real question I have is, if I want to use OData as my interface standard, am I stuck with the Entity Framework? Are there any other solutions out there (preferably open source) which can set up, serve and consume an OData interface, and are easier to work with and less bloated than the Entity Framework? I have seen mention of NHibernate as an alternative, but most of the comparison threads I've seen are a few years old. Are there any other alternatives out there now? Thanks very much!

    Read the article

  • JQuery tablesorter - Second click on column header doesn't resort

    - by Jonathan
    I'm using tablesorter on a table I added to a view in Django's admin (although I'm not sure this is relevant). I'm extending the HTML's header:

        {% block extrahead %}
        <script type="text/javascript" src="http://code.jquery.com/jquery-1.4.2.js"></script>
        <script type="text/javascript" src="http://mysite.com/media/tablesorter/jquery.tablesorter.js"></script>
        <script type="text/javascript">
            $(document).ready(function() {
                $("#myTable").tablesorter();
            });
        </script>
        {% endblock %}

    When I click on a column header, it sorts the table using this column in descending order - that's OK. When I click the same column header a second time, it does not reorder to ascending order. What's wrong with it? The table's HTML looks like:

        <table id="myTable" border="1">
            <thead>
                <tr>
                    <th>column_name_1</th>
                    <th>column_name_2</th>
                    <th>column_name_3</th>
                </tr>
            </thead>
            <tbody>
                {% for item in extra.items %}
                <tr>
                    <td>{{ item.0|safe }} </td>
                    <td>{{ item.1|safe }} </td>
                    <td>{{ item.2|safe }} </td>
                </tr>
                {% endfor %}
            </tbody>
        </table>

    Read the article

  • Good data structure for efficient insert/querying on arbitrary properties

    - by Juliet
    I'm working on a project where arrays are the default data structure for everything, and every query is a linear search in the form of:

        // Need a customer with a particular name?
        customer.Find(x => x.Name == name)

        // Need a customer with a particular unique id?
        customer.Find(x => x.Id == id)

        // Need a customer of a particular type and age?
        customer.Find(x => x is PreferredCustomer && x.Age >= age)

        // Need a customer of a particular name and age?
        customer.Find(x => x.Name == name && x.Age == age)

    In almost all instances, the criteria for lookups are well-defined. For example, we only search for customers by one or more of the properties Id, Type, Name, or Age; we rarely search by anything else. Is there a good data structure to support arbitrary queries of these types with lookup better than O(n)? Any out-of-the-box implementations for .NET?

    Read the article

  • Indexing and Searching Over Word Level Annotation Layers in Lucene

    - by dmcer
    I have a data set with multiple layers of annotation over the underlying text, such as part-of-speech tags, chunks from a shallow parser, named entities, and others from various natural language processing (NLP) tools. For a sentence like "The man went to the store", the annotations might look like:

        Word   POS  Chunk  NER
        ====   ===  =====  ========
        The    DT   NP     Person
        man    NN   NP     Person
        went   VBD  VP     -
        to     TO   PP     -
        the    DT   NP     Location
        store  NN   NP     Location

    I'd like to index a bunch of documents with annotations like these using Lucene and then perform searches across the different layers. An example of a simple query would be to retrieve all documents where Washington is tagged as a person. While I'm not absolutely committed to the notation, syntactically end-users might enter the query as follows:

        Query: Word=Washington, NER=Person

    I'd also like to do more complex queries involving the sequential order of annotations across different layers, e.g. find all the documents where there's a word tagged person, followed by the words "arrived at", followed by a word tagged location. Such a query might look like:

        Query: "NER=Person Word=arrived Word=at NER=Location"

    What's a good way to go about approaching this with Lucene? Is there any way to index and search over document fields that contain structured tokens?

    Read the article

  • Transitioning from Domain Authentication to SQL Server Authentication

    - by Albert Perrien
    Greetings all, I've run into a problem that has me stumped. I've put together a database in SQL Server Express, and I'm having a strange permissions problem. The database is on my development machine with a domain user: DOMAIN\albertp. My development database server is set for "SQL Server and Windows Authentication" mode. I can edit and query my database without any problems when I log in using Windows Authentication. However, when I log in as any user that uses SQL Server authentication (including sa), I get this message when I run queries against my database:

        SELECT * FROM [Testing].[dbo].[AuditingReport]

        Msg 18456, Level 14, State 1, Line 1
        Login failed for user 'auditor'.

    I'm logged into the server from SQL Server Management Studio as 'auditor', and I don't see anything in the error log about the login failure. I've already run:

        USE Testing;
        GRANT ALL TO auditor;
        GO

    And I still get the same error. What permissions do I have to set for the database to be usable by others outside of my personal domain login? Or am I looking at the wrong problem? My ultimate goal is to have the database be accessible from a set of PHP pages, using either a common login (hence 'auditor') or a login specific to each individual user.
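
    One thing worth checking, as a hedged sketch: a SQL Server login also needs a user mapped inside the target database before database-level GRANTs there take effect (the role choice below is an assumption; adjust to what 'auditor' actually needs).

        -- Sketch: map the login to a database user and give it read access.
        USE Testing;
        CREATE USER auditor FOR LOGIN auditor;
        EXEC sp_addrolemember 'db_datareader', 'auditor';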

    Read the article

  • SQL Server problems reading columns with a foreign key

    - by illdev
    I have a weird situation where simple queries seem to never finish. For instance:

        SELECT top 100 ArticleID FROM Article WHERE ProductGroupID=379114   -- returns immediately
        SELECT top 1000 ArticleID FROM Article WHERE ProductGroupID=379114  -- never returns
        SELECT ArticleID FROM Article WHERE ProductGroupID=379114           -- never returns
        SELECT top 1000 ArticleID FROM Article                              -- returns immediately

    By 'returning' I mean 'in Query Analyzer the green check mark appears and it says "Query executed successfully"'. I sometimes get the rows painted to the grid in QA, but still the query goes on waiting for my client to time out - 'sometimes' meaning this query shows the same behavior:

        SELECT ProductGroupID AS Product23_1_, ArticleID AS ArticleID1_, ArticleID AS ArticleID18_0_,
               Inventory_Name AS Inventory3_18_0_, Inventory_UnitOfMeasure AS Inventory4_18_0_,
               BusinessKey AS Business5_18_0_, Name AS Name18_0_, ServesPeople AS ServesPe7_18_0_,
               InStock AS InStock18_0_, Description AS Descript9_18_0_, Description2 AS Descrip10_18_0_,
               TechnicalData AS Technic11_18_0_, IsDiscontinued AS IsDisco12_18_0_, Release AS Release18_0_,
               Classifications AS Classif14_18_0_, DistributorName AS Distrib15_18_0_,
               DistributorProductCode AS Distrib16_18_0_, Options AS Options18_0_, IsPromoted AS IsPromoted18_0_,
               IsBulkyFreight AS IsBulky19_18_0_, IsBackOrderOnly AS IsBackO20_18_0_, Price AS Price18_0_,
               Weight AS Weight18_0_, ProductGroupID AS Product23_18_0_, ConversationID AS Convers24_18_0_,
               DistributorID AS Distrib25_18_0_, type AS Type18_0_
        FROM Article AS articles0_
        WHERE (IsDiscontinued = '0') AND (ProductGroupID = 379121)

    I have no idea what is going on. Probably SELECT is broken ;) I have a foreign key on ProductGroups:

        ALTER TABLE [dbo].[Article] WITH CHECK
            ADD CONSTRAINT [FK_ProductGroup_Articles] FOREIGN KEY([ProductGroupID])
            REFERENCES [dbo].[ProductGroup] ([ProductGroupID])
        GO
        ALTER TABLE [dbo].[Article] CHECK CONSTRAINT [FK_ProductGroup_Articles]

    There are some 6000 rows, and IsDiscontinued is a bit, not null, but leaving this condition out does not change the outcome. Can anyone tell me how to handle such a situation? Additional info: this does not seem to be restricted to this foreign key, but affects all/some queries referencing this entity.
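
    A sketch of two checks that fit these symptoms (the index name is an assumption): first see whether the session is blocked by another transaction rather than genuinely slow, then make sure the filter columns are indexed.

        -- Sketch 1: while the query hangs, look at the BlkBy column.
        EXEC sp_who2;

        -- Sketch 2: an index matching the filter, if blocking is ruled out.
        CREATE INDEX IX_Article_ProductGroup
            ON dbo.Article (ProductGroupID, IsDiscontinued);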

    Read the article

  • Unsupported sampling rate in Flex/ActionScript

    - by Rajeev
    In ActionScript I get:

        Loading configuration file /opt/flex/frameworks/flex-config.xml
        t3.mxml(10): Error: unsupported sampling rate (24000Hz) [Embed(source="music.mp3")]
        t3.mxml(10): Error: Unable to transcode music.mp3. [Embed(source="music.mp3")]

    The code is:

        <?xml version="1.0"?>
        <!-- embed/EmbedSound.mxml -->
        <mx:Application xmlns:mx="http://www.adobe.com/2006/mxml">
            <mx:Script>
                <![CDATA[
                    import flash.media.*;

                    [Embed(source="sample.mp3")]
                    [Bindable]
                    public var sndCls:Class;

                    public var snd:Sound = new sndCls() as Sound;
                    public var sndChannel:SoundChannel;

                    public function playSound():void {
                        sndChannel = snd.play();
                    }

                    public function stopSound():void {
                        sndChannel.stop();
                    }
                ]]>
            </mx:Script>
            <mx:HBox>
                <mx:Button label="play" click="playSound();"/>
                <mx:Button label="stop" click="stopSound();"/>
            </mx:HBox>
        </mx:Application>

    Read the article

  • Extract data from PostgreSQL DB without using pg_dump

    - by John Horton
    There is a PostgreSQL database on which I only have limited access (e.g., I can't use pg_dump). I am trying to create a local "mirror" by exporting certain tables from the database. I do not have the permissions needed to just dump a table as SQL from within psql. Right now, I just have a Python script that iterates through my table_names, selects all fields and then exports them as a CSV:

        for table_name, file_name in zip(table_names, file_names):
            cmd = """echo "\\\copy (select * from %s) to stdout WITH CSV HEADER" | psql -d remote_db | gzip > ./%s/%s.gz""" % (table_name, dir_name, file_name)
            os.system(cmd)

    I would like to not use CSV if possible, as I lose the field types and the encoding can get messed up. First best would probably be some way of getting the generating SQL code for the table using \copy. Next best would be XML, ideally with some way of preserving the field types. If that doesn't work, I think the final option might be two queries - one to get the field data types, the other to get the actual data. Any thoughts or advice would be greatly appreciated - thanks!
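
    For the two-query fallback, a sketch of the type-recovery half via information_schema (the table name is a placeholder); its output could be turned into CREATE TABLE statements for the local mirror before the data is loaded.

        -- Sketch: column names and types for one table, in column order.
        SELECT column_name, data_type
        FROM   information_schema.columns
        WHERE  table_name = 'some_table'
        ORDER  BY ordinal_position;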

    Read the article

  • How to store unlimited characters in Oracle 11g?

    - by vicky21
    We have a table in Oracle 11g with a VARCHAR2 column. We use a proprietary programming language where this column is defined as a string. At most we can store 2000 characters (4000 bytes) in this column. Now the requirement is that the column needs to store more than 2000 characters (in fact, unlimited characters). The DBAs don't like the BLOB or LONG datatypes for maintenance reasons. The solution that I can think of is to remove this column from the original table, have a separate table for this column, and then store the content row by row in order to get unlimited characters. This table would be joined with the original table for queries. Is there any better solution to this problem? UPDATE: The proprietary programming language allows variables of type string and blob to be defined; there is no option of CLOB. I understand the responses given, but I cannot take on the DBAs. I understand that deviating from BLOB or LONG will be a developers' nightmare, but I still cannot help it.
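
    A minimal sketch of the side-table idea, assuming the text is split into ordered VARCHAR2 chunks rather than single characters (all names here are illustrative):

        -- Sketch: child table holding long text as ordered chunks;
        -- chunk_no preserves reassembly order.
        CREATE TABLE long_text_chunks (
            parent_id NUMBER NOT NULL,
            chunk_no  NUMBER NOT NULL,
            chunk     VARCHAR2(4000),
            CONSTRAINT pk_long_text_chunks PRIMARY KEY (parent_id, chunk_no)
        );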

    Read the article

  • EF/LINQ: Where() against a property of a subtype

    - by ladenedge
    I have a set of POCOs, all of which implement the following simple interface:

        interface IIdObject
        {
            int Id { get; set; }
        }

    A subset of these POCOs implement this additional interface:

        interface IDeletableObject : IIdObject
        {
            bool IsDeleted { get; set; }
        }

    I have a repository hierarchy that looks something like IRepository<T> <: BasicRepository<T> <: ValidatingRepository<T> (where T is an IIdObject). I'm trying to add a FilteringRepository to the hierarchy such that all of the POCOs that implement IDeletableObject have a Where(p => p.IsDeleted == false) filter applied before any other queries take place. My goal is to avoid duplicating the hierarchy solely for IDeletableObjects. My first attempt looked like this:

        public override IQueryable<T> Query()
        {
            return base.Query().Where(t => ((IDeletableObject)t).IsDeleted == false);
        }

    This works well with LINQ to Objects, but when I switch to an EF backend I get: "LINQ to Entities only supports casting Entity Data Model primitive types." I went on to try some fancier parameterized solutions, but they ultimately failed because I couldn't make T covariant in the following case, for some reason I don't quite understand:

        interface IQueryFilter<out T> // error
        {
            Expression<Func<T, bool>> GetFilter();
        }

    I'd be happy to go into more detail on my more complicated solutions if it would help, but I think I'll stop here for now in the hope that someone might have an idea for me to try. Thanks very much in advance!

    Read the article

  • What are the advantages of a query using derived table(s) over a query not using them?

    - by AspOnMyNet
    I know how derived tables are used, but I still can't really see any real advantages to using them. For example, in the following article http://techahead.wordpress.com/2007/10/01/sql-derived-tables/ the author tried to show the benefits of a query using a derived table over a query without one, with an example where we want to generate a report showing the total number of orders each customer placed in 1996, and we want this result set to include all customers, including those that didn't place any orders that year and those that have never placed any orders at all (he's using the Northwind database). But when I compare the two queries, I fail to see any advantage of the query using a derived table (if nothing else, the derived table doesn't appear to simplify the code, at least not in this example). Regular query:

        SELECT C.CustomerID, C.CompanyName, COUNT(O.OrderID) AS TotalOrders
        FROM Customers C
        LEFT OUTER JOIN Orders O
            ON C.CustomerID = O.CustomerID AND YEAR(O.OrderDate) = 1996
        GROUP BY C.CustomerID, C.CompanyName

    Query using a derived table:

        SELECT C.CustomerID, C.CompanyName, COUNT(dOrders.OrderID) AS TotalOrders
        FROM Customers C
        LEFT OUTER JOIN (SELECT * FROM Orders WHERE YEAR(Orders.OrderDate) = 1996) AS dOrders
            ON C.CustomerID = dOrders.CustomerID
        GROUP BY C.CustomerID, C.CompanyName

    Perhaps this just wasn't a good example, so could you show me an example where the benefits of a derived table are more obvious? thanx
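
    One hedged example of where a derived table genuinely earns its keep (Northwind-style names assumed): joining an aggregate back to the detail rows, e.g. fetching each customer's most recent order. Without window functions, the per-customer MAX has to live in a derived table.

        -- Sketch: each customer's latest order row.
        SELECT O.CustomerID, O.OrderID, O.OrderDate
        FROM Orders O
        INNER JOIN (SELECT CustomerID, MAX(OrderDate) AS LastOrderDate
                    FROM Orders
                    GROUP BY CustomerID) AS Latest
            ON O.CustomerID = Latest.CustomerID
           AND O.OrderDate = Latest.LastOrderDate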

    Read the article

  • using STI and ActiveRecordBase<> with full FindAll

    - by oillio
    Is it possible to use generic support with single table inheritance, and still be able to FindAll of the base class? As a bonus question, will I be able to use ActiveRecordLinqBase<T> as well? I do love those queries. More detail: say I have the following classes defined:

        public interface ICompany
        {
            int ID { get; set; }
            string Name { get; set; }
        }

        [ActiveRecord("companies", DiscriminatorColumn="type",
                      DiscriminatorType="String", DiscriminatorValue="NA")]
        public abstract class Company<T> : ActiveRecordBase<T>, ICompany
        {
            [PrimaryKey]
            private int Id { get; set; }

            [Property]
            public String Name { get; set; }
        }

        [ActiveRecord(DiscriminatorValue="firm")]
        public class Firm : Company<Firm>
        {
            [Property]
            public string Description { get; set; }
        }

        [ActiveRecord(DiscriminatorValue="client")]
        public class Client : Company<Client>
        {
            [Property]
            public int ChargeRate { get; set; }
        }

    This works fine for most cases. I can do things like:

        var x = Client.FindAll();

    But sometimes I want all of the companies. If I was not using generics I could do:

        var x = (Company[]) FindAll(Company);
        Client a = (Client)x[0];
        Firm b = (Firm)x[1];

    Is there a way to write a FindAll that returns an array of ICompany's that can then be typecast into their respective types? Something like:

        var x = (ICompany[]) FindAll(Company<ICompany>);
        Client a = (Client)x[0];

    Or maybe I am going about implementing the generic support all wrong?

    Read the article

  • How to make a div element editable, like a textarea, when I click

    - by zjm1126
    this is my code:

        <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
        <html>
        <head>
            <meta name="viewport" content="width=device-width, user-scalable=no">
        </head>
        <body>
            <style type="text/css" media="screen">
            </style>
            <!--<div id="map_canvas" style="width: 500px; height: 300px;background:blue;"></div>-->
            <div class=b style="width: 200px; height: 200px;background:pink;position:absolute;left:500px;top:100px;"></div>
            <script src="jquery-1.4.2.js" type="text/javascript"></script>
            <script src="jquery-ui-1.8rc3.custom.min.js" type="text/javascript"></script>
            <script type="text/javascript" charset="utf-8">
            </script>
        </body>
        </html>

    thanks

    Read the article

  • No buffer space available (maximum connections reached?) from the Postgres EDB driver

    - by Listening.Platform
    We are facing an exception while connecting to the database through our Java application. The stack trace is as follows:

        com.edb.util.PSQLException: The connection attempt failed.
            at com.edb.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:189)
            at com.edb.core.ConnectionFactory.openConnection(ConnectionFactory.java:64)
            at com.edb.jdbc2.AbstractJdbc2Connection.<init>(AbstractJdbc2Connection.java:161)
            at com.edb.jdbc3.AbstractJdbc3Connection.<init>(AbstractJdbc3Connection.java:30)
            at com.edb.jdbc3.Jdbc3Connection.<init>(Jdbc3Connection.java:24)
            at com.edb.Driver.makeConnection(Driver.java:391)
            at com.edb.Driver.connect(Driver.java:266)
            at java.sql.DriverManager.getConnection(Unknown Source)
            at java.sql.DriverManager.getConnection(Unknown Source)
            ... 12 more
        Caused by: java.net.SocketException: No buffer space available (maximum connections reached?): connect
            at java.net.PlainSocketImpl.socketConnect(Native Method)
            at java.net.PlainSocketImpl.doConnect(Unknown Source)
            at java.net.PlainSocketImpl.connectToAddress(Unknown Source)
            at java.net.PlainSocketImpl.connect(Unknown Source)
            at java.net.SocksSocketImpl.connect(Unknown Source)
            at java.net.Socket.connect(Unknown Source)
            at java.net.Socket.connect(Unknown Source)
            at java.net.Socket.<init>(Unknown Source)
            at java.net.Socket.<init>(Unknown Source)
            at com.edb.core.PGStream.<init>(PGStream.java:70)
            at com.edb.core.v3.ConnectionFactoryImpl.openConnectionImpl(ConnectionFactoryImpl.java:115)
            ... 20 more

    When the error occurred we were not able to connect to the internet or the DB, and had to reboot the system. The error occurred again after 3 days at the same place in the code, i.e. while connecting to the DB. We checked TCP connections using netstat, but there were not many TCP connections, i.e. it had not reached the max limit. Our application has multiple long-running Java processes that pool the DB connections (not more than 60) and keep them alive for firing the next query (as it has to poll the DB every 2 seconds). Some of the queries in our application join large tables (10 million records) to get related data. We are using the following systems and applications: Windows 2003 Server SP2, Java 1.6, Postgres Plus Advanced Server 8.4, and the edb-jdbc14.jar driver for connecting to the DB from Java. We have used the default configuration of the Postgres DB except for increasing the connection limit from 100 to 120. Has anybody encountered the same error with the Postgres EDB driver? Can anybody help us find the solution?

    Read the article

  • Ways to update a dependent table in the same MySQL transaction?

    - by codie
    I need to update two tables inside a single transaction. The individual queries look something like this:

        1. INSERT INTO t1 (col1, col2) VALUES (val1, val2)
               ON DUPLICATE KEY UPDATE col2 = val2;

    If the above query causes an insert, then I need to run the following statement on the second table:

        2. INSERT INTO t2 (col1, col2) VALUES (val1, val2)
               ON DUPLICATE KEY UPDATE col2 = col2 + val2;

    otherwise,

        3. UPDATE t2 SET col2 = col2 - old_val2 + val2 WHERE col1 = val1;
           -- old_val2 is the value of t1.col2 before it was updated

    Right now I run a SELECT on t1 first, to determine whether statement 1 will cause an insert or an update on t1. Then I run statement 1 and either of 2 and 3 inside a transaction. What are the ways in which I can do all of this inside one transaction? The approach I was thinking of is the following:

        UPDATE t2, t1 SET t2.col2 = t2.col2 - t1.col2
            WHERE t1.col1 = t2.col2 AND t1.col1 = val1;
        INSERT INTO t1 (col1, col2) VALUES (val1, val2)
            ON DUPLICATE KEY UPDATE col2 = val2;
        INSERT INTO t2, t1 (t2.col1, t2.col2) VALUES (t1.col1, t1.col2)
            ON DUPLICATE KEY UPDATE t2.col2 = t2.col2 + t1.col2
            WHERE t1.col1 = t2.col2 AND t1.col1 = val1;

    Unfortunately, there's no multi-table INSERT ... ON DUPLICATE KEY UPDATE in MySQL 5.0. What else could I do?
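
    A sketch of one way to keep everything in a single transaction, assuming InnoDB tables and that the application can branch between statements (val1/val2 are placeholders): lock the t1 row with SELECT ... FOR UPDATE so old_val2 cannot change underneath, then run statement 1 and, depending on whether a row existed, statement 2 or 3.

        -- Sketch: read-then-write kept atomic with a row lock.
        START TRANSACTION;
        SELECT col2 FROM t1 WHERE col1 = val1 FOR UPDATE;
            -- captures old_val2; no row means statement 1 will insert
        INSERT INTO t1 (col1, col2) VALUES (val1, val2)
            ON DUPLICATE KEY UPDATE col2 = val2;
        -- application branches here: no prior row -> statement 2, else statement 3
        COMMIT;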

    Read the article
