Search Results

Search found 8161 results on 327 pages for 'django queries'.

Page 312/327

  • Dynamically select field names in a query with Spring JDBCTemplate

    - by Francesco
    Hi, I have a problem with parameter replacement in Spring JdbcTemplate. I have this query : <bean id="fixQuery" class="java.lang.String"> <constructor-arg type="java.lang.String" value="select fa.id, fi.? from fix_ambulation fa left join fix_i18n fi on fa.translation_id = fi.id order by name" /> </bean> And this method : public List<FixAmbulation> readFixAmbulation(String locale) throws Exception { List<FixAmbulation> ambulations = this.getJdbcTemplate().query( fixQuery, new Object[] {locale.toLowerCase()}, ParameterizedBeanPropertyRowMapper .newInstance(FixAmbulation.class)); return ambulations; } And I'd like to have the ? filled with the string representing the locale the user is using. So if the user is Brazilian I'd send him the column pt_br from the table fix_i18n, otherwise if he's American I'd send him the column en_us. What I get from this method is a PostgreSQL exception org.postgresql.util.PSQLException: ERROR: syntax error at or near "$1" If I replace fi.? with just ? (the column name of the locale is unique, so if I run this query in the database it works just fine) what I get is that every object returned from the method has the locale string in its name field. I.e. in the name field I have "en_us". The only way I found to make it work was to change the method into : public List<FixAmbulation> readFixAmbulation(String locale) throws Exception { String query = "select fa.id, fi." + locale.toLowerCase() + " as name " + fixQuery; this.log.info("QUERY : " + query); List<FixAmbulation> ambulations = this.getJdbcTemplate().query( query, ParameterizedBeanPropertyRowMapper .newInstance(FixAmbulation.class)); return ambulations; } and setting fixQuery to : <bean id="fixQuery" class="java.lang.String"> <constructor-arg type="java.lang.String" value=" from telemedicina.fix_ambulation fa left join telemedicina.fix_i18n fi on fa.translation_id = fi.id order by name" /> </bean> My DAO extends Spring JdbcDaoSupport and works just fine for all other queries. What am I doing wrong?

    Read the article

  • Summer Programming Plans

    - by Gabe
    I've wanted to start "hacking" for many months now. But I put it off in favor of school and other things. Now, though, I'm free for the summer and want to learn as much as I can. I have a rough idea of what I want to try my hand at, but need some guidance as to what specifically - and how - I should learn. This is my plan so far: 1) Get good at programming in general. I plan to read up on how to think/work like a programmer. I'm waiting for The Pragmatic Programmer to arrive, which will be the first book I read. Q: What other books/ebooks should I look at? What more can I do here? 2) Learn/Improve at HTML/CSS. My first project will be to make a personal website/blog for myself using HTML and CSS. Then I hope to write/design articles like Dustin Curtis. After I finish this (and learn a programming language) I'll try to create a user-based, user-focused website. Q: It's my understanding that just trying to design/manage websites is a good way to learn/improve at HTML/CSS. Is that correct? 3) Try music development. This might be a sort of stretch for stackoverflow, but I'm interested in mixing/making techno songs. (Think Justice, or Daft Punk, or MSTRKRFT.) Q: I have a Mac. Any ideas on how I could start/learn music making? Any programs I should download, for instance? 4) My main goal: Learning a web development language/framework. I'm a year into learning/using C++. But what I really want to do is develop websites and web apps. I've searched online, and there seems to be great debate over which language/framework to learn first (and which is best). I think I've narrowed it down to three: Ruby (Rails), Python (Django), and PHP (?). Q #1: Which should I learn and use first? (Reasons?) Q #2: One reason I was leaning towards PHP is that I'm taking a PHP development course next semester. Learning it now would make that course easy. If PHP was not the answer to Q #1, is it worth learning both? Or, would it be better to just focus on PHP for this summer and next semester, and then transition thereafter to a better language? 5) iPhone/iPad Programming (Maybe). I have a number of simple, useful app ideas that I'd like to eventually get to. I just bought a Mac, as well as a few app development books. Q #1: Am I spreading myself thin trying to learn all of the above, and Objective-C? Q #2: How much harder/easier is Objective-C compared to the above languages? Also, how easy is it to learn Obj-C after learning a web development language (and some C++)? Q #3: Yes or no? Should I go for it, or just keep with #1-4 for now? Also: If you have any tips on how I should learn (or how you learned to hack), I'm all ears. I'd be especially interested in how you planned out learning: did you just hack whenever you felt like it, or did you "study" the language a few hours a day, or something else? Thanks so much, guys.

    Read the article

  • How does mysql define DISTINCT() in reference documentation

    - by goran
    EDIT: This question is about finding definitive reference to MySQL syntax on SELECT modifying keywords and functions. /EDIT AFAIK SQL defines two uses of DISTINCT keywords - SELECT DISTINCT field... and SELECT COUNT(DISTINCT field) ... However in one of web applications that I administer I've noticed performance issues on queries like SELECT DISTINCT(field1), field2, field3 ... DISTINCT() on a single column makes no sense and I am almost sure it is interpreted as SELECT DISTINCT field1, field2, field3 ... but how can I prove this? I've searched mysql site for a reference on this particular syntax, but could not find any. Does anyone have a link to definition of DISTINCT() in mysql or knows about other authoritative source on this? Best EDIT After asking the same question on mysql forums I learned that while parsing the SQL mysql does not care about whitespace between functions and column names (but I am still missing a reference). As it seems you can have whitespace between functions and the parenthesis SELECT LEFT (field1,1), field2... and get mysql to understand it as SELECT LEFT(field,1) Similarly SELECT DISTINCT(field1), field2... seems to get decomposed to SELECT DISTINCT (field1), field2... and then DISTINCT is taken not as some undefined (or undocumented) function, but as SELECT modifying keyword and the parenthesis around field1 are evaluated as if they were part of field expression. It would be great if someone would have a pointer to documentation where it is stated that the whitespace between functions and parenthesis is not significant or to provide links to apropriate MySQL forums, mailing lists where I could raise a question to put this into reference. EDIT I have found a reference to server option IGNORE SPACE. It states that "The IGNORE SPACE SQL mode can be used to modify how the parser treats function names that are whitespace-sensitive", later on it states that recent versions of mysql have reduced this number from 200 to 30. One of the remaining 30 is COUNT for example. With IGNORE SPACE enabled both SELECT COUNT(*) FROM mytable; SELECT COUNT (*) FROM mytable; are legal. So if this is an exception, I am left to conclude that normally functions ignore space by default. If functions ignore space by default then if the context is ambiguous, such as for the first function on a first item of the select expression, then they are not distinguishable from keywords and the error can not be thrown and MySQL must accept them as keywords. Still, my conclusions feel like they have lot of assumptions, I would still be grateful and accept any pointers to see where to follow up on this.

    Read the article

  • Tactics for using PHP in a high-load site

    - by Ross
    Before you answer this I have never developed anything popular enough to attain high server loads. Treat me as (sigh) an alien that has just landed on the planet, albeit one that knows PHP and a few optimisation techniques. I'm developing a tool in PHP that could attain quite a lot of users, if it works out right. However while I'm fully capable of developing the program I'm pretty much clueless when it comes to making something that can deal with huge traffic. So here's a few questions on it (feel free to turn this question into a resource thread as well). Databases At the moment I plan to use the MySQLi features in PHP5. However how should I setup the databases in relation to users and content? Do I actually need multiple databases? At the moment everything's jumbled into one database - although I've been considering spreading user data to one, actual content to another and finally core site content (template masters etc.) to another. My reasoning behind this is that sending queries to different databases will ease up the load on them as one database = 3 load sources. Also would this still be effective if they were all on the same server? Caching I have a template system that is used to build the pages and swap out variables. Master templates are stored in the database and each time a template is called it's cached copy (a html document) is called. At the moment I have two types of variable in these templates - a static var and a dynamic var. Static vars are usually things like page names, the name of the site - things that don't change often; dynamic vars are things that change on each page load. My question on this: Say I have comments on different articles. Which is a better solution: store the simple comment template and render comments (from a DB call) each time the page is loaded or store a cached copy of the comments page as a html page - each time a comment is added/edited/deleted the page is recached. Finally Does anyone have any tips/pointers for running a high load site on PHP. I'm pretty sure it's a workable language to use - Facebook and Yahoo! give it great precedence - but are there any experiences I should watch out for? Thanks, Ross

    Read the article

  • While loops within while loops and output php?

    - by NovacTownCode
    I have a while loop to show the replies for a post on my website. The value for parentID used in the query is $post['postID'] which is an array of details for the post being viewed. As seen below it outputs the following (each subject is a link to view the full post) $q = $dbc -> prepare("SELECT * FROM boardposts WHERE parentID = ?"); $q -> execute(array($post['postID'])); while ($postReply = $q -> fetch(PDO::FETCH_ASSOC)) { echo '<p><a href="http://www.example.com/boards?topic=' . $_GET['topic'] . '&amp;view=' . $postReply['postID'] . '">' . $postReply['subject'] . '</a>'; } This currently outputs something along the lines of, Replies To This Message: subject 1 subject 2 subject 3 subject 4 Is there a way in which I can also in the list include replies to the replies, something along the lines of, Replies To This Message: subject 1          subject 1 reply          subject 1 reply                  subject 1 reply reply subject 2 subject 3          subject 3 reply          subject 3 reply                  subject 3 reply reply subject 4          subject 4 reply subject 5 subject 6          subject 6 reply                  subject 4 reply reply I understand all the indenting can be with css, but am stuck as to how to pull the data from the mysql database and in the correct order, I tried while loops within while loops, but that involved queries inside while loops, which is bad! Thanks for your input!

    Read the article

  • How to use PredicateBuilder with nested OR conditionals in Linq

    - by tblank
    I've been very happily using PredicateBuilder but until now have only used it for queries with either concatenated AND statements or concatenated OR statements. Now for the first time I need a pair of OR statements nested along with some AND statements like this: select x from Table1 where a = 1 AND b = 2 AND (z = 1 OR y = 2) Using the documentation from Albahari, I've constructed my expression like this: Expression<Func<TdIncSearchVw, bool>> predicate = PredicateBuilder.True<TdIncSearchVw>(); // for AND Expression<Func<TdIncSearchVw, bool>> innerOrPredicate = PredicateBuilder.False<TdIncSearchVw>(); // for OR innerOrPredicate = innerOrPredicate.Or(i=> i.IncStatusInd.Equals(incStatus)); innerOrPredicate = innerOrPredicate.Or(i=> i.RqmtStatusInd.Equals(incStatus)); predicate = predicate.And(i => i.TmTec.Equals(tecTm)); predicate = predicate.And(i => i.TmsTec.Equals(series)); predicate = predicate.And(i => i.HistoryInd.Equals(historyInd)); predicate.And(innerOrPredicate); var query = repo.GetEnumerable(predicate); This results in SQL that completely ignores the 2 OR phrases. select x from TdIncSearchVw where ((this_."TM_TEC" = :p0 and this_."TMS_TEC" = :p1) and this_."HISTORY_IND" = :p2) If I try using just the OR phrases like: Expression<Func<TdIncSearchVw, bool>> innerOrPredicate = PredicateBuilder.False<TdIncSearchVw>(); // for OR innerOrPredicate = innerOrPredicate.Or(i=> i.IncStatusInd.Equals(incStatus)); innerOrPredicate = innerOrPredicate.Or(i=> i.RqmtStatusInd.Equals(incStatus)); var query = repo.GetEnumerable(innerOrPredicate); I get SQL as expected like: select X from TdIncSearchVw where (IncStatusInd = incStatus OR RqmtStatusInd = incStatus) If I try using just the AND phrases like: predicate = predicate.And(i => i.TmTec.Equals(tecTm)); predicate = predicate.And(i => i.TmsTec.Equals(series)); predicate = predicate.And(i => i.HistoryInd.Equals(historyInd)); var query = repo.GetEnumerable(predicate); I get SQL like: select x from TdIncSearchVw where ((this_."TM_TEC" = :p0 and this_."TMS_TEC" = :p1) and this_."HISTORY_IND" = :p2) which is exactly the same as the first query. It seems like I'm so close it must be something simple that I'm missing. Can anyone see what I'm doing wrong here? Thanks, Terry
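
    A detail worth checking in the snippet above, offered only as a sketch rather than the thread's answer: PredicateBuilder's And/Or extension methods return a new expression instead of mutating the one they are called on, so a bare statement like predicate.And(innerOrPredicate); silently discards its result. A minimal sketch of the combination step, reusing the names from the question (property types assumed to be strings):

        // Sketch: combine the inner OR block with the outer AND chain.
        // Expression<Func<T, bool>> values are immutable, so every And/Or
        // result must be assigned back - including the final combination.
        var inner = PredicateBuilder.False<TdIncSearchVw>();
        inner = inner.Or(i => i.IncStatusInd.Equals(incStatus));
        inner = inner.Or(i => i.RqmtStatusInd.Equals(incStatus));

        var predicate = PredicateBuilder.True<TdIncSearchVw>();
        predicate = predicate.And(i => i.TmTec.Equals(tecTm));
        predicate = predicate.And(i => i.TmsTec.Equals(series));
        predicate = predicate.And(i => i.HistoryInd.Equals(historyInd));
        predicate = predicate.And(inner);   // the assignment is what the original snippet drops

        var query = repo.GetEnumerable(predicate);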

    Read the article

  • How to optimize Core Data query for full text search

    - by dk
    Can I optimize a Core Data query when searching for matching words in a text? (This question also pertains to the wisdom of custom SQL versus Core Data on an iPhone.) I'm working on a new (iPhone) app that is a handheld reference tool for a scientific database. The main interface is a standard searchable table view and I want as-you-type response as the user types new words. Words matches must be prefixes of words in the text. The text is composed of 100,000s of words. In my prototype I coded SQL directly. I created a separate "words" table containing every word in the text fields of the main entity. I indexed words and performed searches along the lines of SELECT id, * FROM textTable JOIN (SELECT DISTINCT textTableId FROM words WHERE word BETWEEN 'foo' AND 'fooz' ) ON id=textTableId LIMIT 50 This runs very fast. Using an IN would probably work just as well, i.e. SELECT * FROM textTable WHERE id IN (SELECT textTableId FROM words WHERE word BETWEEN 'foo' AND 'fooz' ) LIMIT 50 The LIMIT is crucial and allows me to display results quickly. I notify the user that there are too many to display if the limit is reached. This is kludgy. I've spent the last several days pondering the advantages of moving to Core Data, but I worry about the lack of control in the schema, indexing, and querying for an important query. Theoretically an NSPredicate of textField MATCHES '.*\bfoo.*' would just work, but I'm sure it will be slow. This sort of text search seems so common that I wonder what is the usual attack? Would you create a words entity as I did above and use a predicate of "word BEGINSWITH 'foo'"? Will that work as fast as my prototype? Will Core Data automatically create the right indexes? I can't find any explicit means of advising the persistent store about indexes. I see some nice advantages of Core Data in my iPhone app. The faulting and other memory considerations allow for efficient database retrievals for tableview queries without setting arbitrary limits. The object graph management allows me to easily traverse entities without writing lots of SQL. Migration features will be nice in the future. On the other hand, in a limited resource environment (iPhone) I worry that an automatically generated database will be bloated with metadata, unnecessary inverse relationships, inefficient attribute datatypes, etc. Should I dive in or proceed with caution?

    Read the article

  • Linq-to-sql Compiled Query returning object NOT belonging to submitted DataContext

    - by Vladimir Kojic
    Compiled query: public static class Machines { public static readonly Func<OperationalDataContext, short, Machine> QueryMachineById = CompiledQuery.Compile((OperationalDataContext db, short machineID) => db.Machines.Where(m => m.MachineID == machineID).SingleOrDefault() ); public static Machine GetMachineById(IUnitOfWork unitOfWork, short id) { Machine machine; // Old code (working) //var machineRepository = unitOfWork.GetRepository<Machine>(); //machine = machineRepository.Find(m => m.MachineID == id).SingleOrDefault(); // New code (making problems) machine = QueryMachineById(unitOfWork.DataContext, id); return machine; } It looks like compiled query is caching Machine object and returning the same object even if query is called from new DataContext (I’m disposing DataContext in the service but I’m getting Machine from previous DataContext). I use POCOs and XML mapping. Revised: It looks like compiled query is returning result from new data context and it is not using the one that I passed in compiled-query. Therefore I can not reuse returned object and link it to another object obtained from datacontext thru non compiled queries. [TestMethod] public void GetMachinesTest() { // Test Preparation (not important) using (var unitOfWork = IoC.Get<IUnitOfWork>()) { var machineRepository = unitOfWork.GetRepository<Machine>(); // GET ALL List<Machine> list = machineRepository.FindAll().ToList<Machine>(); VerifyIntegratedMachine(list[2], 3, "Machine 3", "333333", "G300PET", "MachineIconC.xaml", false, true, LicenseType.Licensed, "10.0.97.3", "10.0.97.3", 0); var machine = Machines.GetMachineById(unitOfWork, 3); Assert.AreSame(list[2], machine); // PASS !!!! } using (var unitOfWork = IoC.Get<IUnitOfWork>()) { var machineRepository = unitOfWork.GetRepository<Machine>(); // GET ALL List<Machine> list = machineRepository.FindAll().ToList<Machine>(); VerifyIntegratedMachine(list[2], 3, "Machine 3", "333333", "G300PET", "MachineIconC.xaml", false, true, LicenseType.Licensed, "10.0.97.3", "10.0.97.3", 0); var machine = Machines.GetMachineById(unitOfWork, 3); Assert.AreSame(list[2], machine); // FAIL !!!! } } If I run other (complex) unit tests I'm getting as expected: An attempt has been made to Attach or Add an entity that is not new, perhaps having been loaded from another DataContext.

    Read the article

  • Telnet connection using c#

    - by alejandrobog
    Our office currently uses telnet to query an external server. The procedure is something like this. Connect - telnet opent 128........ 25000 Query - we paste the query and then hit alt + 019 Response - We receive the response as text in the telnet window So I’m trying to make this queries automatic using a c# app. My code is the following First the connection. (No exceptions) SocketClient = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp); String szIPSelected = txtIPAddress.Text; String szPort = txtPort.Text; int alPort = System.Convert.ToInt16(szPort, 10); System.Net.IPAddress remoteIPAddress = System.Net.IPAddress.Parse(szIPSelected); System.Net.IPEndPoint remoteEndPoint = new System.Net.IPEndPoint(remoteIPAddress, alPort); SocketClient.Connect(remoteEndPoint); Then I send the query (No exceptions) string data ="some query"; byte[] byData = System.Text.Encoding.ASCII.GetBytes(data); SocketClient.Send(byData); Then I try to receive the response byte[] buffer = new byte[10]; Receive(SocketClient, buffer, 0, buffer.Length, 10000); string str = Encoding.ASCII.GetString(buffer, 0, buffer.Length); txtDataRx.Text = str; public static void Receive(Socket socket, byte[] buffer, int offset, int size, int timeout) { int startTickCount = Environment.TickCount; int received = 0; // how many bytes is already received do { if (Environment.TickCount > startTickCount + timeout) throw new Exception("Timeout."); try { received += socket.Receive(buffer, offset + received, size - received, SocketFlags.None); } catch (SocketException ex) { if (ex.SocketErrorCode == SocketError.WouldBlock || ex.SocketErrorCode == SocketError.IOPending || ex.SocketErrorCode == SocketError.NoBufferSpaceAvailable) { // socket buffer is probably empty, wait and try again Thread.Sleep(30); } else throw ex; // any serious error occurr } } while (received < size); } Every time I try to receive the response I get "an exsiting connetion has forcibly closed by the remote host" if open telnet and send the same query I get a response right away Any ideas, or suggestions?

    Read the article

  • Loading XML from Web Service

    - by Lukasz
    I am connecting to a web service to get some data back out as xml. The connection works fine and it returns the xml data from the service. var remoteURL = EveApiUrl; var postData = string.Format("userID={0}&apikey={1}&characterID={2}", UserId, ApiKey, CharacterId); var request = (HttpWebRequest)WebRequest.Create(remoteURL); request.Method = "POST"; request.ContentLength = postData.Length; request.ContentType = "application/x-www-form-urlencoded"; // Setup a stream to write the HTTP "POST" data var WebEncoding = new ASCIIEncoding(); var byte1 = WebEncoding.GetBytes(postData); var newStream = request.GetRequestStream(); newStream.Write(byte1, 0, byte1.Length); newStream.Close(); var response = (HttpWebResponse)request.GetResponse(); var receiveStream = response.GetResponseStream(); var readStream = new StreamReader(receiveStream, Encoding.UTF8); var webdata = readStream.ReadToEnd(); Console.WriteLine(webdata); This prints out the xml that comes from the service. I can also save the xml as an xml file like so; TextWriter writer = new StreamWriter(@"C:\Projects\TrainingSkills.xml"); writer.WriteLine(webdata); writer.Close(); Now I can load the file as an XDocument to perform queries on it like this; var data = XDocument.Load(@"C:\Projects\TrainingSkills.xml"); What my problem is that I don't want to save the file and then load it back again. When I try to load directly from the stream I get an exception, Illegal characters in path. I don't know what is going on, if I can load the same xml as a text file why can't I load it as a stream. The xml is like this; <?xml version='1.0' encoding='UTF-8'?> <eveapi version="2"> <currentTime>2010-04-28 17:58:27</currentTime> <result> <currentTQTime offset="1">2010-04-28 17:58:28</currentTQTime> <trainingEndTime>2010-04-29 02:48:59</trainingEndTime> <trainingStartTime>2010-04-28 00:56:42</trainingStartTime> <trainingTypeID>3386</trainingTypeID> <trainingStartSP>8000</trainingStartSP> <trainingDestinationSP>45255</trainingDestinationSP> <trainingToLevel>4</trainingToLevel> <skillInTraining>1</skillInTraining> </result> <cachedUntil>2010-04-28 18:58:27</cachedUntil> </eveapi> Thanks for your help!
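
    One hedged observation on that last step: XDocument.Load(string) treats its argument as a file path or URI, which is a common way to get "Illegal characters in path" when the string actually holds XML markup. A small sketch using the webdata string already read above (readStream is the StreamReader from the same snippet):

        // using System.Xml.Linq;

        // Parse XML that is already held in a string.
        var data = XDocument.Parse(webdata);

        // Or hand the reader to XDocument before ReadToEnd() consumes it:
        // var data = XDocument.Load(readStream);   // the Load(TextReader) overload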

    Read the article

  • How to retrieve multiple anchors URLs with jQuery ?

    - by pierre-guillaume-degans
    Hello, I would like to create a javascript playlist with Jplayer. This is a nice and easy tool, however I never coded with javascript. Look at the javascript used in this demo. It uses a list to store MP3 and Ogg files : var myPlayList = [ {name:"Tempered Song",mp3:"http://www.miaowmusic.com/mp3/Miaow-01-Tempered-song.mp3",ogg:"http://www.miaowmusic.com/ogg/Miaow-01-Tempered-song.ogg"}, {name:"Hidden",mp3:"http://www.miaowmusic.com/mp3/Miaow-02-Hidden.mp3",ogg:"http://www.miaowmusic.com/ogg/Miaow-02-Hidden.ogg"}, {name:"Lentement",mp3:"http://www.miaowmusic.com/mp3/Miaow-03-Lentement.mp3",ogg:"http://www.miaowmusic.com/ogg/Miaow-03-Lentement.ogg"}, {name:"Lismore",mp3:"http://www.miaowmusic.com/mp3/Miaow-04-Lismore.mp3",ogg:"http://www.miaowmusic.com/ogg/Miaow-04-Lismore.ogg"}, {name:"The Separation",mp3:"http://www.miaowmusic.com/mp3/Miaow-05-The-separation.mp3",ogg:"http://www.miaowmusic.com/ogg/Miaow-05-The-separation.ogg"}, {name:"Beside Me",mp3:"http://www.miaowmusic.com/mp3/Miaow-06-Beside-me.mp3",ogg:"http://www.miaowmusic.com/ogg/Miaow-06-Beside-me.ogg"}, ]; So for now, I just use a django template (but it could be another template engine) to create this variable. However I would like to create this list (myPlayList) dynamically with a javascript function which would retrieve the MP3 urls and the Ogg vorbis URLs from the HTML code. Thus, from this HTML code...: <body> <article id="track-0"> <h1>lorem ipsum</h1> <ul> <li><a href="...">Mp3</a></li> <li><a href="...">Vorbis</a></li> <li><a href="...">Flac</a></li> </ul> </article> <article id="track-1"> <h1>lorem ipsum</h1> <ul> <li><a href="...">Mp3</a></li> <li><a href="...">Vorbis</a></li> <li><a href="...">Flac</a></li> </ul> </article> <article id="track-2"> <h1>lorem ipsum</h1> <ul> <li><a href="...">Mp3</a></li> <li><a href="...">Vorbis</a></li> <li><a href="...">Flac</a></li> </ul> </article> </body> ... I need to build a javascript list like this (where each index of the list represents the track-ID in the HTML: var files = [ {mp3:"...", ogg:"..."}, {mp3:"...", ogg:"..."}, {mp3:"...", ogg:"..."}, ]; Please excuse me for my ugly english. If you need more informations just tell me. Thank you. :-)

    Read the article

  • iPhone SDK / Core Data usage scenario, similar to GAE data store?

    - by boliva
    Hi all, I am currently rewriting a map based App which I wrote in the past, specifically for 2.2.1 devices. Originally I wrote it to make use of SQLite databases but I would like to try and migrate it over Core Data, now that it's available on 3.X (for which I am rewriting to). I am fairly experienced in iPhone/Obj-C development, SQL and server backend technologies, but I have never had the chance to work with Core Data so IDK really if it's the appropiate tool for what I am trying to accomplish. The App works on a limited area in a map over which there are about 4000 placemarks, with different kinds of icons and sizes. Of course not all 4000 placemarks are shown at once but only those currently visible in the map viewport, and depending on the zoom level. What I am doing right now is, after the user moves the map in any way (panning or zooming) I am requesting from the backend server the required information for the placemarks that would be visible given the viewport coordinates boundaries and zoom level, however the process isn't as smooth as I'd like (the backend is sending its response in XML and I am compressing it using gzip), it takes anywhere from 1 to 3 seconds to update the display of the placemarks after the user ends moving the map. What I would like to do is to prefetch all the placemarks data at the App launch and use it all through the app life time - I don't mind storing it for later use because the data should be dynamic. The way I would do it right now is, after retrieving all the data, to store it on an SQLite db which I would query later, whenever the user moves the map, to return only the placemarks inside the viewport coordinate boundaries and specific to a given zoom level. Now, the question itself is, if is it possible to use some more 'native', object driven way to carry this queries process, which got me thinking about Core Data and if it is in any way similar to what Google App Engine offers through its datastore where you can fetch a number of objects from the backend given a certain query or criteria, without resorting to an SQL query itself. Like I said before I don't have any experience on Core Data but I have a pretty deep understanding of Obj-C and iPhone development, as well as SQL databases. Any guides on how to achieve what I'm trying (if possible at all) would be greatly appreciated.

    Read the article

  • System.Net.Dns.GetHostAddresses("")

    - by dbasnett
    Yesterday s**ked, and today ain't (sic) looking better. I have an application I have been working on and it can be slow to start when my ISP is down because of DNS. My ISP was down for 3 hours yesterday, so I didn't think much about this piece of code I had added, until I found that it is always slow to start. This code is supposed to return your IP address and my reading of the link suggests that should be immediate, but it isn't, at least on my machine. Oh, and yesterday before the internet went down, I upgraded (oymoron) to XP SP3, and have had other problems. So my questions / request: 1. Am I doing this right? 2. If you run this on your machine does it take 39 seconds to return your IP address? It does on mine. One other note, I did a packet capture and the first request did NOT go on the wire, but the second did, and was answered quickly. So the question is what happened in XP SP3 that I am missing, besides a brain. One last note. If I resolve a FQDN all is well. Public Class Form1 'http://msdn.microsoft.com/en-us/library/system.net.dns.gethostaddresses.aspx ' 'excerpt 'The GetHostAddresses method queries a DNS server 'for the IP addresses associated with a host name. ' 'If hostNameOrAddress is an IP address, this address 'is returned without querying the DNS server. ' 'When an empty string is passed as the host name, 'this method returns the IPv4 addresses of the local host Private Sub Button1_Click(ByVal sender As System.Object, _ ByVal e As System.EventArgs) Handles Button1.Click Dim stpw As New Stopwatch stpw.Reset() stpw.Start() 'originally Dns.GetHostEntry, but slow also Dim myIPs() As System.Net.IPAddress = System.Net.Dns.GetHostAddresses("") stpw.Stop() Debug.WriteLine("'" & stpw.Elapsed.TotalSeconds) If myIPs.Length > 0 Then Debug.WriteLine("'" & myIPs(0).ToString) 'debug '39.8990525 '192.168.1.2 stpw.Reset() stpw.Start() 'originally Dns.GetHostEntry, but slow also myIPs = System.Net.Dns.GetHostAddresses("www.vbforums.com") stpw.Stop() Debug.WriteLine("'" & stpw.Elapsed.TotalSeconds) If myIPs.Length > 0 Then Debug.WriteLine("'" & myIPs(0).ToString) 'debug '0.042212 '63.236.73.220 End Sub End Class

    Read the article

  • Sysadmin 101: How can I figure out why my server crashes and monitor performance?

    - by bflora
    I have a Drupal-powered site that seems to have never-ending performance problems. It was butt-slow about 5 months ago. I brought in some guys who installed nginx for anonymous visitors, ajaxified a few queries so they wouldn't fire during page load, and helped me find a few bottlenecks in the code. For about a month, the site was significantly faster, though not "fast" by any stretch of the word. Meanwhile, I'm now shelling out $400/month to Slicehost to host a site that gets fewer than 5,000 uniques a day. Yes, you read that right. Go Drupal. Recently the site started crashing again and is slow again. I can't afford to hire people to come in, study my code from top to bottom, and make changes that may or may not help anymore. And I can't afford to throw more hardware at the problem. So I need to figure out what the problem is myself. Questions: When apache crashes, is it possible to find out what caused it to crash? There has to be a way, right? If so, how can I do this? Is there software I can use that will tell me which process caused my server to die? (e.g. "Apache crashed because someone visited page X." or "Apache crashed because you were importing too many RSS items from feed X.") There's got to be a way to learn this, right? What's a good, noob-friendly way to monitor my current apache performance? My developer friends tell me to "just use Top, dude," but Top shows me a bunch of numbers without any context. I have no clue what qualifies as a bad number or a good number in Top, or which processes are relevant and which aren't. Are there any noob-friendly server monitoring tools out there? Ideally, I could have a page that would give me a color-coded indicator about how apache is performing and then show me a list of processes or pages that are sucking right now. This way, I could know when performance is bad and then what's causing it to be so bad. Why does PHP memory matter? My PHP process apparently has a 30MB memory footprint. Will it run faster if I bring that number down? Thanks for any advice. I spent a year or so trying to boost my advertising income so I could hire a contractor to solve my performance woes. I didn't want to have to learn all this sysadmin voodoo. I'm now resigned to the fact that I might not have a choice.

    Read the article

  • NHibernate criteria query question

    - by Chris
    I have 3 related objects (Entry, GamePlay, Prize) and I'm trying to find the best way to query them for what I need using NHibernate. When a request comes in, I need to query the Entries table for a matching entry and, if found, get a) the latest game play along with the first game play that has a prize attached. Prize is a child of GamePlay and each Entry object has a GamePlays property (IList). Currently, I'm working on a method that pulls the matching Entry and eagerly loads all game plays and associated prizes, but it seems wasteful to load all game plays just to find the latest one and any that contain a prize. Right now, my query looks like this: var entry = session.CreateCriteria<Entry>() .Add(Restrictions.Eq("Phone", phone)) .AddOrder(Order.Desc("Created")) .SetFetchMode("GamePlays", FetchMode.Join) .SetMaxResults(1).UniqueResult<Entry>(); Two problems with this: It loads all game plays up front. With 365 days of data, this could easily balloon to 300k of data per query. It doesn't eagerly load the Prize child property for each game. Therefore, my code that loops through the GamePlays list looking for a non-null Prize must make a call to load each Prize property I check. I'm not an nhibernate expert, but I know there has to be a better way to do this. Ideally, I'd like to do the following (pseudocode): entry = findEntry(phoneNumber) lastPlay = getLatestGamePlay(Entry) firstWinningPlay = getFirstWinningGamePlay(Entry) The end result of course is that I have the entry details, the latest game play, and the first winning game play. The catch is that I want to do this in as few database calls as possible, otherwise I'd just execute 3 separate queries. The object definitions look like: public class Entry { public Guid Id {get;set;} public string Phone {get;set;} public IList<GamePlay> GamePlays {get;set;} // ... other properties } public class GamePlay { public Guid Id {get;set;} public Entry Entry {get;set;} public Prize Prize {get;set;} // ... other properties } public class Prize { public Guid Id {get;set;} // ... other properties } The proper NHibernate mappings are in place, so I just need help figuring out how to set up the criteria query (not looking for HQL, don't use it).
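
    For illustration only (not the accepted answer), one way to express the "latest play" and "first winning play" lookups without eagerly loading every GamePlay is a pair of narrow criteria queries batched with Future(). This sketch assumes a Created timestamp property on GamePlay to define "latest" and "first" - swap in whatever property actually orders plays - plus NHibernate 2.1+ and a driver that supports batched futures (otherwise the two queries simply run separately). Here session is the ISession and entry is the Entry already found by phone number:

        // using NHibernate; using NHibernate.Criterion; using System.Linq;

        // Latest play for the entry.
        var latestPlay = session.CreateCriteria<GamePlay>()
            .Add(Restrictions.Eq("Entry", entry))
            .AddOrder(Order.Desc("Created"))
            .SetMaxResults(1)
            .Future<GamePlay>();

        // First play that has a prize attached.
        var firstWinningPlay = session.CreateCriteria<GamePlay>()
            .Add(Restrictions.Eq("Entry", entry))
            .Add(Restrictions.IsNotNull("Prize"))
            .AddOrder(Order.Asc("Created"))
            .SetMaxResults(1)
            .Future<GamePlay>();

        // Both criteria go to the database together on first enumeration.
        GamePlay last = latestPlay.FirstOrDefault();
        GamePlay winner = firstWinningPlay.FirstOrDefault();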

    Read the article

  • iPhone: Speeding up a search that's polling 17,000 Core Data objects

    - by randombits
    I have a class that conforms to UISearchDisplayDelegate and contains a UISearchBar. This view is responsible for allowing the user to poll a store of about 17,000 objects that are currently managed by Core Data. Everytime the user types in a character, I created an instance of a SearchOperation (subclasses NSOperation) that queries Core Data to find results that might match the search. The code in the search controller looks something like: - (void)filterContentForSearchText:(NSString*)searchText scope:(NSString*)scope { // Update the filtered array based on the search text and scope in a secondary thread if ([searchText length] < 3) { [filteredList removeAllObjects]; // First clear the filtered array. [self setFilteredList:NULL]; [self.tableView reloadData]; return; } NSDictionary *searchdict = [NSDictionary dictionaryWithObjectsAndKeys:scope, @"scope", searchText, @"searchText", nil]; [aSearchQueue cancelAllOperations]; SearchOperation *searchOp = [[SearchOperation alloc] initWithDelegate:self dataDict:searchdict]; [aSearchQueue addOperation:searchOp]; } And my search is rather straight forward. SearchOperation is a subclass of NSOperation. I overwrote the main method with the following code: - (void)main { if ([self isCancelled]) { return; } NSAutoreleasePool *pool = [[NSAutoreleasePool alloc] init]; NSEntityDescription *entity = [NSEntityDescription entityForName:@"MyEntity" inManagedObjectContext:managedObjectContext]; NSFetchRequest *fetchRequest = [[NSFetchRequest alloc] init]; [fetchRequest setEntity:entity]; NSPredicate *predicate = NULL; predicate = [NSPredicate predicateWithFormat:@"(someattr contains[cd] %@)", searchText]; [fetchRequest setPredicate:predicate]; NSError *error = NULL; NSArray *fetchResults = [managedObjectContext executeFetchRequest:fetchRequest error:&error]; [fetchRequest release]; if (self.delegate != nil) [self.delegate didFinishSearching:fetchResults]; [pool drain]; } This code works, but it has several issues. It's slow. Even though I have the search happening in a separate thread other than the UI thread, querying 17,000 objects is clearly not optimal. If I'm not careful, crashes can happen. I set the max concurrent searches in my NSOperationQueue to 1 to avoid this. What else can I do to make this search faster? I think preloading all 17,000 objects into memory might be risky. There has to be a smarter way to conduct this search to give results back to the user faster.

    Read the article

  • Dynamic Linq help, different errors depending on object passed as parameter?

    - by sah302
    I have an entityDao that is inherited by every one of my objectDaos. I am using Dynamic Linq and trying to get some generic queries to work. I have the following code in my generic method in my EntityDao : public abstract class EntityDao<ImplementationType> where ImplementationType : Entity { public ImplementationType getOneByValueOfProperty(string getProperty, object getValue){ ImplementationType entity = null; if (getProperty != null && getValue != null) { LCFDataContext lcfdatacontext = new LCFDataContext(); //Generic LINQ Query Here entity = lcfdatacontext.GetTable<ImplementationType>().Where(getProperty + " =@0", getValue).FirstOrDefault(); //.Where(getProperty & "==" & CStr(getValue)) } //lcfdatacontext.SubmitChanges() //lcfdatacontext.Dispose() return entity; } } Then I do the following method call in a unit test (all my objectDaos inherit entityDao): [Test] public void getOneByValueOfProperty() { Accomplishment result = accomplishmentDao.getOneByValueOfProperty("AccomplishmentType.Name", "Publication"); Assert.IsNotNull(result); } The above passes (AccomplishmentType has a relationship to accomplishment). Accomplishment result = accomplishmentDao.getOneByValueOfProperty("Description", "Can you hear me now?"); Accomplishment result = accomplishmentDao.getOneByValueOfProperty("LocalId", 4); Both of the above work. Accomplishment result = accomplishmentDao.getOneByValueOfProperty("Id", new Guid("95457751-97d9-44b5-8f80-59fc2d170a4c")); Does not work and says the following: Operator '=' incompatible with operand types 'Guid' and 'Guid' Why is this happening? Guids can't be compared? I tried == as well but got the same error. What's even more confusing is that every example of Dynamic Linq I have seen simply uses strings, whether using the parameterized where predicate or this one I have commented out: //.Where(getProperty & "==" & CStr(getValue)) With or without the CStr, many datatypes don't work with this format. I tried setting the getValue to a string instead of an object as well, but then I just get different errors (for example, a multi-word string would stop comparison after the first word). What am I missing to make this work with GUIDs and/or any data type? Ideally I would like to be able to just pass in a string for getValue (as I have seen for every other dynamic LINQ example) instead of the object and have it work regardless of the data type of the column.
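
    As a point of comparison rather than a verified fix for the Dynamic LINQ parser itself, here is a sketch that builds the property-equals-value predicate by hand with expression trees. It treats Guid, string and numeric columns uniformly, walks dotted paths such as "AccomplishmentType.Name", and mirrors the getOneByValueOfProperty method above; the other names (Entity, LCFDataContext, ImplementationType) are taken from the question:

        // using System; using System.Linq; using System.Linq.Expressions;

        public ImplementationType getOneByValueOfProperty(string getProperty, object getValue)
        {
            if (getProperty == null || getValue == null) return null;

            var lcfdatacontext = new LCFDataContext();

            var param = Expression.Parameter(typeof(ImplementationType), "e");

            // Walk dotted property paths such as "AccomplishmentType.Name".
            Expression member = param;
            foreach (var part in getProperty.Split('.'))
                member = Expression.PropertyOrField(member, part);

            // Build a constant of the member's type so Guid == Guid, long == long, etc.
            var rawValue = Expression.Constant(getValue);
            Expression value = rawValue.Type == member.Type
                ? (Expression)rawValue
                : Expression.Convert(rawValue, member.Type);   // e.g. int argument vs long column

            var lambda = Expression.Lambda<Func<ImplementationType, bool>>(
                Expression.Equal(member, value), param);

            return lcfdatacontext.GetTable<ImplementationType>().Where(lambda).FirstOrDefault();
        }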

    Read the article

  • Any tool(s) for knowing the layout (segments) of running process in Windows?

    - by claws
    I've always been curious about How exactly the process looks in memory? What are the different segments(parts) in it? How exactly will be the program (on the disk) & process (in the memory) are related? My previous question: http://stackoverflow.com/questions/1966920/more-info-on-memory-layout-of-an-executable-program-process In my quest, I finally found a answer. I found this excellent article that cleared most of my queries: http://www.linuxforums.org/articles/understanding-elf-using-readelf-and-objdump_125.html In the above article, author shows how to get different segments of the process (LINUX) & he compares it with its corresponding ELF file. I'm quoting this section here: Courious to see the real layout of process segment? We can use /proc//maps file to reveal it. is the PID of the process we want to observe. Before we move on, we have a small problem here. Our test program runs so fast that it ends before we can even dump the related /proc entry. I use gdb to solve this. You can use another trick such as inserting sleep() before it calls return(). In a console (or a terminal emulator such as xterm) do: $ gdb test (gdb) b main Breakpoint 1 at 0x8048376 (gdb) r Breakpoint 1, 0x08048376 in main () Hold right here, open another console and find out the PID of program "test". If you want the quick way, type: $ cat /proc/`pgrep test`/maps You will see an output like below (you might get different output): [1] 0039d000-003b2000 r-xp 00000000 16:41 1080084 /lib/ld-2.3.3.so [2] 003b2000-003b3000 r--p 00014000 16:41 1080084 /lib/ld-2.3.3.so [3] 003b3000-003b4000 rw-p 00015000 16:41 1080084 /lib/ld-2.3.3.so [4] 003b6000-004cb000 r-xp 00000000 16:41 1080085 /lib/tls/libc-2.3.3.so [5] 004cb000-004cd000 r--p 00115000 16:41 1080085 /lib/tls/libc-2.3.3.so [6] 004cd000-004cf000 rw-p 00117000 16:41 1080085 /lib/tls/libc-2.3.3.so [7] 004cf000-004d1000 rw-p 004cf000 00:00 0 [8] 08048000-08049000 r-xp 00000000 16:06 66970 /tmp/test [9] 08049000-0804a000 rw-p 00000000 16:06 66970 /tmp/test [10] b7fec000-b7fed000 rw-p b7fec000 00:00 0 [11] bffeb000-c0000000 rw-p bffeb000 00:00 0 [12] ffffe000-fffff000 ---p 00000000 00:00 0 Note: I add number on each line as reference. Back to gdb, type: (gdb) q So, in total, we see 12 segment (also known as Virtual Memory Area--VMA). But I want to know about Windows Process & PE file format. Any tool(s) for getting the layout (segments) of running process in Windows? Any other good resources for learning more on this subject? EDIT: Are there any good articles which shows the mapping between PE file sections & VA segments?

    Read the article

  • LINQ aggregate left join on SQL CE

    - by P Daddy
    What I need is such a simple, easy query, it blows me away how much work I've done just trying to do it in LINQ. In T-SQL, it would be: SELECT I.InvoiceID, I.CustomerID, I.Amount AS AmountInvoiced, I.Date AS InvoiceDate, ISNULL(SUM(P.Amount), 0) AS AmountPaid, I.Amount - ISNULL(SUM(P.Amount), 0) AS AmountDue FROM Invoices I LEFT JOIN Payments P ON I.InvoiceID = P.InvoiceID WHERE I.Date between @start and @end GROUP BY I.InvoiceID, I.CustomerID, I.Amount, I.Date ORDER BY AmountDue DESC The best equivalent LINQ expression I've come up with, took me much longer to do: var invoices = ( from I in Invoices where I.Date >= start && I.Date <= end join P in Payments on I.InvoiceID equals P.InvoiceID into payments select new{ I.InvoiceID, I.CustomerID, AmountInvoiced = I.Amount, InvoiceDate = I.Date, AmountPaid = ((decimal?)payments.Select(P=>P.Amount).Sum()).GetValueOrDefault(), AmountDue = I.Amount - ((decimal?)payments.Select(P=>P.Amount).Sum()).GetValueOrDefault() } ).OrderByDescending(row=>row.AmountDue); This gets an equivalent result set when run against SQL Server. Using a SQL CE database, however, changes things. The T-SQL stays almost the same. I only have to change ISNULL to COALESCE. Using the same LINQ expression, however, results in an error: There was an error parsing the query. [ Token line number = 4, Token line offset = 9,Token in error = SELECT ] So we look at the generated SQL code: SELECT [t3].[InvoiceID], [t3].[CustomerID], [t3].[Amount] AS [AmountInvoiced], [t3].[Date] AS [InvoiceDate], [t3].[value] AS [AmountPaid], [t3].[value2] AS [AmountDue] FROM ( SELECT [t0].[InvoiceID], [t0].[CustomerID], [t0].[Amount], [t0].[Date], COALESCE(( SELECT SUM([t1].[Amount]) FROM [Payments] AS [t1] WHERE [t0].[InvoiceID] = [t1].[InvoiceID] ),0) AS [value], [t0].[Amount] - (COALESCE(( SELECT SUM([t2].[Amount]) FROM [Payments] AS [t2] WHERE [t0].[InvoiceID] = [t2].[InvoiceID] ),0)) AS [value2] FROM [Invoices] AS [t0] ) AS [t3] WHERE ([t3].[Date] >= @p0) AND ([t3].[Date] <= @p1) ORDER BY [t3].[value2] DESC Ugh! Okay, so it's ugly and inefficient when run against SQL Server, but we're not supposed to care, since it's supposed to be quicker to write, and the performance difference shouldn't be that large. But it just doesn't work against SQL CE, which apparently doesn't support subqueries within the SELECT list. In fact, I've tried several different left join queries in LINQ, and they all seem to have the same problem. Even: from I in Invoices join P in Payments on I.InvoiceID equals P.InvoiceID into payments select new{I, payments} generates: SELECT [t0].[InvoiceID], [t0].[CustomerID], [t0].[Amount], [t0].[Date], [t1].[InvoiceID] AS [InvoiceID2], [t1].[Amount] AS [Amount2], [t1].[Date] AS [Date2], ( SELECT COUNT(*) FROM [Payments] AS [t2] WHERE [t0].[InvoiceID] = [t2].[InvoiceID] ) AS [value] FROM [Invoices] AS [t0] LEFT OUTER JOIN [Payments] AS [t1] ON [t0].[InvoiceID] = [t1].[InvoiceID] ORDER BY [t0].[InvoiceID] which also results in the error: There was an error parsing the query. [ Token line number = 2, Token line offset = 5,Token in error = SELECT ] So how can I do a simple left join on a SQL CE database using LINQ? Am I wasting my time?
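
    For comparison, a sketch of one possible workaround - untested against SQL CE, and with db standing in for whatever DataContext exposes Invoices and Payments: run the aggregation as its own GROUP BY query, then stitch the two result sets together in memory, so no per-row subquery ever reaches the SELECT list:

        // Invoices in the date range.
        var invoices = (from i in db.Invoices
                        where i.Date >= start && i.Date <= end
                        select i).ToList();

        // One grouped query for payment totals (a plain GROUP BY / SUM in SQL);
        // filter Payments further if the table is large.
        var paid = (from p in db.Payments
                    group p by p.InvoiceID into g
                    select new { InvoiceID = g.Key, Total = g.Sum(p => p.Amount) })
                   .ToDictionary(x => x.InvoiceID, x => x.Total);

        // Compose the report in memory (LINQ to Objects from here on).
        var report = invoices
            .Select(i =>
            {
                decimal amountPaid = paid.ContainsKey(i.InvoiceID) ? paid[i.InvoiceID] : 0m;
                return new
                {
                    i.InvoiceID,
                    i.CustomerID,
                    AmountInvoiced = i.Amount,
                    InvoiceDate = i.Date,
                    AmountPaid = amountPaid,
                    AmountDue = i.Amount - amountPaid
                };
            })
            .OrderByDescending(r => r.AmountDue)
            .ToList();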

    Read the article

  • How to create Query syntax for multiple DataTable for implementing IN operator of Sql Server

    - by Shantanu Gupta
    I have fetched 3-4 tables by executing my stored procedure. Now they resides on my dataset. I have to maintain this dataset for multiple forms and I am not doing any DML operation on this dataset. Now this dataset contains 4 tables out of which i have to fetch some records to display data. Data stored in tables are in form of one to many relationship. i.e. In case of transactions. N records per record. Then these N records are further mapped to M records of 3rd table. Table 1 MAP_ID GUEST_ID DEPARTMENT_ID PARENT_ID PREFERENCE_ID -------------------- -------------------- -------------------- -------------------- -------------------- 19 61 1 1 5 14 61 1 5 15 15 61 2 4 10 18 61 2 13 23 17 61 2 20 26 16 61 40 40 41 20 62 1 5 14 21 62 1 5 15 22 62 1 6 16 24 62 2 3 4 23 62 2 4 9 27 62 2 13 23 25 62 2 20 24 26 62 2 20 25 28 63 1 1 5 29 63 1 1 8 34 63 1 5 15 30 63 2 4 10 33 63 2 4 11 31 63 2 13 23 32 63 40 40 41 35 65 1 NULL 1 36 65 1 NULL 1 38 68 2 13 22 37 68 2 20 25 39 68 2 23 27 40 92 1 NULL 1 Table 2 Department_ID Department_Name Parent_Id Parent_Name -------------------- ----------------------- --------------- ---------------------------------------------------------------------------------- 1 Food 1, 5, 6 Food, North Indian, South Indian 2 Lodging 3, 4, 13, 20, 23 Room, Floor, Non Air Conditioned, With Balcony, Without Balcony 40 New 40 SubNew TABLE 3 Parent_Id Parent_Name Preference_ID Preference_Name -------------------- ----------------------------------------------- ----------------------- ------------------- NULL NULL NULL NULL 1 Food 5, 8 North Indian, Italian 3 Room 4 Floor 4 Floor 9, 10, 11 First, Second, Third 5 North Indian 14, 15 X, Y 6 South Indian 16 Dosa 13 Non Air Conditioned 22, 23 With Balcony, Without Balcony 20 With Balcony 24, 25, 26 Mountain View, Ocean View, Garden View 23 Without Balcony 27 Mountain View 40 New 41 SubNew I have these 3 tables that are related in some fashion like this. Table 1 will be the master for these 2 tables i.e. table 2 and table 3. I need to query on them as SELECT Department_Id, Department_Name, Parent_Name FROM Table2 WHERE Department_Id in ( SELECT Department_Id FROM Table1 WHERE guest_id=65 ) SELECT Parent_Id, Parent_Name, Preference_Name FROM Table3 WHERE PARENT_ID in ( SELECT parent_id FROM Table1 WHERE guest_id=65 ) Now I need to use these queries on DataTables. So I am using Query Syntax for this and reached up to this point. var dept_list= from dept in DtMapGuestDepartment.AsEnumerable() where dept.Field<long>("PK_GUEST_ID")==long.Parse(63) select dept; This should give me the list of all departments that has guest id =63 Now I want to select all departments_name and parent_name from Table 2 where guest_id=63 i.e. departments that i fetched above. This same case will be followed for Table3. Please suggest how to do this. Thanks for keeping up patience for reading my question.
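
    A rough sketch of the DataTable side of this, using LINQ to DataSet (requires a reference to System.Data.DataSetExtensions). DtMapGuestDepartment is taken from the snippet above; DtDepartment is a stand-in name for the Table 2 DataTable, and the column names are read off the listings (the snippet itself uses PK_GUEST_ID, so adjust the names and the Field<T> types to the real schema - Field<T> must match the column's .NET type exactly):

        long guestId = 65;

        // Inner SELECT of the first query: department ids mapped to the guest.
        var deptIds = DtMapGuestDepartment.AsEnumerable()
            .Where(m => m.Field<long>("GUEST_ID") == guestId)
            .Select(m => m.Field<long>("DEPARTMENT_ID"))
            .Distinct()
            .ToList();

        // Outer SELECT: the IN operator becomes a Contains check on that list.
        var departments =
            from d in DtDepartment.AsEnumerable()
            where deptIds.Contains(d.Field<long>("Department_ID"))
            select new
            {
                DepartmentId = d.Field<long>("Department_ID"),
                DepartmentName = d.Field<string>("Department_Name"),
                ParentName = d.Field<string>("Parent_Name")
            };

        // The Table 3 query follows the same pattern with PARENT_ID / Parent_Id.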

    Read the article

  • OneToOne JPA / Hibernate eager loading cause N+1 select

    - by Alexandre Lavoie
    I created a method to have multilingual text on different objects without creating field for each languages or tables for each objects types. Now the only problem I've got is N+1 select queries when doing a simple loading. Tables schema : CREATE TABLE `testentities` ( `keyTestEntity` int(11) NOT NULL, `keyMultilingualText` int(11) NOT NULL, PRIMARY KEY (`keyTestEntity`) ) ENGINE=MyISAM AUTO_INCREMENT=0 DEFAULT CHARSET=utf8; CREATE TABLE `common_multilingualtexts` ( `keyMultilingualText` int(11) NOT NULL auto_increment, PRIMARY KEY (`keyMultilingualText`) ) ENGINE=MyISAM AUTO_INCREMENT=0 DEFAULT CHARSET=utf8; CREATE TABLE `common_multilingualtexts_values` ( `languageCode` varchar(5) NOT NULL, `keyMultilingualText` int(11) NOT NULL, `value` text, PRIMARY KEY (`languageCode`,`keyMultilingualText`) ) ENGINE=MyISAM DEFAULT CHARSET=utf8; MultilingualText.java @Entity @Table(name = "common_multilingualtexts") public class MultilingualText implements Serializable { private Integer m_iKeyMultilingualText; private Map<String, String> m_lValues = new HashMap<String, String>(); public void setKeyMultilingualText(Integer p_iKeyMultilingualText) { m_iKeyMultilingualText = p_iKeyMultilingualText; } @Id @GeneratedValue @Column(name = "keyMultilingualText") public Integer getKeyMultilingualText() { return m_iKeyMultilingualText; } public void setValues(Map<String, String> p_lValues) { m_lValues = p_lValues; } @ElementCollection(fetch = FetchType.EAGER) @CollectionTable(name = "common_multilingualtexts_values", joinColumns = @JoinColumn(name = "keyMultilingualText")) @MapKeyColumn(name = "languageCode") @Column(name = "value") public Map<String, String> getValues() { return m_lValues; } public void put(String p_sLanguageCode, String p_sValue) { m_lValues.put(p_sLanguageCode,p_sValue); } public String get(String p_sLanguageCode) { if(m_lValues.containsKey(p_sLanguageCode)) { return m_lValues.get(p_sLanguageCode); } return null; } } And it is used like this on a object (having a foreign key to the multilingual text) : @Entity @Table(name = "testentities") public class TestEntity implements Serializable { private Integer m_iKeyEntity; private MultilingualText m_oText; public void setKeyEntity(Integer p_iKeyEntity) { m_iKeyEntity = p_iKeyEntity; } @Id @GeneratedValue @Column(name = "keyEntity") public Integer getKeyEntity() { return m_iKeyEntity; } public void setText(MultilingualText p_oText) { m_oText = p_oText; } @OneToOne(cascade = CascadeType.ALL) @JoinColumn(name = "keyText") public MultilingualText getText() { return m_oText; } } Now, when doing a simple HQL query : from TestEntity, I get a query selecting TestEntity's and one query for each MultilingualText that need to be loaded on each TestEntity. I've searched a lot and found absolutely no solutions. I have tested : @Fetch(FetchType.JOIN) optional = false @ManyToOne instead of @OneToOne Now I am out of idea!

    Read the article

  • Advice for Architecture Design Logic for software application

    - by Prasad
    Hi, I have a framework of around 500 objects/classes (C++), ranging from basic to complex. Following some rules, all these objects can communicate with each other and hence can cover most of the common queries in the domain. My Dream: I want to provide these objects as icons/glyphs (as I learnt recently) on a workspace. All these objects can be dragged/dropped into the workspace. They have to communicate only through their methods (interfaces), plus a few iterative and conditional statements. Finally, all these objects are arranged to execute a protocol/workflow/dataflow/process. After drawing the flow, the user clicks the Execute/Run button. All the user interaction should be multi-touch enabled. The best way to show my dream is Jeff Han's multitouch video: imagine Jeff playing with my objects instead of Google Maps. :-) It should be like playing a jigsaw puzzle. Objective: how can I achieve the following while working on this final product: a) the development should be flexible enough to provide for web services b) the development should enable easy web application development c) the development should enable a client-server architecture d) further, it should also enable mouse-based drag/drop desktop applications like Adobe programs, etc. In other words: I want to economize on my investment. Here are my design efforts so far: a) Created an Editor (VB) where the user writes (manually) the object/class code b) On Run/Execute, the code is copied into a main() function and passed to an interpreter c) The output is caught and shown in the console. The interpreter can be separated to become a server and the Editor can become the client. This needs a lot of standard client-server architecture work. But somehow I am not comfortable with the tightness of this system. Is there a much faster and better embeddable solution than an interpreter - other than writing a special compiler for these objects? I recently learned that Axis-C++ might help me - or so a friend suggested. Is that the way to go? Here are my questions (please consider me a self-taught programmer - programming is NOT my primary domain): a) From the stage of C++ objects to a multi-touch product, how can I make sure I will develop the parallel product/service models as well? What architecture aspects should I consider? b) What technologies are best suited for this? c) If I am thinking of moving to cloud computing, how difficult, redundant, or unnecessary will my efforts be? d) How many months would it take to get the first beta? I take the liberty to ask: if any of the experts here are interested in this project, please email me: [email protected] Thank you for any help. Looking forward.

    Read the article

  • .NET JIT Code Cache leaking?

    - by pitchfork
    We have a server component written in .Net 3.5. It runs as service on a Windows Server 2008 Standard Edition. It works great but after some time (days) we notice massive slowdowns and an increased working set. We expected some kind of memory leak and used WinDBG/SOS to analyze dumps of the process. Unfortunately the GC Heap doesn’t show any leak but we noticed that the JIT code heap has grown from 8MB after the start to more than 1GB after a few days. We don’t use any dynamic code generation techniques by our own. We use Linq2SQL which is known for dynamic code generation but we don’t know if it can cause such a problem. The main question is if there is any technique to analyze the dump and check where all this Host Code Heap blocks that are shown in the WinDBG dumps come from? [Update] In the mean time we did some more analysis and had Linq2SQL as probable suspect, especially since we do not use precompiled queries. The following example program creates exactly the same behaviour where more and more Host Code Heap blocks are created over time. using System; using System.Linq; using System.Threading; namespace LinqStressTest { class Program { static void Main(string[] args) { for (int i = 0; i < 100; ++ i) ThreadPool.QueueUserWorkItem(Worker); while(runs < 1000000) { Thread.Sleep(5000); } } static void Worker(object state) { for (int i = 0; i < 50; ++i) { using (var ctx = new DataClasses1DataContext()) { long id = rnd.Next(); var x = ctx.AccountNucleusInfos.Where(an => an.Account.SimPlayers.First().Id == id).SingleOrDefault(); } } var localruns = Interlocked.Add(ref runs, 1); System.Console.WriteLine("Action: " + localruns); ThreadPool.QueueUserWorkItem(Worker); } static Random rnd = new Random(); static long runs = 0; } } When we replace the Linq query with a precompiled one, the problem seems to disappear.
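
    Since the last paragraph already points at precompilation as the thing that stops the growth, here is a sketch of the precompiled form of the stress-test query (the entity type name AccountNucleusInfo is assumed from ctx.AccountNucleusInfos above): the expression tree is translated once and the same delegate is reused across contexts, which matches the observation that the precompiled version makes the problem disappear.

        // using System; using System.Data.Linq; using System.Linq;

        static class Queries
        {
            public static readonly Func<DataClasses1DataContext, long, AccountNucleusInfo> ByPlayerId =
                CompiledQuery.Compile((DataClasses1DataContext ctx, long id) =>
                    ctx.AccountNucleusInfos
                       .Where(an => an.Account.SimPlayers.First().Id == id)
                       .SingleOrDefault());
        }

        // Inside Worker(), replacing the ad-hoc query:
        // using (var ctx = new DataClasses1DataContext())
        // {
        //     var x = Queries.ByPlayerId(ctx, rnd.Next());
        // }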

    Read the article

  • Stumbleupon type query...

    - by Chris Denman
    Wow, makes your head spin! I am about to start a project, and although my mySql is OK, I can't get my head around what required for this: I have a table of web addresses. id,url 1,http://www.url1.com 2,http://www.url2.com 3,http://www.url3.com 4,http://www.url4.com I have a table of users. id,name 1,fred bloggs 2,john bloggs 3,amy bloggs I have a table of categories. id,name 1,science 2,tech 3,adult 4,stackoverflow I have a table of categories the user likes as numerical ref relating to the category unique ref. For example: user,category 1,4 1,6 1,7 1,10 2,3 2,4 3,5 . . . I have a table of scores relating to each website address. When a user visits one of these sites and says they like it, it's stored like so: url_ref,category 4,2 4,3 4,6 4,2 4,3 5,2 5,3 . . . So based on the above data, URL 4 would score (in it's own right) as follows: 2=2 3=2 6=1 What I was hoping to do was pick out a random URL from over 2,000,000 records based on the current users interests. So if the logged in user likes categories 1,2,3 then I would like to ORDER BY a score generated based on their interest. If the logged in user likes categories 2 3 and 6 then the total score would be 5. However, if the current logged in user only like categories 2 and 6, the URL score would be 3. So the order by would be in context of the logged in users interests. Think of stumbleupon. I was thinking of using a set of VIEWS to help with sub queries. I'm guessing that all 2,000,000 records will need to be looked at and based on the id of the url it will look to see what scores it has based on each selected category of the current user. So we need to know the user ID and this gets passed into the query as a constant from the start. Ain't got a clue! Chris Denman

    Read the article

  • JDBC CommunicationsException with MySQL Database

    - by Dominik Siebel
    I'm having a little trouble with my MySQL- Connection- Pooling. This is the case: Different jobs are scheduled via Quartz. All jobs connect to different databases which works fine the whole day while the nightly scheduled jobs fail with a CommunicationsException... Quartz-Jobs: Job1 runs 0 0 6,10,14,18 * * ? Job2 runs 0 30 10,18 * * ? Job3 runs 0 0 5 * * ? As you can see the last job runs at 18 taking about 1 hour to run. The first job at 5am is the one that fails. I already tried all kinds of parameter-combinations in my resource config this is the one I am running right now: <!-- Database 1 (MySQL) --> <Resource auth="Container" driverClassName="com.mysql.jdbc.Driver" maxActive="100" maxIdle="30" maxWait="10000" removeAbandoned="true" removeAbandonedTimeout="60" logAbandoned="true" type="javax.sql.DataSource" name="jdbc/appDbProd" username="****" password="****" url="jdbc:mysql://127.0.0.1:3306/appDbProd?autoReconnect=true&amp;useUnicode=true&amp;characterEncoding=UTF-8" testWhileIdle="true" testOnBorrow="true" testOnReturn="true" validationQuery="SELECT 1" timeBetweenEvictionRunsMillis="1800000" /> <!-- Database 2 (MySQL) --> <Resource auth="Container" driverClassName="com.mysql.jdbc.Driver" maxActive="100" maxIdle="30" maxWait="10000" removeAbandoned="true" removeAbandonedTimeout="60" logAbandoned="true" type="javax.sql.DataSource" name="jdbc/prodDbCopy" username="****" password="****" url="jdbc:mysql://127.0.0.1:3306/prodDbCopy?autoReconnect=true&amp;useUnicode=true&amp;characterEncoding=UTF-8" testWhileIdle="true" testOnBorrow="true" testOnReturn="true" validationQuery="SELECT 1" timeBetweenEvictionRunsMillis="1800000" /> <!-- Database 3 (MSSQL)--> <Resource auth="Container" driverClassName="net.sourceforge.jtds.jdbc.Driver" maxActive="30" maxIdle="30" maxWait="100" removeAbandoned="true" removeAbandonedTimeout="60" logAbandoned="true" name="jdbc/catalogDb" username="****" password="****" type="javax.sql.DataSource" url="jdbc:jtds:sqlserver://127.0.0.1:1433;databaseName=catalog;useNdTLMv2=false" testWhileIdle="true" testOnBorrow="true" testOnReturn="true" validationQuery="SELECT 1" timeBetweenEvictionRunsMillis="1800000" /> For obvious reasons I changed IPs, Usernames and Passwords but they can be assumed to be correct, seeing that the application runs successfully the whole day. The most annoying thing is: The first job that runs first queries Database2 successfully but fails to query Database1 for some reason (CommunicationsException): Caused by: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: The last packet successfully received from the server was 39,376,539 milliseconds ago. The last packet sent successfully to the server was 39,376,539 milliseconds ago. is longer than the server configured value of 'wait_timeout'. You should consider either expiring and/or testing connection validity before use in your application, increasing the server configured values for client timeouts, or using the Connector/J connection property 'autoReconnect=true' to avoid this problem. Any ideas? Thanks!

    Read the article
