Search Results

Search found 66801 results on 2673 pages for 'near real time'.


  • Having trouble animating Line in D3.js using an array of objects as data

    - by user1731245
    I can't seem to get an animated transition between line graphs when I pass in a new set of data. I am using an array of objects as data, like this:

        [{ clicks: 40, installs: 10, time: "1349474400000" },
         { clicks: 61, installs: 3,  time: "1349478000000" }];

    I am using this code to set up my ranges and axes:

        var xRange = d3.time.scale().range([0, w]),
            yRange = d3.scale.linear().range([h, 0]),
            xAxis = d3.svg.axis().scale(xRange).tickSize(-h).ticks(6).tickSubdivide(false),
            yAxis = d3.svg.axis().scale(yRange).ticks(5).tickSize(-w).orient("left");

        var clicksLine = d3.svg.line()
            .interpolate("cardinal")
            .x(function(d) { return xRange(d.time); })
            .y(function(d) { return yRange(d.clicks); });

        var clickPath;

        function drawGraphs(data) {
            clickPath = svg.append("g")
                .append("path")
                .data([data])
                .attr("class", "clicks")
                .attr("d", clicksLine);
        }

        function updateGraphs(data) {
            svg.select('path.clicks')
                .data([data])
                .attr("d", clicksLine)
                .transition()
                .duration(500)
                .ease("linear");
        }

    I have tried just about everything to be able to pass in new data and see an animation between graphs. Not sure what I am missing. Does it have something to do with using an array of objects instead of just a flat array of numbers as data?
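
    A plausible fix (a sketch, untested against the poster's page): in D3, attribute values set before .transition() are applied immediately, so the new "d" has already been drawn by the time the transition starts. Assigning the path inside the transition lets D3 interpolate between the old and new shapes:

        // Sketch: rebind the new data, then set the target "d" attribute
        // *inside* the transition so the old and new paths are interpolated.
        function updateGraphs(data) {
            svg.select('path.clicks')
                .datum(data)               // rebind (equivalent to .data([data]))
                .transition()
                .duration(500)
                .ease("linear")
                .attr("d", clicksLine);    // animated from the previous shape
        }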

  • Is 1/0 a legal Java expression?

    - by polygenelubricants
    The following compiles fine in my Eclipse:

        final int j = 1/0; // compiles fine!!!
        // throws ArithmeticException: / by zero at run-time

    Java prevents a lot of "dumb code" from even compiling in the first place (e.g. "Five" instanceof Number doesn't compile!), so the fact that this didn't generate as much as a warning was very surprising to me. The intrigue deepens when you consider the fact that constant expressions are allowed to be optimized at compile time:

        public class Div0 {
            public static void main(String[] args) {
                final int i = 2+3;
                final int j = 1/0;
                final int k = 9/2;
            }
        }

    Compiled in Eclipse, the above snippet generates the following bytecode (javap -c Div0):

        Compiled from "Div0.java"
        public class Div0 extends java.lang.Object{
        public Div0();
          Code:
           0:   aload_0
           1:   invokespecial   #8; //Method java/lang/Object."<init>":()V
           4:   return

        public static void main(java.lang.String[]);
          Code:
           0:   iconst_5
           1:   istore_1   // "i = 5;"
           2:   iconst_1
           3:   iconst_0
           4:   idiv
           5:   istore_2   // "j = 1/0;"
           6:   iconst_4
           7:   istore_3   // "k = 4;"
           8:   return
        }

    As you can see, the i and k assignments are optimized as compile-time constants, but the division by 0 (which must've been detectable at compile time) is simply compiled as is. javac 1.6.0_17 behaves even more strangely, compiling silently but excising the assignments to i and k completely out of the bytecode (probably because it determined that they're not used anywhere) but leaving the 1/0 intact (since removing it would cause entirely different program semantics). So the questions are:

        - Is 1/0 actually a legal Java expression that should compile anytime anywhere?
        - What does the JLS say about it?
        - If this is legal, is there a good reason for it? What good could this possibly serve?
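
    For reference, a minimal demo of the behavior described above. One reading of the JLS constant-expression rules (§15.28) is that a constant expression must evaluate without error, so 1/0 is a legal expression but not a *constant* one, which would explain why it survives to run time; treat that as a hedged interpretation, not a verdict:

        public class DivZeroDemo {
            public static void main(String[] args) {
                final int k = 9 / 2; // constant expression: folded to 4 at compile time
                final int j = 1 / 0; // legal Java, but evaluated at run time:
                                     // throws ArithmeticException: / by zero here
                System.out.println(k + j); // never reached
            }
        }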

  • Slowdowns when reading from an urlconnection's inputstream (even with byte[] and buffers)

    - by user342677
    Ok, so after spending two days trying to figure out the problem, and reading about a dizillion articles, I finally decided to man up and ask for some advice (my first time here).

    Now to the issue at hand: I am writing a program which will parse API data from a game, namely battle logs. There will be A LOT of entries in the database (20+ million), so the parsing speed for each battle-log page matters quite a bit. The pages to be parsed look like this: http://api.erepublik.com/v1/feeds/battle_logs/10000/0 (see the source code if using Chrome, it doesn't display the page right). Each has 1000 hit entries, followed by a little battle info (the last page will have <1000, obviously). On average, a page contains 175000 characters, UTF-8 encoding, XML format (v 1.0). The program will run locally on a good PC; memory is virtually unlimited (so creating a byte[250000] is quite OK). The format never changes, which is quite convenient. Now, I started off as usual:

        //global vars, class declaration skipped
        public WebObject(String url_string, int connection_timeout, int read_timeout,
                boolean redirects_allowed, String user_agent)
                throws java.net.MalformedURLException, java.io.IOException {
            // Open a URL connection
            java.net.URL url = new java.net.URL(url_string);
            java.net.URLConnection uconn = url.openConnection();
            if (!(uconn instanceof java.net.HttpURLConnection)) {
                throw new java.lang.IllegalArgumentException("URL protocol must be HTTP");
            }
            conn = (java.net.HttpURLConnection) uconn;
            conn.setConnectTimeout(connection_timeout);
            conn.setReadTimeout(read_timeout);
            conn.setInstanceFollowRedirects(redirects_allowed);
            conn.setRequestProperty("User-agent", user_agent);
        }

        public void executeConnection() throws IOException {
            try {
                is = conn.getInputStream(); //global var
                l = conn.getContentLength(); //global var
            } catch (Exception e) {
                //handling code skipped
            }
        }

        //getContentStream and getLength methods which just return 'is' and 'l' are skipped

    Here is where the fun part began. I ran some profiling (using System.currentTimeMillis()) to find out what takes long and what doesn't. The call to this method takes only 200 ms on average:

        public InputStream getWebPageAsStream(int battle_id, int page) throws Exception {
            String url = "http://api.erepublik.com/v1/feeds/battle_logs/" + battle_id + "/" + page;
            WebObject wobj = new WebObject(url, 10000, 10000, true,
                    "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.2.3) "
                    + "Gecko/20100401 Firefox/3.6.3 ( .NET CLR 3.5.30729)");
            wobj.executeConnection();
            l = wobj.getContentLength(); // global variable
            return wobj.getContentStream(); //returns 'is' stream
        }

    200 ms is quite expected from a network operation, and I am fine with it. BUT when I parse the InputStream in any way (read it into a string / use a Java XML parser / read it into another ByteArrayStream), the process takes over 1000 ms!

    For example, this code takes 1000 ms IF I pass the stream I got ('is') above from getContentStream() directly to this method:

        public static Document convertToXML(InputStream is)
                throws ParserConfigurationException, IOException, SAXException {
            DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
            DocumentBuilder db = dbf.newDocumentBuilder();
            Document doc = db.parse(is);
            doc.getDocumentElement().normalize();
            return doc;
        }

    This code too takes around 920 ms IF the initial InputStream 'is' is passed in (don't read into the code itself - it just extracts the data I need by directly counting the characters, which can be done thanks to the rigid API feed format):

        public static parsedBattlePage convertBattleToXMLWithoutDOM(InputStream is)
                throws IOException {
            // Point A
            BufferedReader br = new BufferedReader(new InputStreamReader(is));
            LinkedList ll = new LinkedList();
            String str = br.readLine();
            while (str != null) {
                ll.add(str);
                str = br.readLine();
            }
            if (((String) ll.get(1)).indexOf("error") != -1) {
                return new parsedBattlePage(null, null, true, -1);
            }
            // Point B
            Iterator it = ll.iterator();
            it.next(); it.next(); it.next(); it.next();
            String[][] hits_arr = new String[1000][4];
            String t_str = (String) it.next();
            String tmp = null;
            int j = 0;
            for (int i = 0; t_str.indexOf("time") != -1; i++) {
                hits_arr[i][0] = t_str.substring(12, t_str.length() - 11);
                tmp = (String) it.next();
                hits_arr[i][1] = tmp.substring(14, tmp.length() - 9);
                tmp = (String) it.next();
                hits_arr[i][2] = tmp.substring(15, tmp.length() - 10);
                tmp = (String) it.next();
                hits_arr[i][3] = tmp.substring(18, tmp.length() - 13);
                it.next(); it.next();
                t_str = (String) it.next();
                j++;
            }
            String[] b_info_arr = new String[9];
            int[] space_nums = {13, 10, 13, 11, 11, 12, 5, 10, 13};
            for (int i = 0; i < space_nums.length; i++) {
                tmp = (String) it.next();
                b_info_arr[i] = tmp.substring(space_nums[i] + 4, tmp.length() - space_nums[i] - 1);
            }
            // Point C
            return new parsedBattlePage(hits_arr, b_info_arr, false, j);
        }

    I have tried replacing the default BufferedReader with

        BufferedReader br = new BufferedReader(new InputStreamReader(is), 250000);

    This didn't change much. My second try was to replace the code between A and B with:

        Iterator it = IOUtils.lineIterator(is, "UTF-8");

    Same result, except this time A-B was 0 ms and B-C was 1000 ms, so then every call to it.next() must have been consuming some significant time. (IOUtils is from the apache-commons-io library.)

    And here is the culprit: the time taken to parse the stream to string, be it by an iterator or a BufferedReader, in ALL cases was about 1000 ms, while the rest of the code took 0 ms (i.e. irrelevant). This means that parsing the stream to a LinkedList, or iterating over it, for some reason was eating up a lot of my system resources. The question was - why? Is it just the way Java is made... no... that's just stupid, so I did another experiment. In my main method I added, after getWebPageAsStream():

        //Point A
        ba = new byte[l]; // 'l' comes from wobj.getContentLength() above
        int offset = 0;
        while (offset < l) {
            int bytesRead = is.read(ba, offset, l - offset); //'is' is the original stream
            if (bytesRead == -1) break;
            offset += bytesRead;
        }
        //Point B
        InputStream is2 = new ByteArrayInputStream(ba);
        //Now just working with 'is2' - the "copied" stream

    The InputStream-to-byte[] conversion took again 1000 ms - this is the way many people suggested to read an InputStream, and still it is slow. And guess what: the two parser methods above (convertToXML() and convertBattleToXMLWithoutDOM()), when passed 'is2' instead of 'is', took, in all four cases, under 50 ms to complete.

    I read a suggestion that the stream waits for the connection to close before unblocking, so I tried using HttpComponents Client 4.0 (http://hc.apache.org/httpcomponents-client/index.html) instead, but the initial InputStream took just as long to parse. E.g. this code:

        public InputStream getWebPageAsStream2(int battle_id, int page) throws Exception {
            String url = "http://api.erepublik.com/v1/feeds/battle_logs/" + battle_id + "/" + page;
            HttpClient httpclient = new DefaultHttpClient();
            HttpGet httpget = new HttpGet(url);
            HttpParams p = new BasicHttpParams();
            HttpConnectionParams.setSocketBufferSize(p, 250000);
            HttpConnectionParams.setStaleCheckingEnabled(p, false);
            HttpConnectionParams.setConnectionTimeout(p, 5000);
            httpget.setParams(p);
            HttpResponse response = httpclient.execute(httpget);
            HttpEntity entity = response.getEntity();
            l = (int) entity.getContentLength();
            return entity.getContent();
        }

    took even longer to process (50 ms more, just for the network), and the stream parsing times remained the same. Obviously it could be instantiated so as to not create the HttpClient and properties every time (faster network time), but the stream issue won't be affected by that.

    So we come to the central problem: why does the initial URLConnection InputStream (or HttpClient InputStream) take so long to process, while any stream of the same size and content created locally is orders of magnitude faster? I mean, the initial response is already somewhere in RAM, and I can't see any good reason why it is processed so slowly compared to when the same stream is just created from a byte[]. Considering I have to parse millions of entries and thousands of pages like that, a total processing time of almost 1.5 s/page seems WAY WAY too long. Any ideas?

    P.S. Please ask if any more code is required - the only thing I do after parsing is make a PreparedStatement and put the entries into JavaDB in packs of 1000+, and the performance is OK, ~200 ms/1000 entries; it could probably be optimized with more cache, but I didn't look into it much.
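
    A plausible explanation, offered as an assumption rather than a diagnosis: getInputStream() returns as soon as the response headers arrive, so the ~200 ms figure covers only the headers, and the ~175 KB body is still in flight. Whichever code first reads the stream then pays the remaining download time, which would be why the same bytes parse in under 50 ms once they sit in a byte[]. A sketch that separates the two phases explicitly:

        import java.io.*;

        final class StreamDrain {
            // Drain the network stream first (network-bound), so that parsing
            // the in-memory copy afterwards measures pure CPU time.
            static byte[] readFully(InputStream in) throws IOException {
                ByteArrayOutputStream buf = new ByteArrayOutputStream(256 * 1024);
                byte[] chunk = new byte[8192];
                int n;
                while ((n = in.read(chunk)) != -1) {
                    buf.write(chunk, 0, n);
                }
                return buf.toByteArray();
            }
        }

    If readFully() takes ~1000 ms and parsing the resulting ByteArrayInputStream takes ~50 ms, the bottleneck is the download itself, and the win would come from fetching several pages in parallel rather than from a faster parser.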

  • PHP database simulation

    - by Emdiesse
    I have a PHP script that works by calling items from a database based upon the time they were placed in there, and it deletes them if they are older than 5 minutes. Basically, I now want to simulate what would happen if this database were being updated regularly.

    I was considering adding some code that loads an XML file and parses it into the database, based upon the time data located within a node of the XML data. But the problem is that I want it to continually loop through and enter this data, so it would never actually run the other processes. So I was thinking of having a second PHP script do this, independently of the PHP script that is going to display the data.

    In theory: I am looking to have a button that I can press which will run some PHP code to load an XML file from a directory on my web server and then iterate through the data, sending it to a database, based upon the time within a node and when the script was first called.

    So, back to my page that displays the data: if I continually hit refresh, it will contain different results each time, because data is being added by the other process and this PHP script removes the older data when it is refreshed.

    Any information on this? Is there a way I can silently, and safely, run a PHP script without it being loaded into a browser... like a thread!?
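
    On the last question: yes. PHP has a command-line interpreter, so the importer can run from cron (or be launched once from a shell) with no browser involved. A sketch, with hypothetical paths:

        # Hypothetical crontab entry: run the XML importer once a minute,
        # outside the web server, logging output instead of rendering it.
        * * * * * /usr/bin/php /var/www/simulate/import_xml.php >> /tmp/import.log 2>&1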

  • Delphi, PGDac vs Zeos, Fetch, Lookup?

    - by durumdara
    Hi! I used Zeos to test one thing: does ZTable use fetching, or not? We may in the future migrate our legacy system to PGSQL; it currently uses "Table" components (as in the BDE, but against an SQL-like server). These tables use real cursors (a "window" of N records), so lookup is very fast: the Locate/Lookup is executed on the server, and only those N records are refreshed, no matter how many records are in the lookup table.

    PGSQL uses fetching, as I understand it, and I tested it with a table (id int, name varchar(100)) and 1 million records. (I also tried this with MySQL.) The adapter is Zeos. Columns: ID, seconds to find, allocated client memory in bytes.

    MySQL:

        ID        sec to find   client memory (bytes)
        500000    2,761         113 196 344
        1000000   3,214         225 471 232
        313800    0,437         225 471 232
        328066    0,468         225 471 232
        276374    0,390         225 471 232
        905984    1,264         225 471 232
        260253    0,359         225 471 232

    PGSQL:

        ID        sec to find   client memory (bytes)
        500000    3,042         113 188 184
        1000000   3,744         225 463 064
        313800    0,436         225 463 064
        328066    0,452         225 463 064
        276374    0,375         225 463 064
        905984    1,295         225 463 064
        260253    0,359         225 463 064
        142023    0,203         225 463 064

    As you can see, the records are fetched locally, which causes the 225 MB usage, and searches are a little slow, depending on where the record we must find is. I want to ask a few more things:

    a.) Does PGDAC have some technique we can use to do lookups without paying for the fetch in memory and seconds?
    b.) Or can the PG ODBC driver help with this problem via ADO? (As far as I know, ADO can use server-side cursors.)
    c.) Does anybody have experience with lookup tables and performance? Is this a critical question or not? (Client memory usage included.)
    d.) If there is no way to avoid fetch hell with lookups, what can we do? Server-side joins, and custom code for lookup-field changes without a real lookup?

    Thanks for your help: dd

  • Trying to use an XSLT for an XML in ASP.NET

    - by Josemalive
    Hello, I have the following XSLT sheet:

        <?xml version="1.0" encoding="UTF-8" ?>
        <xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="1.0">
          <xsl:variable name="nhits" select="Answer[@nhits]"></xsl:variable>
          <xsl:output method="html" indent="yes"/>
          <xsl:template match="/">
            <div>
              <xsl:call-template name="resultsnumbertemplate"/>
            </div>
          </xsl:template>
          <xsl:template name="resultsnumbertemplate">
            <xsl:value-of select="$nhits"/> matches found
          </xsl:template>
        </xsl:stylesheet>

    And this is the XML that I'm trying to mix with the previous XSLT:

        <Answer xmlns="exa:com.exalead.search.v10" context="n%3Dsl-ocu%26q%3Dlavadoras"
                last="9" estimated="false" nmatches="219" nslices="0" nhits="219" start="0">
          <time>
            <Time interrupted="false" overall="32348" parse="0" spell="0" exec="1241"
                  synthesis="15302" cats="14061" kwds="14061">
              <sliceTimes>15272 </sliceTimes>
            </Time>
          </time>
        </Answer>

    I'm using an XslCompiledTransform, and that part is working fine:

        XslCompiledTransform transformer = new XslCompiledTransform();
        transformer.Load(HttpContext.Current.Server.MapPath("xslt\\" + requestvariables["xslsheet"].ToString()));
        transformer.Transform(xmlreader, null, writer);

    My problem comes when I try to put the "nhits" attribute value from the Answer element into a variable: my XSLT sheet renders nothing. Do you know what the cause could be? Could it be the xmlns attribute in my XML file? Thanks in advance. Best Regards. Jose
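
    Two things stand out (a sketch of a likely fix, untested against the poster's feed): the document element lives in the default namespace exa:com.exalead.search.v10, so the un-prefixed path Answer never matches anything, and Answer[@nhits] selects the element rather than the attribute. Declaring a prefix for that namespace and selecting the attribute itself would look like:

        <?xml version="1.0" encoding="UTF-8" ?>
        <xsl:stylesheet version="1.0"
                        xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
                        xmlns:exa="exa:com.exalead.search.v10">
          <xsl:output method="html" indent="yes"/>
          <!-- /exa:Answer/@nhits selects the attribute value itself -->
          <xsl:variable name="nhits" select="/exa:Answer/@nhits"/>
          <xsl:template match="/">
            <div><xsl:value-of select="$nhits"/> matches found</div>
          </xsl:template>
        </xsl:stylesheet>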

  • Represent multiple Null/Generic objects in an ActiveRecord association?

    - by slothbear
    I have a Casefile model that belongs_to a Doctor. In addition to all the "real" doctors, there are several generic Doctors: "self-treated", "not specified", and "removed" (it used to have a real doctor, but no longer does). I suspect there will be even more generic values in the future. I started with special "doctors" in the database, generated from seed. The generic Doctors only need to respond to the "name" and "real_doctor?" methods. This worked with one, was strained with two, and now feels completely broken. I want to change the behavior and can't figure out how to test it; a bad sign. Creating all the generic objects for testing is also trouble, including fake values to pass validation of the required Doctor attributes. The Null Object pattern works well for one generic object: the "name" method could check casefile.doctor.nil? and return "self-treated", as demonstrated by Craig Ambrose. What pattern should I use when there are multiple generic objects with very limited state?
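
    One option, offered as a hedged sketch (GenericDoctor and doctor_kind are made-up names, not part of the poster's schema): keep the generic doctors out of the database entirely and represent each one as a tiny value object that answers only name and real_doctor?:

        # Hypothetical null-object variants for the generic doctors.
        class GenericDoctor
          attr_reader :name

          def initialize(name)
            @name = name
          end

          def real_doctor?
            false
          end
        end

        class Casefile < ActiveRecord::Base
          belongs_to :doctor

          # 'doctor_kind' is an assumed column: nil for real doctors,
          # otherwise one of the generic keys below.
          GENERIC_DOCTORS = {
            "self"        => GenericDoctor.new("self-treated"),
            "unspecified" => GenericDoctor.new("not specified"),
            "removed"     => GenericDoctor.new("removed")
          }.freeze

          def effective_doctor
            GENERIC_DOCTORS[doctor_kind] || doctor
          end
        end

    Tests can then build a GenericDoctor directly, with no seed rows and no validation workarounds, and adding a new generic value is one more hash entry.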

  • Merging k sorted linked lists - analysis

    - by Kotti
    Hi! I am thinking about different solutions to one problem. Assume we have K sorted linked lists and we are merging them into one. All these lists together have N elements. The well-known solution is to use a priority queue and pop/push the first elements from every list, and I can understand why it takes O(N log K) time.

    But let's take a look at another approach. Suppose we have some MERGE_LISTS(LIST1, LIST2) procedure that merges two sorted lists, and it takes O(T1 + T2) time, where T1 and T2 stand for the LIST1 and LIST2 sizes. What we do now generally means pairing these lists and merging them pair-by-pair (if the number is odd, the last list, for example, could be ignored at the first steps). This generally means we have to make the following "tree" of merge operations, where N1, N2, N3... stand for the LIST1, LIST2, LIST3 sizes:

        Level 1:     O(N1 + N2) + O(N3 + N4) + O(N5 + N6) + ...
        Level 2:     O(N1 + N2 + N3 + N4) + O(N5 + N6 + N7 + N8) + ...
        ...
        Level log K: O(N1 + N2 + N3 + N4 + ... + NK)

    It looks obvious that there will be log(K) of these rows, each of them implementing O(N) operations, so the time for the MERGE(LIST1, LIST2, ..., LISTK) operation would actually equal O(N log K).

    My friend told me (two days ago) it would take O(K N) time. So, the question is: did I f%ck up somewhere, or is he actually wrong about this? And if I am right, why can't this 'divide & conquer' approach be used instead of the priority-queue approach?
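
    The pairwise analysis looks right. A sketch in Java (with a minimal, assumed ListNode type) to make the levels concrete: each pass halves the number of lists, so there are about log2(K) passes of O(N) total work each, i.e. O(N log K). An O(KN) cost is what the *different* strategy of folding lists in one at a time would give.

        import java.util.*;

        final class MergeK {
            // Minimal singly linked list node, assumed for illustration.
            static final class ListNode {
                int val;
                ListNode next;
                ListNode(int v) { val = v; }
            }

            // O(|a| + |b|) merge of two sorted lists.
            static ListNode merge2(ListNode a, ListNode b) {
                ListNode dummy = new ListNode(0), tail = dummy;
                while (a != null && b != null) {
                    if (a.val <= b.val) { tail.next = a; a = a.next; }
                    else                { tail.next = b; b = b.next; }
                    tail = tail.next;
                }
                tail.next = (a != null) ? a : b;
                return dummy.next;
            }

            // Pair up lists level by level: ~log2(K) levels, O(N) work per level.
            static ListNode mergeK(List<ListNode> lists) {
                if (lists.isEmpty()) return null;
                List<ListNode> level = new ArrayList<>(lists);
                while (level.size() > 1) {
                    List<ListNode> next = new ArrayList<>();
                    for (int i = 0; i + 1 < level.size(); i += 2) {
                        next.add(merge2(level.get(i), level.get(i + 1)));
                    }
                    if (level.size() % 2 == 1) next.add(level.get(level.size() - 1));
                    level = next;
                }
                return level.get(0);
            }
        }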

  • How can I set a drawable on a ListView in Android

    - by sxingfeng
    I am writing an app for Android 1.5. I want to use a complex ListView to display my data, showing an ImageView for a drawable object in each list item. I learned this from a demo:

        listData.put("Img", R.drawable.XXX);
        listData.put("Time", "100");
        listItems.add(listData);

    That displays correctly. However, I want to change the image at run time (the image may be generated at run time), so I changed the code as follows, but it fails. Can anyone help me? Many thanks!

        protected void onCreate(Bundle savedInstanceState) {
            // TODO Auto-generated method stub
            super.onCreate(savedInstanceState);
            setContentView(R.layout.item_list);
            itemListView = (ListView) findViewById(R.id.listview);
            ArrayList<HashMap<String, Object>> listItems = new ArrayList<HashMap<String, Object>>();
            for (int i = 0; i < XXX.size(); ++i) {
                HashMap<String, Object> listData = new HashMap<String, Object>();
                listData.put("Img", new Drawable(XXX));   // (1) the run-time image
                listData.put("Time", "100");              // (2)
                listItems.add(listData);                  // (3)
            }
            SimpleAdapter listItemAdapter = new SimpleAdapter(this, listItems,
                    R.layout.listitem,
                    new String[] { "Img", "Time" },
                    new int[] { R.id.listitem_img, R.id.listitem_time });
            itemListView.setAdapter(listItemAdapter);
        }

    listitem.xml:

        <?xml version="1.0" encoding="utf-8"?>
        <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
            android:layout_width="fill_parent"
            android:layout_height="wrap_content"
            android:paddingBottom="4dip"
            android:paddingLeft="12dip"
            android:paddingRight="12dip">
            <ImageView
                android:id="@+id/listitem_img"
                android:paddingTop="12dip"
                android:layout_width="wrap_content"
                android:layout_height="wrap_content" />
            <TextView
                android:id="@+id/listitem_time"
                android:layout_width="wrap_content"
                android:layout_height="wrap_content"
                android:textSize="20dip" />
        </LinearLayout>
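
    Two things stand out, offered as a hedged sketch rather than a verified fix: Drawable is abstract, so new Drawable(XXX) won't compile (a concrete class such as BitmapDrawable would be needed for a generated image), and SimpleAdapter's default binding only understands resource IDs and strings for ImageViews, so Drawable values need a custom ViewBinder:

        // Sketch: teach the adapter to bind Drawable instances to ImageViews.
        listItemAdapter.setViewBinder(new SimpleAdapter.ViewBinder() {
            public boolean setViewValue(View view, Object data, String textRepresentation) {
                if (view instanceof ImageView && data instanceof Drawable) {
                    ((ImageView) view).setImageDrawable((Drawable) data);
                    return true;   // we handled this binding
                }
                return false;      // let SimpleAdapter bind everything else
            }
        });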

  • Is it just me? I find LINQ to XML to be sort of cumbersome, compared to XPath.

    - by Cheeso
    I am a C# programmer, so I don't get to take advantage of the cool XML syntax in VB:

        Dim itemList1 = From item In rss.<rss>.<channel>.<item> _
                        Where item.<description>.Value.Contains("LINQ") Or _
                              item.<title>.Value.Contains("LINQ")

    Using C#, I find XPath to be easier to think about, easier to code, and easier to understand than performing a multi-nested select using LINQ to XML. Look at this syntax; it looks like Greek swearing:

        var waypoints = from waypoint in gpxDoc.Descendants(gpx + "wpt")
                        select new
                        {
                            Latitude = waypoint.Attribute("lat").Value,
                            Longitude = waypoint.Attribute("lon").Value,
                            Elevation = waypoint.Element(gpx + "ele") != null ?
                                waypoint.Element(gpx + "ele").Value : null,
                            Name = waypoint.Element(gpx + "name") != null ?
                                waypoint.Element(gpx + "name").Value : null,
                            Dt = waypoint.Element(gpx + "cmt") != null ?
                                waypoint.Element(gpx + "cmt").Value : null
                        };

    All the casting, the heavy syntax, the possibility of NullReferenceExceptions: none of this happens with XPath. I like LINQ in general, and I use it on object collections and databases, but my first go-round with querying XML led me right back to XPath. Is it just me? Am I missing something?

    EDIT: someone voted to close this as "not a real question". But it is a real question, stated clearly. The question is: am I misunderstanding something with LINQ to XML?
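
    For what it's worth, LINQ to XML's explicit conversion operators remove most of the null-checking: casting a missing XElement or XAttribute to string yields null instead of throwing. A sketch of the same query in that style (same assumed gpx namespace as above):

        var waypoints = from waypoint in gpxDoc.Descendants(gpx + "wpt")
                        select new
                        {
                            Latitude  = (string)waypoint.Attribute("lat"),
                            Longitude = (string)waypoint.Attribute("lon"),
                            // (string) on a missing element returns null
                            // instead of throwing, so no ?: guards needed.
                            Elevation = (string)waypoint.Element(gpx + "ele"),
                            Name      = (string)waypoint.Element(gpx + "name"),
                            Dt        = (string)waypoint.Element(gpx + "cmt")
                        };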

  • How to recalculate all-pairs shortest paths on-line if nodes are getting removed?

    - by Pavel Shved
    The latest news about the underground bombing made me curious about the following problem. Assume we have a weighted undirected graph whose nodes are sometimes removed. The problem is to re-calculate the shortest paths between all pairs of nodes quickly after such removals.

    With a simple modification of the Floyd-Warshall algorithm we can calculate the shortest paths between all pairs. These paths may be stored in a table, where shortest[i][j] contains the index of the next node on the shortest path between i and j (or a NULL value if there's no path). The algorithm requires O(n³) time to build the table, and each query shortest(i,j) takes O(1). Unfortunately, we would have to re-run this algorithm after each removal.

    The other alternative is to run a graph search on each query. This way each removal takes zero time to update an auxiliary structure (because there is none), but each query takes O(E) time.

    What algorithm can be used to "balance" the query and update time for the all-pairs shortest-paths problem when nodes of the graph are being removed?
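
    For reference, a sketch of the table the question describes (hypothetical Java; adjacency-matrix distances in, next-hop matrix out). The O(n³) rebuild cost after every removal is exactly what the question wants to avoid:

        // Floyd-Warshall with path reconstruction: next[i][j] is the node
        // to visit after i on a shortest i->j path (-1 if unreachable).
        static int[][] buildNext(double[][] dist) {
            int n = dist.length;
            int[][] next = new int[n][n];
            for (int i = 0; i < n; i++)
                for (int j = 0; j < n; j++)
                    next[i][j] = (dist[i][j] < Double.POSITIVE_INFINITY) ? j : -1;
            for (int k = 0; k < n; k++)
                for (int i = 0; i < n; i++)
                    for (int j = 0; j < n; j++)
                        if (dist[i][k] + dist[k][j] < dist[i][j]) {
                            dist[i][j] = dist[i][k] + dist[k][j];
                            next[i][j] = next[i][k];  // route through k
                        }
            return next;
        }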

  • Memory Bandwidth Performance for Modern Machines

    - by porgarmingduod
    I'm designing a real-time system that occasionally has to duplicate a large amount of memory. The memory consists of non-tiny regions, so I expect the copying performance will be fairly close to the maximum bandwidth the relevant components (CPU, RAM, MB) can manage. This led me to wonder what kind of raw memory bandwidth a modern commodity machine can muster.

    My aging Core2Duo gives me 1.5 GB/s if I use one thread to memcpy() (and understandably less if I memcpy() with both cores simultaneously). While 1.5 GB/s is a fair amount of data, the real-time application I'm working on will have something like 1/50th of a second, which means 30 MB. Basically, almost nothing. And perhaps worst of all, as I add multiple cores, I can process a lot more data without any increased performance for the needed duplication step. But a low-end Core2Duo isn't exactly hot stuff these days.

    Are there any sites with information, such as actual benchmarks, on raw memory bandwidth on current and near-future hardware? Furthermore, for duplicating large amounts of data in memory, are there any shortcuts, or is memcpy() as good as it will get? Given a bunch of cores with nothing to do but duplicate as much memory as possible in a short amount of time, what's the best I can do?
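
    To put numbers on a specific box, a minimal probe along these lines can help (a sketch in C; the buffer size and repeat count are arbitrary choices, results swing with cache and page-fault effects, and error handling is omitted):

        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>
        #include <time.h>

        int main(void) {
            size_t size = 64 * 1024 * 1024;      /* 64 MB: well past the caches */
            char *src = malloc(size), *dst = malloc(size);
            memset(src, 1, size);                 /* fault the pages in first */
            memset(dst, 0, size);

            struct timespec t0, t1;
            clock_gettime(CLOCK_MONOTONIC, &t0);
            for (int i = 0; i < 8; i++)
                memcpy(dst, src, size);
            clock_gettime(CLOCK_MONOTONIC, &t1);

            double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
            printf("%.2f GB/s\n", 8.0 * size / secs / 1e9);
            free(src); free(dst);
            return 0;
        }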

  • How do I programmatically create a bootable CD?

    - by Nicholas Flynt
    I'm using a barebones tutorial as the basis for an OS I'm working on, and it seems to be an older tutorial: it has me compiling the kernel down to a floppy image and then loading it with GRUB. Basically, I still want to use GRUB, but I'd like to have my OS run from a CD instead. The main reason is that I don't actually have a real floppy drive available (I'm testing in VirtualBox currently) and thus have no way to test my OS on real hardware.

    I've been poking around on the net, and I can find lots of utilities that create a bootable CD from a floppy image, but these all seem to require an actual floppy drive, and it's not really what I'm looking for anyway. I'd like to end up with a bootable CD during my make step, ideally without needing to first place the image on a floppy, which seems rather pointless.

    I guess the easy way to answer this: how do I set up GRUB to read my kernel image from a CD? Will I need a special utility to do this from Windows? (The kernel can't compile itself yet; that's not for a looong while.) Thanks!
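
    The usual GRUB-legacy route, as I remember it (the file layout here is illustrative): build an El Torito ISO around GRUB's stage2_eltorito image, so GRUB reads the kernel straight off the CD and no floppy is involved. The mkisofs call can live in the Makefile:

        # Assumed layout: iso/boot/grub/stage2_eltorito, iso/boot/kernel.bin,
        # and iso/boot/grub/menu.lst with an entry pointing at the kernel.
        mkisofs -R -b boot/grub/stage2_eltorito \
                -no-emul-boot -boot-load-size 4 -boot-info-table \
                -o os.iso iso/

    On Windows, a cygwin build of mkisofs (or an mkisofs-compatible tool) should work the same way.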

  • Are there any prototype-based languages with a whole development cycle?

    - by Kaveh Shahbazian
    Are there any real-world prototype-based programming languages with a whole development cycle? "A whole development cycle" like Ruby and Python: web frameworks, scripting/interacting with the system, tools for debugging, profiling, etc. Thank you.

    A brief note on PBPLs (let's call these languages PBPL: prototype-based programming language): There are some PBPLs out there. Some are widely used, like JavaScript (which Node.js may bring into the field - or may not!). Another is ActionScript, which is also a PBPL but tightly bound to the Flash VM (is it correct to say so?). Of the less known ones I can speak of Lua, which has a strong reputation in game development (mostly spread by WoW) but never took off as a full language. Lua has a table concept which can provide some sort of prototype-based programming facility. There is also JScript (the Windows scripting tool), which has already been made pointless by the newcomer PowerShell (I have used JScript to manipulate IIS, but I never understood what JScript is!). Others can be named, like Io (indeed very, very neat; you will fall in love with it; absolutely impossible to use) and REBOL (what is this all about? A proprietary scripting tool? You must be kidding!) and newLISP (which is actually a full language, but no one has ever heard of it). For sure there are many more to list here, but either I do not remember them or I did not understand them as a real-world thing (like Self).

  • VB .Net - Reflection: Reflected Method from a loaded Assembly executes before calling method. Why?

    - by pu.griffin
    When I am loading an assembly dynamically and then calling a method from it, the method from the assembly appears to execute before the code in the method that is calling it. It does not appear to execute in a serial manner as I would expect. Can anyone shine some light on why this might be happening? Below is some code to illustrate what I am seeing. The code from the some.dll assembly calls a method named PerformLookup. For testing, I put a similar MessageBox-type output with "PerformLookup Time: " as the text. What I end up seeing is:

        First:  "PerformLookup Time: 40:842"
        Second: "initIndex Time: 45:873"

        Imports System
        Imports System.Data
        Imports System.IO
        Imports Microsoft.VisualBasic.Strings
        Imports System.Reflection

        Public Class Class1
            Public Function initIndex(indexTable As System.Collections.Hashtable) As System.Data.DataSet
                Dim writeCode As String
                MessageBox.Show("initIndex Time: " & Date.Now.Second.ToString() & ":" & Date.Now.Millisecond.ToString())
                System.Threading.Thread.Sleep(5000)
                writeCode = RefreshList()
            End Function

            Public Function RefreshList() As String
                Dim asm As System.Reflection.Assembly
                Dim t As Type()
                Dim ty As Type
                Dim m As MethodInfo()
                Dim mm As MethodInfo
                Dim retString As String
                retString = ""
                Try
                    asm = System.Reflection.Assembly.LoadFrom("C:\Program Files\some.dll")
                    t = asm.GetTypes()
                    ty = asm.GetType(t(28).FullName) 'known class location
                    m = ty.GetMethods()
                    mm = ty.GetMethod("PerformLookup")
                    Dim o As Object
                    o = Activator.CreateInstance(ty)
                    Dim oo As Object()
                    retString = mm.Invoke(o, Nothing).ToString()
                Catch Ex As Exception
                End Try
                Return retString
            End Function
        End Class

  • Prototype VS jQuery

    - by aSeptik
    Hi all! First off, thanks for your time. Let me get directly to the point by saying that I don't want to open another "Yet Another JS vs JS" thread; the web is almost buried in those! I also want to make a premise: I have used both of these JS frameworks and I love them, and I know there are a lot of good JS frameworks around, maybe better than these two; but, as you know, we need to be performant and quick in doing our work, so I want to keep to these two, which are the most famous and therefore have great community support!

    Now, if you are going to tell me that the choice depends on what I'm going to do, then I can think: hey, in the end they are both JavaScript, they have almost the same methods and functions, and achieving a task needs almost the same lines of code in either! So I want to hear from you guys who have really used one of these for a real Rich Internet Application: what are the real strengths, and what weaknesses did you find? Regards.

  • Web-based game in Python + Django and client browser polling

    - by ty
    I am creating a text-based game that implements a basic model in which multiple (10+) players interact with data, and one moderator watches them and sets certain environmental statistics that affect gameplay.

    Recently I have begun to familiarize myself with Django. It seems to me that it would be an excellent tool for creating a game quickly, particularly because the nature of my game depends largely on sets of data (which lends itself quite well to a database). I am wondering how to "push" changes made by the game moderator to the players (for example, the moderator can decide to display an image to all players). The game is turn-based, not real-time, but certain messages need to be pushed out in roughly real time.

    My thoughts: I could have each player's browser poll a status periodically (say, every 30 seconds) to see if there is a message from a moderator. But this forces a lag and means different players might receive it at different times. And reducing this interval to under 10 seconds seems like a bad idea for the server. Is there a better way to inform clients of changes? Would you suggest something other than using a web framework like Django? Thanks!
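
    For the polling route, the endpoint can at least be made cheap by returning only what is new. A sketch (hypothetical model and field names, written in current Django/Python idioms rather than those of any particular era):

        # views.py - hypothetical polling endpoint
        import json
        from datetime import datetime
        from django.http import HttpResponse
        from myapp.models import ModeratorMessage  # assumed model

        def poll_messages(request):
            # The client sends the timestamp of the last message it saw.
            since = datetime.fromisoformat(
                request.GET.get("since", "1970-01-01T00:00:00"))
            msgs = (ModeratorMessage.objects
                    .filter(created__gt=since)
                    .order_by("created"))
            payload = [{"created": m.created.isoformat(), "text": m.text}
                       for m in msgs]
            return HttpResponse(json.dumps(payload),
                                content_type="application/json")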

  • Communication between EJB3 Instances (JEE inter-bean communication) possible?

    - by Hank
    I'm designing a part of a JEE6 application consisting of EJB3 beans. Part of the requirements are multiple parallel (say a few hundred) long-running (over days) database hunts. Individual hunts have different search parameters (start time, end time, query filter), and parameters may get changed over time. Currently I'm thinking of the following:

        - SearchController (Stateless Session Bean): formulates a set of search parameters and sends it off to a SearchListener via JMS.
        - SearchListener (Message Driven Bean): receives search parameters and instantiates a SearchWorker with the parameters.
        - SearchWorker (SLSB): hunts repeatedly through the database; when it finds something, the result is sent off via JMS and the search continues; when the given end time has been reached, it ends.

    What I'm wondering now:

        - Is there a problem with EJB3 instances running for days? (Other than that I need to be able to deal with container restarts...)
        - How do I know how many, and which, EJB instances of SearchWorker are currently running?
        - Is it possible to communicate with them individually (similar to sending a System V signal to a Unix process), e.g. to send new parameters, to end an instance, etc.?

  • I write barely functional scripts that tend to not be reusable and make the baby jesus cry. Please help

    - by maxxpower
    I received a request to add around 100 users to a Linux box. The users are already in LDAP, so I can't just use newusers and point it at a text file. Another admin is taking care of the LDAP piece, so all I have to do is create all the home directories and chown them to the correct user once he adds the users to the box. Creating the directories isn't a problem, but I'd like a more elegant script for chowning them to the correct user. What I have currently basically looks like:

        chown -R testuser1:testgroup1 /home/testuser1; chown -R testuser2:testgroup2 /home/testuser2; chown -R testuser3:testgroup1 /home/testuser3

    Basically, I took the request with the user name and group name, popped it into Excel, added a column of "chown -R" at the front, then a column of "/", copied and pasted the username column after it, and then added a column of ";" and dragged it down to the second-to-last row. Popped it into Notepad, ran some quick find-and-replaces, and in less than a minute I had a completed request and a sad, empty feeling. I know this was a really ghetto method, and I'm trying to get away from using Excel by learning new scripting techniques, so here's my real question.

    tl;dr I made 100 home directories and chowned them to the correct users, but it was ugly. Actual question below.

    You have a file named idlist that looks like this (only with, say, 1000 users and real usernames and groups):

        testuser1 testgroup1
        testuser2 testgroup2
        testuser3 testgroup1

    Write a script that creates home directories for all the users and chowns the created directories to the correct user and group. To make the directories I used the following (feel free to flame/correct me on this as well):

        var=`cut -f1 -d" " idlist`
        mkdir $var
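
    A sketch of the loop form of this (bash, assuming the two-column idlist format shown above; untested against the real user list):

        #!/bin/bash
        # Read "user group" pairs and build/chown each home directory.
        while read -r user group; do
            [ -z "$user" ] && continue        # skip blank lines
            mkdir -p "/home/$user"
            chown -R "$user:$group" "/home/$user"
        done < idlist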

  • How to design Models the correct way: Object-oriented or "Package"-oriented?

    - by ajsie
    I know that in OOP you want every object (from a class) to be a "thing", e.g. user, validator, etc. I know the basics of MVC and how the different parts interact with each other. However, I wonder whether the models in MVC should be designed according to traditional OOP design; that is to say, should every model be a database/table/row (solution 2)? Or is the intention more to collect methods that affect the same table, or a bunch of related tables (solution 1)?

    An example: an Address book module in CodeIgniter, where I want to be able to "CRUD" a Contact and add/remove it to/from a CRUD-able Contact Group.

    Models solution 1: bunching all related methods together (not a real object, rather a "package"):

        class Contacts extends Model {
            function create_contact() {}
            function read_contact() {}
            function update_contact() {}
            function delete_contact() {}
            function add_contact_to_group() {}
            function delete_contact_from_group() {}
            function create_group() {}
            function read_group() {}
            function update_group() {}
            function delete_group() {}
        }

    Models solution 2: the OOP way (one class per file):

        class Contact extends Model {
            private $name = '';
            private $id = '';
            function create_contact() {}
            function read_contact() {}
            function update_contact() {}
            function delete_contact() {}
        }

        class ContactGroup extends Model {
            private $name = '';
            private $id = '';
            function add_contact_to_group() {}
            function delete_contact_from_group() {}
            function create_group() {}
            function read_group() {}
            function update_group() {}
            function delete_group() {}
        }

    I don't know how to think when I want to create the models, and the above examples are my real tasks for creating an Address book. Should I just bunch all functions together in one class? Then the class contains different logic (contact and group), so it cannot hold properties that are specific to either one of them. Solution 2 works according to OOP, but I don't know why I should make such a division. What would the benefits be of having a Contact object, for example? It's surely not a User object, so why should a Contact "live" with its own state (properties and methods)? You experienced guys with OOP/MVC, please shed some light on how one should think here in this very concrete task.

  • General many-to-many relationship problem (PostgreSQL)

    - by David
    Hi, I have two tables:

        CREATE TABLE "public"."auctions" (
            "id" VARCHAR(255) NOT NULL,
            "auction_value_key" VARCHAR(255) NOT NULL,
            "ctime" TIMESTAMP WITHOUT TIME ZONE NOT NULL,
            "mtime" TIMESTAMP WITHOUT TIME ZONE NOT NULL,
            CONSTRAINT "pk_XXXX2" PRIMARY KEY("id")
        );

    and

        CREATE TABLE "public"."auction_values" (
            "id" NUMERIC DEFAULT nextval('default_seq'::regclass) NOT NULL,
            "fk_auction_value_key" VARCHAR(255) NOT NULL,
            "key" VARCHAR(255) NOT NULL,
            "value" TEXT,
            "ctime" TIMESTAMP WITHOUT TIME ZONE NOT NULL,
            "mtime" TIMESTAMP WITHOUT TIME ZONE NOT NULL,
            CONSTRAINT "pk_XXXX1" PRIMARY KEY("id")
        );

    If I want to create a many-to-many relationship on the auction_value_key like this:

        ALTER TABLE "public"."auction_values"
            ADD CONSTRAINT "auction_values_fk" FOREIGN KEY ("fk_auction_value_key")
            REFERENCES "public"."auctions"("auction_value_key")
            ON DELETE NO ACTION ON UPDATE NO ACTION NOT DEFERRABLE;

    I get this SQL error:

        ERROR: there is no unique constraint matching given keys for referenced table "auctions"

    Question: As you might see, I want "auction_values" to be "reused" by different auctions without duplicating them for every auction... so I don't want a key relation on the "id" field in the auctions table... Am I thinking wrong here, or what is the deal? ;) Thanks
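
    For reference, the two usual ways out, sketched with made-up constraint and table names. PostgreSQL insists that a foreign key reference a unique column; making the column unique gives one-to-many (each value key belongs to one auction), while a true many-to-many needs a junction table:

        -- Option 1: satisfy the FK by making the referenced column unique
        -- (note: this yields one-to-many, not many-to-many).
        ALTER TABLE "public"."auctions"
            ADD CONSTRAINT "auctions_avk_unique" UNIQUE ("auction_value_key");

        -- Option 2: a junction table for a true many-to-many relationship.
        CREATE TABLE "public"."auction_value_links" (
            "auction_id" VARCHAR(255) NOT NULL
                REFERENCES "public"."auctions"("id"),
            "auction_value_id" NUMERIC NOT NULL
                REFERENCES "public"."auction_values"("id"),
            PRIMARY KEY ("auction_id", "auction_value_id")
        );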

  • A more elegant way to start a multithread alarm in Rebol VID? (What's the equivalent of the load event?)

    - by Rebol Tutorial
    I want to trigger an alarm when the GUI starts. I can't see what the equivalent of other languages' load event is in Rebol VID, so I put it in the periodic handler, which is quite convoluted. How can I do this more cleanly?

        alarm-data: none

        set-alarm: func [
            "Set alarm for future time."
            seconds "Seconds from now to ring alarm."
            message [string! unset!] "Message to print on alarm."
        ] [
            alarm-data: reduce [now/time + seconds message]
        ]

        ring: func [
            "Action for when alarm comes due."
            message [string! unset!]
        ] [
            set-face monitor either message [message] ["RIIIING!"]
            ; Your sound playing can also go here (my computer doesn't have speakers).
        ]

        periodic: func [
            "Called every second, checks alarms."
            fact action event
        ] [
            either alarm-data [
                ; Update alarm countdown.
                set-face monitor rejoin [
                    "Alarm will ring in " to integer! alarm-data/1 - now/time " seconds."
                ]
                ; Check alarm.
                if now/time > alarm-data/1 [
                    ring alarm-data/2
                    ;alarm-data: none ; Reset once fired.
                ]
            ][
                either value? 'message [
                    set-alarm seconds message
                ][
                    set-alarm seconds "Alarm triggered!"
                ]
            ]
        ]

        alarm: func [seconds message [string! unset!]] [
            system/words/seconds: seconds
            if value? 'message [
                system/words/message: message
            ]
            view layout [
                monitor: text 256 "" rate 1 feel [engage: :periodic]
                button 256 "re/start countdown" [
                    either value? 'message [
                        set-alarm seconds message
                    ][
                        set-alarm seconds "Alarm triggered!"
                    ]
                    set-face monitor "Alarm set."
                ]
            ]
        ]

  • Problem with fork exec kill when redirecting output in Perl

    - by Edu
    I created a script in Perl to run programs with a timeout. If the program being executed takes longer than the timeout, the script kills the program and returns the message "TIMEOUT". The script worked quite well until I decided to redirect the output of the executed program. When stdout and stderr are being redirected, the program executed by the script is not being killed, because it has a pid different from the one I got from fork. It seems Perl executes a shell that executes my program in the case of redirection. I would like to have the output redirection but still be able to kill the program in the case of a timeout. Any ideas on how I could do that? A simplified version of my script is:

        #!/usr/bin/perl
        use strict;
        use warnings;
        use POSIX ":sys_wait_h";

        my $timeout = 5;
        my $cmd = "very_long_program 1>&2 > out.txt";
        my $pid = fork();
        if ( $pid == 0 ) {
            exec($cmd) or print STDERR "Couldn't exec '$cmd': $!";
            exit(2);
        }
        my $time = 0;
        my $kid = waitpid($pid, WNOHANG);
        while ( $kid == 0 ) {
            sleep(1);
            $time++;
            $kid = waitpid($pid, WNOHANG);
            print "Waited $time sec, result $kid\n";
            if ($timeout > 0 && $time > $timeout) {
                print "TIMEOUT!\n";
                # Kill process
                kill 9, $pid;
                exit(3);
            }
        }
        if ( $kid == -1 ) {
            print "Process did not exist\n";
            exit(4);
        }
        print "Process exited with return code $?\n";
        exit($?);

    Thanks for any help.
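
    A sketch of the usual fix (hedged: signal semantics vary a little across platforms): a command string containing shell metacharacters is run via /bin/sh, so $pid is the shell's pid, not the program's. Putting the child into its own process group and killing the group reaches the shell and its children alike:

        # In the child, before exec: become a process-group leader.
        if ( $pid == 0 ) {
            setpgrp(0, 0);   # new process group whose pgid equals our pid
            exec($cmd) or print STDERR "Couldn't exec '$cmd': $!";
            exit(2);
        }

        # In the parent, on timeout: a negative signal kills the whole group.
        kill -9, $pid;       # signals the shell and its grandchildren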

  • Extracting note onset from MIDI

    - by Dolphin
    Hi, I need to extract musical features (note details: pitch, duration, rhythm, loudness, note start time) from a polyphonic MIDI file (having two staves, treble and bass; the bass may also have chords). I'm using the jMusic API to extract these details from the MIDI file. My approach is to go through each score, into parts, then phrases and finally notes, and extract the details.

    With my approach, it reads all the treble notes first and then the bass notes, but chords are not captured (i.e. only a single note of the chord is taken), and I cannot identify from which point onwards the bass notes start. So what I tried was to get the note onsets (i.e. the start time of the note being played), since the start times of the treble and bass notes at the beginning of the piece should be the same. But I cannot extract the note onset using the jMusic API; each time it shows 0.0.

    Is there any way I can identify the voice (treble or bass) of a note? And also all the notes of a chord? How is the voice or note onset for each note stored in MIDI? Is this different for each MIDI file? Any insight is greatly appreciated. Thanks in advance.

  • Oracle syntax - should we have to choose between the old and the new?

    - by Martin Milan
    Hi, I work on a code base in the region of 1,000,000 lines of source, in a team of around eight developers. Our code is basically an application using an Oracle database, but the code has evolved over time (we have plenty of source code from the mid-nineties in there!).

    A dispute has arisen amongst the team over the syntax that we are using for querying the Oracle database. At the moment, the overwhelming majority of our queries use the "old" Oracle syntax for joins, meaning we have code that looks like this...

    Example of an inner join:

        select customers.*, orders.date, orders.value
        from customers, orders
        where customers.custid = orders.custid

    Example of an outer join:

        select customers.custid, contacts.ContactName, contacts.ContactTelNo
        from customers, contacts
        where customers.custid = contacts.custid(+)

    As new developers have joined the team, we have noticed that some of them seem to prefer SQL-92 queries, like this:

    Example of an inner join:

        select customers.*, orders.date, orders.value
        from customers
        inner join orders on (customers.custid = orders.custid)

    Example of an outer join:

        select customers.custid, contacts.ContactName, contacts.ContactTelNo
        from customers
        left join contacts on (customers.custid = contacts.custid)

    Group A say that everyone should be using the "old" syntax: we have lots of code in this format, and we ought to value consistency. We don't have time to go all the way through the code now rewriting database queries, and it wouldn't pay us if we did. They also point out that "this is the way we've always done it, and we're comfortable with it..."

    Group B, however, agree that we don't have the time to go back and change existing queries, but say we really ought to adopt the "new" syntax in code that we write from here on in. They say that developers only really look at a single query at a time, and that so long as developers know both syntaxes there is nothing to be gained from rigidly sticking to the old syntax, which might be deprecated at some point in the future.

    Without declaring with which group my loyalties lie, I am interested in hearing the opinions of impartial observers - so let the games commence! Martin.

    Ps. I've made this a community wiki so as not to be seen as just blatantly chasing after question points...
