Search Results

Search found 59272 results on 2371 pages for 'time stamp'.


  • Why do I have to unserialize this serialized value twice? (wordpress/bbpress maybe_serialize)

    - by 55skidoo
    What am I doing wrong here? I'm serializing a value, storing it in a database (table bb_meta), retrieving it... OK so far... but then I have to unserialize it twice. Shouldn't I be able to just unserialize once? This seems to work, but I'm wondering what point about serialization I'm missing here.

        //check database to see if user has ever visited before.
        $querystring = $bbdb->prepare( "SELECT `meta_value` FROM `$bbdb->meta` WHERE `object_type` = %s AND `object_id` = %s AND `meta_key` = %s LIMIT 1", $bbtype, $bb_this_thread, $bbuser );
        $bb_last_visits = $bbdb->get_row($querystring, OBJECT);

        //if $bb_last_visits is empty, add time() as the meta value using bb_update_meta
        if (empty($bb_last_visits)) {
            $first_visit = time();
            echo 'serialized first visit: ' . $bb_this_visit_time_serialized = serialize(array($bb_this_thread => $first_visit));
            //add to database, bb_meta table
            bb_update_meta( $bb_this_thread, $bbuser, $bb_this_visit_time_serialized, $bbtype );
            echo '$bb_last_visits was empty. Setting first visit time as ' . $bb_this_visit_time_serialized . '<br>';
        } else {
            //else, test by unserializing the data for use.
            echo 'last visit time already set: ';
            echo $bb_last_visits->meta_value;
            echo '<br>';
            //fatal error - echo 'unserialized: ' . $bb_last_visits_unserialized = unserialize($bb_last_visits[0]->meta_value);
            echo '<br>';
            echo 'unserialize: ' . $unserialized_visits = unserialize($bb_last_visits->meta_value);
            echo '<br>';
            echo 'hmm, need to unserialize again??: ';
            echo $unserialized_unserialized_visits = unserialize($unserialized_visits);
            echo '<br>';
            echo 'hey look, it\'s an array value I can finally use now. phew: ' . $unserialized_unserialized_visits[$bb_this_thread];
        }
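
    A hedged note on the likely mechanism (assuming bbPress's bb_update_meta() runs its value through a WordPress-style maybe_serialize(), which re-serializes a string that already looks serialized in order to store it safely): pre-serializing the array means the meta layer wraps it a second time, hence the two unserialize() calls on the way out. A minimal sketch of the trap and the usual fix:

        // the trap, roughly:
        $value = array($bb_this_thread => time());
        $wrapped_once  = serialize($value);         // done by the caller
        $wrapped_twice = serialize($wrapped_once);  // done again by the meta layer
        // ...so reading back needs unserialize(unserialize(...))

        // the usual fix: hand over the raw array and let the meta layer
        // serialize it exactly once
        bb_update_meta($bb_this_thread, $bbuser, array($bb_this_thread => time()), $bbtype);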

    Read the article

  • Python MD5 Hash Faster Calculation

    - by balgan
    Hi everyone. I will try my best to explain my problem and my line of thought on how I think I can solve it. I use this code:

        for root, dirs, files in os.walk(downloaddir):
            for infile in files:
                f = open(os.path.join(root, infile), 'rb')
                filehash = hashlib.md5()
                while True:
                    data = f.read(10240)
                    if len(data) == 0:
                        break
                    filehash.update(data)
                print "FILENAME: ", infile
                print "FILE HASH: ", filehash.hexdigest()

    and using

        start = time.time()
        elapsed = time.time() - start

    I measure how long it takes to calculate a hash. Pointing my code at a 653 MB file, this is the result:

        root@Mars:/home/tiago# python algorithm-timer.py
        FILENAME: freebsd.iso
        FILE HASH: ace0afedfa7c6e0ad12c77b6652b02ab
        12.624
        root@Mars:/home/tiago# python algorithm-timer.py
        FILENAME: freebsd.iso
        FILE HASH: ace0afedfa7c6e0ad12c77b6652b02ab
        12.373
        root@Mars:/home/tiago# python algorithm-timer.py
        FILENAME: freebsd.iso
        FILE HASH: ace0afedfa7c6e0ad12c77b6652b02ab
        12.540

    OK, so about 12 seconds for a 653 MB file. My problem is that I intend to use this code in a program that will run through multiple files, some of them possibly 4/5/6 GB, and it will take way longer to calculate. What I am wondering is whether there is a faster way to calculate the hash of a file. Maybe by doing some multithreading? I used another script to check CPU use second by second, and I see that my code is only using 1 of my 2 CPUs, and only at 25% max. Any way I can change this? Thank you all in advance for the given help.
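
    A hedged sketch of one direction (an assumption, not a guarantee - whether it helps depends on whether the disk or the CPU is the bottleneck): MD5 itself cannot be split across threads for a single file, but independent files can be hashed in parallel processes, and larger reads cut per-call overhead. The chunk size, pool size, and directory name below are illustrative:

        import hashlib
        import os
        from multiprocessing import Pool

        def md5_of(path, chunk=1048576):            # 1 MiB reads instead of 10 KiB
            filehash = hashlib.md5()
            with open(path, 'rb') as f:
                data = f.read(chunk)
                while data:
                    filehash.update(data)
                    data = f.read(chunk)
            return path, filehash.hexdigest()

        if __name__ == '__main__':
            paths = [os.path.join(root, name)
                     for root, dirs, files in os.walk('downloaddir')
                     for name in files]
            pool = Pool()                            # defaults to one worker per CPU
            for path, digest in pool.imap_unordered(md5_of, paths):
                print('%s  %s' % (path, digest))
            pool.close()
            pool.join()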

    Read the article

  • How Search Engine Bots Crawl Forums?

    - by Waleed Eissa
    If I have a forum site with a large number of threads, will the search engine bot crawl the whole site every time? Say I have over 1,000,000 threads on my site - will they all get crawled every time the bot visits? Or how does it work? I want my website to be indexed, but I don't want the bot to kill my website! In other words, I don't want the bot to keep crawling the old threads again and again every time it crawls my website. Also, what about the pages crawled before? Will the bot request them every time it crawls my website, to make sure they are still on the site? I'm asking because I only link to the latest threads - i.e. there's a page that contains a list of all the latest threads, but I don't link to the older ones, so they have to be explicitly requested by URL, e.g. http://www.mysite.com/showthread.aspx?threadid=7 . Will this work to stop the bot from bringing my site down and consuming all my bandwidth? P.S. The site is still under development, but I want to know in order to design the site so that search engine bots don't bring it down. Thanks

    Read the article

  • Having trouble animating Line in D3.js using and array of objects as data

    - by user1731245
    I can't seem to get an animated transition between line graphs when I pass in a new set of data. I am using an array of objects as data, like this:

        [{
            clicks: 40,
            installs: 10,
            time: "1349474400000"
        }, {
            clicks: 61,
            installs: 3,
            time: "1349478000000"
        }];

    I am using this code to set up my ranges / axes:

        var xRange = d3.time.scale().range([0, w]),
            yRange = d3.scale.linear().range([h, 0]),
            xAxis = d3.svg.axis().scale(xRange).tickSize(-h).ticks(6).tickSubdivide(false),
            yAxis = d3.svg.axis().scale(yRange).ticks(5).tickSize(-w).orient("left");

        var clicksLine = d3.svg.line()
            .interpolate("cardinal")
            .x(function(d) { return xRange(d.time); })
            .y(function(d) { return yRange(d.clicks); });

        var clickPath;

        function drawGraphs(data) {
            clickPath = svg.append("g")
                .append("path")
                .data([data])
                .attr("class", "clicks")
                .attr("d", clicksLine);
        }

        function updateGraphs(data) {
            svg.select('path.clicks')
                .data([data])
                .attr("d", clicksLine)
                .transition()
                .duration(500)
                .ease("linear");
        }

    I have tried just about everything to be able to pass in new data and see an animation between graphs. Not sure what I am missing. Does it have something to do with using an array of objects instead of just a flat array of numbers as data?
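
    A hedged guess at the fix (based on how D3 transitions interpolate attributes, not on seeing the running page): in updateGraphs the path's "d" attribute is set to its final shape before the transition is created, so by the time the transition runs there is no change left to animate. Moving the attr onto the transition lets D3 tween from the old path to the new one:

        // sketch: bind the new data first, then animate the "d" change
        function updateGraphs(data) {
            svg.select('path.clicks')
                .data([data])
                .transition()
                .duration(500)
                .ease("linear")
                .attr("d", clicksLine);  // D3 now interpolates old path -> new path
        }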

    Read the article

  • PHP database simulation

    - by Emdiesse
    I have a PHP script that selects items from a database based upon the time they were placed in there, and deletes them if they are older than 5 minutes. Basically, I now want to simulate what would happen if this database were being updated regularly. So I was considering sticking in some code that loads an XML file, then parses it into the database based upon the time data located within a node of the XML data... but the problem is that I want it to continually loop through and enter this data, so it would never actually run the other processes. So I was thinking of having another PHP script that could do this independently of the PHP script that is going to display the data.

    In theory: I am looking to have a button that I can press which will run some PHP code to load an XML file from a directory on my web server, and then iterate through the data, sending it to a database, based upon the time within a node and the time the script was first called.

    So, back to my page that displays the data: if I continually hit refresh, it will contain different results each time, because data is being added by the other process and this PHP script removes the older data when it is refreshed.

    Any information on this? Is there a way I can silently, and safely, run a PHP script without it being loaded into a browser... like a thread!?
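
    A hedged sketch of the usual answer to the closing question: PHP has a command-line interpreter, so the feeder script can run with no browser involved - either fired repeatedly by cron, or started once in the background. The paths, filenames, and one-minute interval here are illustrative assumptions:

        # crontab entry: run the XML feeder every minute via the PHP CLI
        * * * * * /usr/bin/php /var/www/simulation/feeder.php >> /var/log/feeder.log 2>&1

        # or start a long-running loop once, detached from the terminal
        nohup php /var/www/simulation/feeder.php > /dev/null 2>&1 &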

    Read the article

  • Is 1/0 a legal Java expression?

    - by polygenelubricants
    The following compiles fine in my Eclipse:

        final int j = 1/0; // compiles fine!!!
        // throws ArithmeticException: / by zero at run-time

    Java prevents a lot of "dumb code" from even compiling in the first place (e.g. "Five" instanceof Number doesn't compile!), so the fact that this didn't generate as much as a warning was very surprising to me. The intrigue deepens when you consider the fact that constant expressions are allowed to be optimized at compile time:

        public class Div0 {
            public static void main(String[] args) {
                final int i = 2+3;
                final int j = 1/0;
                final int k = 9/2;
            }
        }

    Compiled in Eclipse, the above snippet generates the following bytecode (javap -c Div0):

        Compiled from "Div0.java"
        public class Div0 extends java.lang.Object{
        public Div0();
          Code:
           0: aload_0
           1: invokespecial #8; //Method java/lang/Object."<init>":()V
           4: return

        public static void main(java.lang.String[]);
          Code:
           0: iconst_5
           1: istore_1 // "i = 5;"
           2: iconst_1
           3: iconst_0
           4: idiv
           5: istore_2 // "j = 1/0;"
           6: iconst_4
           7: istore_3 // "k = 4;"
           8: return
        }

    As you can see, the i and k assignments are optimized as compile-time constants, but the division by 0 (which must have been detectable at compile time) is simply compiled as is. javac 1.6.0_17 behaves even more strangely, compiling silently but excising the assignments to i and k completely out of the bytecode (probably because it determined that they're not used anywhere) but leaving the 1/0 intact (since removing it would cause entirely different program semantics).

    So the questions are:

    - Is 1/0 actually a legal Java expression that should compile anytime, anywhere? What does the JLS say about it?
    - If this is legal, is there a good reason for it? What good could this possibly serve?

    Read the article

  • Slowdowns when reading from an urlconnection's inputstream (even with byte[] and buffers)

    - by user342677
    OK, so after spending two days trying to figure out the problem, and reading about a zillion articles, I finally decided to man up and ask for some advice (my first time here).

    Now to the issue at hand - I am writing a program which will parse API data from a game, namely battle logs. There will be A LOT of entries in the database (20+ million), so the parsing speed for each battle log page matters quite a bit. The pages to be parsed look like this: http://api.erepublik.com/v1/feeds/battle_logs/10000/0 (view the source if using Chrome; it doesn't display the page right). Each page has 1000 hit entries, followed by a little battle info (the last page will have <1000, obviously). On average, a page contains 175000 characters, UTF-8 encoding, XML format (v 1.0). The program will run locally on a good PC; memory is virtually unlimited (so creating byte[250000] is quite OK). The format never changes, which is quite convenient.

    Now, I started off as usual:

        //global vars, class declaration skipped
        public WebObject(String url_string, int connection_timeout, int read_timeout,
                boolean redirects_allowed, String user_agent)
                throws java.net.MalformedURLException, java.io.IOException {
            // Open a URL connection
            java.net.URL url = new java.net.URL(url_string);
            java.net.URLConnection uconn = url.openConnection();
            if (!(uconn instanceof java.net.HttpURLConnection)) {
                throw new java.lang.IllegalArgumentException("URL protocol must be HTTP");
            }
            conn = (java.net.HttpURLConnection) uconn;
            conn.setConnectTimeout(connection_timeout);
            conn.setReadTimeout(read_timeout);
            conn.setInstanceFollowRedirects(redirects_allowed);
            conn.setRequestProperty("User-agent", user_agent);
        }

        public void executeConnection() throws IOException {
            try {
                is = conn.getInputStream(); //global var
                l = conn.getContentLength(); //global var
            } catch (Exception e) {
                //handling code skipped
            }
        }

        //getContentStream and getLength methods which just return 'is' and 'l' are skipped

    Here is where the fun part began. I ran some profiling (using System.currentTimeMillis()) to find out what takes long and what doesn't. The call to this method takes only 200 ms on average:

        public InputStream getWebPageAsStream(int battle_id, int page) throws Exception {
            String url = "http://api.erepublik.com/v1/feeds/battle_logs/" + battle_id + "/" + page;
            WebObject wobj = new WebObject(url, 10000, 10000, true, "Mozilla/5.0 "
                + "(Windows; U; Windows NT 5.1; en-US; rv:1.9.2.3) Gecko/20100401 Firefox/3.6.3 ( .NET CLR 3.5.30729)");
            wobj.executeConnection();
            l = wobj.getContentLength(); // global variable
            return wobj.getContentStream(); //returns 'is' stream
        }

    200 ms is quite expected from a network operation, and I am fine with it. BUT when I parse the InputStream in any way (read it into a string / use a Java XML parser / read it into another ByteArrayStream), the process takes over 1000 ms!

    For example, this code takes 1000 ms if I pass the stream I got ('is') above from getContentStream() directly to this method:

        public static Document convertToXML(InputStream is) throws ParserConfigurationException, IOException, SAXException {
            DocumentBuilderFactory dbf = DocumentBuilderFactory.newInstance();
            DocumentBuilder db = dbf.newDocumentBuilder();
            Document doc = db.parse(is);
            doc.getDocumentElement().normalize();
            return doc;
        }

    This code, too, takes around 920 ms if the initial InputStream 'is' is passed in (don't read into the code itself - it just extracts the data I need by directly counting the characters, which can be done thanks to the rigid API feed format):

        public static parsedBattlePage convertBattleToXMLWithoutDOM(InputStream is) throws IOException {
            // Point A
            BufferedReader br = new BufferedReader(new InputStreamReader(is));
            LinkedList ll = new LinkedList();
            String str = br.readLine();
            while (str != null) {
                ll.add(str);
                str = br.readLine();
            }
            if (((String) ll.get(1)).indexOf("error") != -1) {
                return new parsedBattlePage(null, null, true, -1);
            }
            // Point B
            Iterator it = ll.iterator();
            it.next(); it.next(); it.next(); it.next();
            String[][] hits_arr = new String[1000][4];
            String t_str = (String) it.next();
            String tmp = null;
            int j = 0;
            for (int i = 0; t_str.indexOf("time") != -1; i++) {
                hits_arr[i][0] = t_str.substring(12, t_str.length() - 11);
                tmp = (String) it.next();
                hits_arr[i][1] = tmp.substring(14, tmp.length() - 9);
                tmp = (String) it.next();
                hits_arr[i][2] = tmp.substring(15, tmp.length() - 10);
                tmp = (String) it.next();
                hits_arr[i][3] = tmp.substring(18, tmp.length() - 13);
                it.next(); it.next();
                t_str = (String) it.next();
                j++;
            }
            String[] b_info_arr = new String[9];
            int[] space_nums = {13, 10, 13, 11, 11, 12, 5, 10, 13};
            for (int i = 0; i < space_nums.length; i++) {
                tmp = (String) it.next();
                b_info_arr[i] = tmp.substring(space_nums[i] + 4, tmp.length() - space_nums[i] - 1);
            }
            // Point C
            return new parsedBattlePage(hits_arr, b_info_arr, false, j);
        }

    I have tried replacing the default BufferedReader with:

        BufferedReader br = new BufferedReader(new InputStreamReader(is), 250000);

    This didn't change much. My second try was to replace the code between A and B with:

        Iterator it = IOUtils.lineIterator(is, "UTF-8");

    Same result, except this time A-B was 0 ms and B-C was 1000 ms, so every call to it.next() must have been consuming some significant time. (IOUtils is from the apache-commons-io library.)

    And here is the culprit - the time taken to parse the stream into strings, be it by an iterator or a BufferedReader, in ALL cases was about 1000 ms, while the rest of the code took 0 ms (i.e. irrelevant). This means that parsing the stream into a LinkedList, or iterating over it, was for some reason eating up a lot of my system resources. The question was - why? Is it just the way Java is made? ...no... that's just stupid, so I did another experiment. In my main method I added, after getWebPageAsStream():

        //Point A
        ba = new byte[l]; // 'l' comes from wobj.getContentLength above
        bytesRead = is.read(ba); //'is' is our URLConnection original InputStream
        offset = bytesRead;
        while (bytesRead != -1) {
            bytesRead = is.read(ba, offset - 1, l - offset);
            offset += bytesRead;
        }
        //Point B
        InputStream is2 = new ByteArrayInputStream(ba);
        //Now just working with 'is2' - the "copied" stream

    The InputStream-to-byte[] conversion took again 1000 ms - this is the way many people suggested to read an InputStream, and still it is slow. And guess what - the 2 parser methods above (convertToXML() and convertBattleToXMLWithoutDOM()), when passed 'is2' instead of 'is', took, in all 4 cases, under 50 ms to complete.

    I read a suggestion that the stream waits for the connection to close before unblocking, so I tried using HttpComponents Client 4.0 (http://hc.apache.org/httpcomponents-client/index.html) instead, but the initial InputStream took just as long to parse. E.g. this code:

        public InputStream getWebPageAsStream2(int battle_id, int page) throws Exception {
            String url = "http://api.erepublik.com/v1/feeds/battle_logs/" + battle_id + "/" + page;
            HttpClient httpclient = new DefaultHttpClient();
            HttpGet httpget = new HttpGet(url);
            HttpParams p = new BasicHttpParams();
            HttpConnectionParams.setSocketBufferSize(p, 250000);
            HttpConnectionParams.setStaleCheckingEnabled(p, false);
            HttpConnectionParams.setConnectionTimeout(p, 5000);
            httpget.setParams(p);
            HttpResponse response = httpclient.execute(httpget);
            HttpEntity entity = response.getEntity();
            l = (int) entity.getContentLength();
            return entity.getContent();
        }

    took even longer to process (50 ms more, for just the network part), and the stream parsing times remained the same. Obviously it could be restructured so as to not create the HttpClient and properties every time (faster network time), but the stream issue won't be affected by that.

    So we come to the central problem - why does the initial URLConnection InputStream (or HttpClient InputStream) take so long to process, while any stream of the same size and content created locally is orders of magnitude faster? I mean, the initial response is already somewhere in RAM, and I can't see any good reason why it is processed so slowly compared to when the same stream is just created from a byte[]. Considering I have to parse millions of entries and thousands of pages like that, a total processing time of almost 1.5 s/page seems WAY WAY too long. Any ideas?

    P.S. Please ask if any more code is required - the only thing I do after parsing is make a PreparedStatement and put the entries into JavaDB in packs of 1000+, and the performance is OK, ~200 ms/1000 entries; it could probably be optimized with more cache, but I didn't look into it much.
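
    A hedged observation (an assumption about what the timings mean, not a verified diagnosis): getInputStream() typically returns as soon as the response headers arrive, so the body is still travelling over the network while the first consumer of the stream runs - the ~1000 ms may simply be download time, billed to whichever code touches the stream first. A small in-context sketch to test that, reusing the question's 'is' and 'l' and timing the drain separately from the parse:

        // sketch: separate "download" time from "parse" time
        long t0 = System.currentTimeMillis();
        ByteArrayOutputStream buf = new ByteArrayOutputStream(l > 0 ? l : 262144);
        byte[] chunk = new byte[8192];
        int n;
        while ((n = is.read(chunk)) != -1) {
            buf.write(chunk, 0, n);               // pulls bytes off the socket
        }
        long t1 = System.currentTimeMillis();     // t1 - t0 ~ network transfer
        Document doc = convertToXML(new ByteArrayInputStream(buf.toByteArray()));
        long t2 = System.currentTimeMillis();     // t2 - t1 ~ pure parse time
        System.out.println("download: " + (t1 - t0) + " ms, parse: " + (t2 - t1) + " ms");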

    Read the article

  • Trying to use an Xslt for an xml in asp.net

    - by Josemalive
    Hello, I have the following XSLT sheet:

        <?xml version="1.0" encoding="UTF-8" ?>
        <xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="1.0">
            <xsl:variable name="nhits" select="Answer[@nhits]"></xsl:variable>
            <xsl:output method="html" indent="yes"/>
            <xsl:template match="/">
                <div>
                    <xsl:call-template name="resultsnumbertemplate"/>
                </div>
            </xsl:template>
            <xsl:template name="resultsnumbertemplate">
                <xsl:value-of select="$nhits"/> matches found
            </xsl:template>
        </xsl:stylesheet>

    And this is the XML that I'm trying to mix with the previous XSLT:

        <Answer xmlns="exa:com.exalead.search.v10" context="n%3Dsl-ocu%26q%3Dlavadoras"
                last="9" estimated="false" nmatches="219" nslices="0" nhits="219" start="0">
            <time>
                <Time interrupted="false" overall="32348" parse="0" spell="0" exec="1241"
                      synthesis="15302" cats="14061" kwds="14061">
                    <sliceTimes>15272 </sliceTimes>
                </Time>
            </time>
        </Answer>

    I'm using an XslCompiledTransform, and that part is working fine:

        XslCompiledTransform transformer = new XslCompiledTransform();
        transformer.Load(HttpContext.Current.Server.MapPath("xslt\\" + requestvariables["xslsheet"].ToString()));
        transformer.Transform(xmlreader, null, writer);

    My problem comes when I try to put the "nhits" attribute value from the Answer element into a variable - my XSLT sheet renders nothing. Do you know what could be the cause? Could it be the xmlns attribute in my XML file? Thanks in advance. Best Regards. Jose
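
    A hedged sketch of the usual fix (two likely issues here: in XSLT 1.0 the unprefixed name Answer never matches an element living in the exa:com.exalead.search.v10 default namespace, and Answer[@nhits] selects the element itself rather than the attribute). Binding the namespace to a prefix and selecting the attribute directly would look like:

        <xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
                        xmlns:exa="exa:com.exalead.search.v10"
                        version="1.0">
            <!-- select the attribute itself, via the prefixed element name -->
            <xsl:variable name="nhits" select="/exa:Answer/@nhits"/>
            ...
        </xsl:stylesheet>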

    Read the article

  • Merging k sorted linked lists - analysis

    - by Kotti
    Hi! I am thinking about different solutions for one problem. Assume we have K sorted linked lists and we are merging them into one. All these lists together have N elements. The well-known solution is to use a priority queue, popping / pushing the first element of every list, and I can understand why it takes O(N log K) time. But let's take a look at another approach. Suppose we have some MERGE_LISTS(LIST1, LIST2) procedure that merges two sorted lists in O(T1 + T2) time, where T1 and T2 stand for the LIST1 and LIST2 sizes. What we do now is pair these lists up and merge them pair by pair (if the number is odd, the last list, for example, could be ignored at the first step). This generally means we have to make the following "tree" of merge operations, where N1, N2, N3... stand for the LIST1, LIST2, LIST3... sizes:

        O(N1 + N2) + O(N3 + N4) + O(N5 + N6) + ...
        O(N1 + N2 + N3 + N4) + O(N5 + N6 + N7 + N8) + ...
        ...
        O(N1 + N2 + N3 + N4 + ... + NK)

    It looks obvious that there will be log(K) of these rows, each of them implementing O(N) operations, so the time for the whole MERGE(LIST1, LIST2, ..., LISTK) operation would actually equal O(N log K). My friend told me (two days ago) it would take O(K·N) time. So, the question is - did I f%ck up somewhere, or is he actually wrong about this? And if I am right, why can't this 'divide & conquer' approach be used instead of the priority queue approach?
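
    For what it's worth, a minimal Python sketch of the pairwise scheme the question describes - each pass over the list of lists costs O(N) in total, and there are about log2(K) passes, which matches the O(N log K) analysis:

        def merge2(a, b):
            # standard two-way merge: O(len(a) + len(b))
            out, i, j = [], 0, 0
            while i < len(a) and j < len(b):
                if a[i] <= b[j]:
                    out.append(a[i]); i += 1
                else:
                    out.append(b[j]); j += 1
            out.extend(a[i:])
            out.extend(b[j:])
            return out

        def merge_k(lists):
            # pair lists up and merge, halving their count on every pass
            lists = [l for l in lists if l]
            while len(lists) > 1:
                lists = [merge2(lists[i], lists[i + 1]) if i + 1 < len(lists) else lists[i]
                         for i in range(0, len(lists), 2)]
            return lists[0] if lists else []

        # e.g. merge_k([[1, 4], [2, 5], [3]]) -> [1, 2, 3, 4, 5]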

    Read the article

  • How can I set drawable to a ListView in android

    - by sxingfeng
    I am writing an app for Android 1.5. I want to use a complex ListView to display my data, showing an ImageView for a drawable object in my list item. I learned from a demo:

        listData.put("Img", R.drawable.XXX);  // ------> this is the line in question
        listData.put("Time", "100");
        listItems.add(listData);

    It displays correctly. However, I want to change the image at run-time - the image may be generated at run-time - so I changed the code as follows, but it fails. Can anyone help me? Many thanks!

        protected void onCreate(Bundle savedInstanceState) {
            // TODO Auto-generated method stub
            super.onCreate(savedInstanceState);
            setContentView(R.layout.item_list);
            itemListView = (ListView) findViewById(R.id.listview);
            ArrayList<HashMap<String, Object>> listItems = new ArrayList<HashMap<String, Object>>();
            for (int i = 0; i < XXX.size(); ++i) {
                HashMap<String, Object> listData = new HashMap<String, Object>();
                listData.put("Img", new Drawable(XXX)); // ---------> 1) the changed line
                listData.put("Time", "100");             // 2)
                listItems.add(listData);                 // 3)
            }
            SimpleAdapter listItemAdapter = new SimpleAdapter(this, listItems,
                    R.layout.listitem,
                    new String[] { "Img", "Time" },
                    new int[] { R.id.listitem_img, R.id.listitem_time });
            itemListView.setAdapter(listItemAdapter);
        }

    listitem.xml:

        <?xml version="1.0" encoding="utf-8"?>
        <LinearLayout xmlns:android="http://schemas.android.com/apk/res/android"
            android:layout_width="fill_parent"
            android:layout_height="wrap_content"
            android:paddingBottom="4dip"
            android:paddingLeft="12dip"
            android:paddingRight="12dip">
            <ImageView
                android:id="@+id/listitem_img"
                android:paddingTop="12dip"
                android:layout_width="wrap_content"
                android:layout_height="wrap_content" />
            <TextView
                android:id="@+id/listitem_time"
                android:layout_width="wrap_content"
                android:layout_height="wrap_content"
                android:textSize="20dip" />
        </LinearLayout>
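
    A hedged sketch of one common fix (two assumptions: that the map actually holds a real Drawable instance - note Drawable is abstract, so new Drawable(...) won't compile; something like a run-time-built BitmapDrawable would stand in for it - and that the adapter is the SimpleAdapter above): out of the box, SimpleAdapter only knows how to bind resource ids and strings, so a Drawable value needs a ViewBinder.

        listItemAdapter.setViewBinder(new SimpleAdapter.ViewBinder() {
            public boolean setViewValue(View view, Object data, String textRepresentation) {
                if (view instanceof ImageView && data instanceof Drawable) {
                    ((ImageView) view).setImageDrawable((Drawable) data);
                    return true;  // we handled this binding
                }
                return false;     // let SimpleAdapter bind everything else
            }
        });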

    Read the article

  • How to recalculate all-pairs shorthest paths on-line if nodes are getting removed?

    - by Pavel Shved
    Latest news about the underground bombing made me curious about the following problem. Assume we have a weighted undirected graph whose nodes are sometimes removed. The problem is to re-calculate the shortest paths between all pairs of nodes quickly after such removals. With a simple modification of the Floyd-Warshall algorithm we can calculate the shortest paths between all pairs. These paths may be stored in a table, where shortest[i][j] contains the index of the next node on the shortest path between i and j (or a NULL value if there's no path). The algorithm requires O(n³) time to build the table, and each query shortest(i,j) takes O(1). Unfortunately, we would have to re-run this algorithm after each removal. The other alternative is to run a graph search on each query. This way each removal takes zero time to update an auxiliary structure (because there is none), but each query takes O(E) time. What algorithm can be used to "balance" the query and update times for the all-pairs shortest-paths problem when nodes of the graph are being removed?
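
    For reference, a minimal Python sketch of the table the question describes (nxt[i][j] holds the next node on a shortest i-to-j path; rebuilding these tables from scratch is the O(n³) step that each removal forces under this scheme):

        def floyd_warshall(n, weight):
            # weight: dict mapping (u, v) -> edge weight of an undirected graph
            INF = float('inf')
            dist = [[0 if i == j else INF for j in range(n)] for i in range(n)]
            nxt = [[None] * n for _ in range(n)]
            for (u, v), w in weight.items():
                dist[u][v] = dist[v][u] = w
                nxt[u][v] = v
                nxt[v][u] = u
            for k in range(n):
                for i in range(n):
                    for j in range(n):
                        if dist[i][k] + dist[k][j] < dist[i][j]:
                            dist[i][j] = dist[i][k] + dist[k][j]
                            nxt[i][j] = nxt[i][k]   # first hop detours toward k
            return dist, nxt

        # a query shortest(i, j) is then O(1) per hop along nxt;
        # removing a node invalidates both tables.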

    Read the article

  • VB .Net - Reflection: Reflected Method from a loaded Assembly executes before calling method. Why?

    - by pu.griffin
    When I am loading an assembly dynamically and then calling a method from it, the method from the assembly appears to execute before the code in the method that is calling it. It does not appear to execute in a serial manner as I would expect. Can anyone shine some light on why this might be happening? Below is some code to illustrate what I am seeing. The code from the some.dll assembly calls a method named PerformLookup; for testing I put a similar MessageBox-style output in it with "PerformLookup Time: " as the text. What I end up seeing is:

        First:  "PerformLookup Time: 40:842"
        Second: "initIndex Time: 45:873"

        Imports System
        Imports System.Data
        Imports System.IO
        Imports Microsoft.VisualBasic.Strings
        Imports System.Reflection

        Public Class Class1
            Public Function initIndex(indexTable As System.Collections.Hashtable) As System.Data.DataSet
                Dim writeCode As String
                MessageBox.Show("initIndex Time: " & Date.Now.Second.ToString() & ":" & Date.Now.Millisecond.ToString())
                System.Threading.Thread.Sleep(5000)
                writeCode = RefreshList()
            End Function

            Public Function RefreshList() As String
                Dim asm As System.Reflection.Assembly
                Dim t As Type()
                Dim ty As Type
                Dim m As MethodInfo()
                Dim mm As MethodInfo
                Dim retString As String
                retString = ""
                Try
                    asm = System.Reflection.Assembly.LoadFrom("C:\Program Files\some.dll")
                    t = asm.GetTypes()
                    ty = asm.GetType(t(28).FullName) 'known class location
                    m = ty.GetMethods()
                    mm = ty.GetMethod("PerformLookup")
                    Dim o As Object
                    o = Activator.CreateInstance(ty)
                    Dim oo As Object()
                    retString = mm.Invoke(o, Nothing).ToString()
                Catch Ex As Exception
                End Try
                Return retString
            End Function
        End Class

    Read the article

  • Communication between EJB3 Instances (JEE inter-bean communication) possible?

    - by Hank
    I'm designing a part of a JEE6 application consisting of EJB3 beans. Part of the requirements are multiple parallel (say a few hundred) long-running (over days) database hunts. Individual hunts have different search parameters (start time, end time, query filter), and parameters may get changed over time. Currently I'm thinking of the following:

    - SearchController (stateless session bean) formulates a set of search parameters and sends it off to a SearchListener via JMS
    - SearchListener (message-driven bean) receives the search parameters and instantiates a SearchWorker with those parameters
    - SearchWorker (SLSB) hunts repeatedly through the database; when it finds something, the result is sent off via JMS and the search continues; when the given 'end-time' is reached, it ends

    What I'm wondering now:

    - Is there a problem with EJB3 instances running for days? (Other than that I need to be able to deal with container restarts...)
    - How do I know how many, and which, EJB instances of SearchWorker are currently running?
    - Is it possible to communicate with them individually (similar to sending a System V signal to a unix process), e.g. to send new parameters, to end an instance, etc.?

    Read the article

  • General many-to-many relationship problem ( Postgresql )

    - by David
    Hi, I have two tables:

        CREATE TABLE "public"."auctions" (
            "id" VARCHAR(255) NOT NULL,
            "auction_value_key" VARCHAR(255) NOT NULL,
            "ctime" TIMESTAMP WITHOUT TIME ZONE NOT NULL,
            "mtime" TIMESTAMP WITHOUT TIME ZONE NOT NULL,
            CONSTRAINT "pk_XXXX2" PRIMARY KEY("id")
        );

    and

        CREATE TABLE "public"."auction_values" (
            "id" NUMERIC DEFAULT nextval('default_seq'::regclass) NOT NULL,
            "fk_auction_value_key" VARCHAR(255) NOT NULL,
            "key" VARCHAR(255) NOT NULL,
            "value" TEXT,
            "ctime" TIMESTAMP WITHOUT TIME ZONE NOT NULL,
            "mtime" TIMESTAMP WITHOUT TIME ZONE NOT NULL,
            CONSTRAINT "pk_XXXX1" PRIMARY KEY("id")
        );

    If I want to create a many-to-many relationship on auction_value_key like this:

        ALTER TABLE "public"."auction_values"
            ADD CONSTRAINT "auction_values_fk" FOREIGN KEY ("fk_auction_value_key")
            REFERENCES "public"."auctions"("auction_value_key")
            ON DELETE NO ACTION ON UPDATE NO ACTION NOT DEFERRABLE;

    I get this SQL error:

        ERROR: there is no unique constraint matching given keys for referenced table "auctions"

    Question: As you might see, I want "auction_values" to be "reused" by different auctions without duplicating them for every auction... so I don't want a key relation on the "id" field in the auctions table... Am I thinking wrong here, or what is the deal? ;) Thanks
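
    A hedged sketch of two ways out (the error reflects a hard Postgres rule: a foreign key may only reference columns covered by a unique or primary-key constraint). If each auction owns its value-set key, a unique constraint makes the original FK legal; if several auctions are meant to share the same values, the textbook shape is a junction table instead. Constraint and table names below are invented:

        -- option 1: make the referenced column unique, so the FK above works
        ALTER TABLE "public"."auctions"
            ADD CONSTRAINT "uq_auctions_avk" UNIQUE ("auction_value_key");

        -- option 2: a junction table for a true many-to-many
        CREATE TABLE "public"."auction_to_value" (
            "auction_id" VARCHAR(255) NOT NULL REFERENCES "public"."auctions"("id"),
            "auction_value_id" NUMERIC NOT NULL REFERENCES "public"."auction_values"("id"),
            PRIMARY KEY ("auction_id", "auction_value_id")
        );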

    Read the article

  • A more elegant way to start a multithread alarm in Rebol VID ? (What's the equivalent of load event?

    - by Rebol Tutorial
    I want to trigger an alarm when the GUI starts. I can't see what the equivalent of the load event of other languages is in Rebol VID, so I put it in the periodic handler, which is quite convoluted. So how can this be done more cleanly?

        alarm-data: none

        set-alarm: func [
            "Set alarm for future time."
            seconds "Seconds from now to ring alarm."
            message [string! unset!] "Message to print on alarm."
        ] [
            alarm-data: reduce [now/time + seconds message]
        ]

        ring: func [
            "Action for when alarm comes due."
            message [string! unset!]
        ] [
            set-face monitor either message [message] ["RIIIING!"]
            ; Your sound playing can also go here (my computer doesn't have speakers).
        ]

        periodic: func [
            "Called every second, checks alarms."
            fact action event
        ] [
            either alarm-data [
                ; Update alarm countdown.
                set-face monitor rejoin [
                    "Alarm will ring in " to integer! alarm-data/1 - now/time " seconds."
                ]
                ; Check alarm.
                if now/time > alarm-data/1 [
                    ring alarm-data/2
                    ;alarm-data: none ; Reset once fired.
                ]
            ] [
                either value? 'message [
                    set-alarm seconds message
                ] [
                    set-alarm seconds "Alarm triggered!"
                ]
            ]
        ]

        alarm: func [seconds message [string! unset!]] [
            system/words/seconds: seconds
            if value? 'message [
                system/words/message: message
            ]
            view layout [
                monitor: text 256 "" rate 1 feel [engage: :periodic]
                button 256 "re/start countdown" [
                    either value? 'message [
                        set-alarm seconds message
                    ] [
                        set-alarm seconds "Alarm triggered!"
                    ]
                    set-face monitor "Alarm set."
                ]
            ]
        ]

    Read the article

  • Extracting note onset from MIDI

    - by Dolphin
    Hi, I need to extract musical features (note details - pitch, duration, rhythm, loudness, note start time) from a polyphonic MIDI file (having two scores, for treble and bass - the bass may also have chords). I'm using the jMusic API to extract these details from the MIDI file. My approach is to go through each score, into parts, then phrases and finally notes, and extract the details. With my approach, it reads all the treble notes first and then the bass notes - but chords are not captured (i.e. only a single note of the chord is taken), and I cannot identify from which point onwards the bass notes start. So what I tried was to get the note onsets (i.e. the start time of each note being played) - since the start times of the treble and bass notes at the beginning of the piece should be the same - but I cannot extract the note onset using the jMusic API: each time it shows 0.0. Is there any way I can identify the voice (treble or bass) of a note? And also all the notes of a chord? How is the voice or note onset for each note stored in MIDI? Is this different for each MIDI file? Any insight is greatly appreciated. Thanks in advance.

    Read the article

  • Problem with fork exec kill when redirecting output in perl

    - by Edu
    I created a script in Perl to run programs with a timeout. If the program being executed takes longer than the timeout, the script kills it and returns the message "TIMEOUT". The script worked quite well until I decided to redirect the output of the executed program. When stdout and stderr are being redirected, the program executed by the script is not killed, because it has a pid different from the one I got from fork. It seems Perl executes a shell, which in turn executes my program, in the case of redirection. I would like to keep the output redirection but still be able to kill the program in the case of a timeout. Any ideas on how I could do that? A simplified version of my script:

        #!/usr/bin/perl
        use strict;
        use warnings;
        use POSIX ":sys_wait_h";

        my $timeout = 5;
        my $cmd = "very_long_program 1>&2 > out.txt";
        my $pid = fork();
        if ($pid == 0) {
            exec($cmd) or print STDERR "Couldn't exec '$cmd': $!";
            exit(2);
        }

        my $time = 0;
        my $kid = waitpid($pid, WNOHANG);
        while ($kid == 0) {
            sleep(1);
            $time++;
            $kid = waitpid($pid, WNOHANG);
            print "Waited $time sec, result $kid\n";
            if ($timeout > 0 && $time > $timeout) {
                print "TIMEOUT!\n";
                # Kill process
                kill 9, $pid;
                exit(3);
            }
        }
        if ($kid == -1) {
            print "Process did not exist\n";
            exit(4);
        }
        print "Process exited with return code $?\n";
        exit($?);

    Thanks for any help.
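
    A hedged sketch of one standard way out (the shell that exec() spawns to handle the redirection is the pid you hold; its child is what survives the kill): make the child the leader of a new process group with setpgrp, then signal the whole group on timeout.

        # in the child, before exec:
        if ($pid == 0) {
            setpgrp(0, 0);   # new process group whose id equals the child's pid
            exec($cmd) or print STDERR "Couldn't exec '$cmd': $!";
            exit(2);
        }

        # ...and in the timeout branch, instead of kill 9, $pid:
        kill -9, $pid;       # a negative signal kills the whole process group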

    Read the article

  • Oracle syntax - should we have to choose between the old and the new?

    - by Martin Milan
    Hi, I work on a code base in the region of 1,000,000 lines of source, in a team of around eight developers. Our code is basically an application using an Oracle database, but the code has evolved over time (we have plenty of source code from the mid-nineties in there!). A dispute has arisen amongst the team over the syntax that we are using for querying the Oracle database. At the moment, the overwhelming majority of our queries use the "old" Oracle syntax for joins, meaning we have code that looks like this:

    Example of inner join:

        select customers.*, orders.date, orders.value
        from customers, orders
        where customers.custid = orders.custid

    Example of outer join:

        select customers.custid, contacts.ContactName, contacts.ContactTelNo
        from customers, contacts
        where customers.custid = contacts.custid(+)

    As new developers have joined the team, we have noticed that some of them seem to prefer SQL-92 queries, like this:

    Example of inner join:

        select customers.*, orders.date, orders.value
        from customers
        inner join orders on (customers.custid = orders.custid)

    Example of outer join:

        select customers.custid, contacts.ContactName, contacts.ContactTelNo
        from customers
        left join contacts on (customers.custid = contacts.custid)

    Group A say that everyone should be using the "old" syntax: we have lots of code in this format, and we ought to value consistency. We don't have time to go all the way through the code now rewriting database queries, and it wouldn't pay us if we did. They also point out that "this is the way we've always done it, and we're comfortable with it..."

    Group B, however, agree that we don't have time to go back and change existing queries, but say we really ought to adopt the "new" syntax in code we write from here on in. They say that developers only really look at a single query at a time, and that as long as developers know both syntaxes, nothing is gained from rigidly sticking to the old one, which might be deprecated at some point in the future.

    Without declaring with which group my loyalties lie, I am interested in hearing the opinions of impartial observers - so let the games commence! Martin.

    Ps. I've made this a community wiki so as not to be seen as just blatantly chasing after question points...

    Read the article

  • What does O(log n) mean exactly?

    - by Andreas Grech
    I am currently learning about Big O notation running times and amortized times. I understand the notion of O(n) linear time, meaning that the size of the input affects the growth of the algorithm proportionally... and the same goes for, for example, quadratic time O(n²), etc. Even algorithms such as permutation generators, with O(n!) times, grow by factorials. For example, the following function is O(n) because the algorithm grows in proportion to its input n:

        f(int n) {
            int i;
            for (i = 0; i < n; ++i)
                printf("%d", i);
        }

    Similarly, if there were a nested loop, the time would be O(n²).

    But what exactly is O(log n)? For example, what does it mean to say that the height of a complete binary tree is O(log n)? I do know (maybe not in great detail) what a logarithm is, in the sense that log10 100 = 2, but I cannot understand how to identify a function with a logarithmic time.
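
    As a hedged illustration (not part of the original question): an algorithm is O(log n) when each step discards a constant fraction of the remaining input, so the step count grows with the number of times n can be halved. Binary search is the canonical example:

        /* binary search: the live range [lo, hi) halves on every iteration,
           so at most about log2(n) iterations run - that is O(log n) */
        int binary_search(const int *a, int n, int target) {
            int lo = 0, hi = n;
            while (lo < hi) {
                int mid = lo + (hi - lo) / 2;
                if (a[mid] < target)
                    lo = mid + 1;   /* target is in the upper half */
                else
                    hi = mid;       /* target is in the lower half (or at mid) */
            }
            return (lo < n && a[lo] == target) ? lo : -1;
        }

    The same halving argument gives the height of a complete binary tree: each level down doubles the node count, so n nodes fit in about log2(n) levels.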

    Read the article

  • How to store a list in a column of a database table.

    - by John Berryman
    Howdy! So, per Mehrdad's answer to a related question, I get that a "proper" database table column doesn't store a list. Rather, you should create another table that effectively holds the elements of said list and then link to it directly or through a junction table. However, the type of list I want to create will be composed of unique items (unlike the linked question's fruit example). Furthermore, the items in my list are explicitly sorted - which means that if I stored the elements in another table, I'd have to sort them every time I accessed them. Finally, the list is basically atomic, in that any time I wish to access the list I will want the entire list rather than just a piece of it - so it seems silly to issue a database query to gather together pieces of the list. AKX's solution (linked above) is to serialize the list and store it in a binary column. But this also seems inconvenient, because it means I have to worry about serialization and deserialization. Is there any better solution? If there is no better solution, then why? It seems that this problem should come up from time to time. ... Just a little more info to let you know where I'm coming from: as soon as I had begun understanding SQL and databases in general, I was turned on to LINQ to SQL, and so now I'm a little spoiled, because I expect to deal with my programming object model without having to think about how the objects are queried or stored in the database. Thanks all! John
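
    For concreteness, a hedged sketch of the child-table shape under discussion (all names here are invented): an explicit position column preserves the ordering, so reading the list back is one indexed, ordered query rather than app-side sorting, and the constraints keep slots and items unique.

        CREATE TABLE list_item (
            list_id  INT NOT NULL,              -- which list the element belongs to
            position INT NOT NULL,              -- explicit sort order
            value    VARCHAR(255) NOT NULL,
            PRIMARY KEY (list_id, position),    -- one element per slot
            UNIQUE (list_id, value)             -- items are unique within a list
        );

        -- the whole list, atomically and in order:
        -- SELECT value FROM list_item WHERE list_id = ? ORDER BY position;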

    Read the article

  • how to bind the converted value from a datatable to my gridview.

    - by Ranjana
    How do I bind the value to a GridView? I have a DataTable:

        DataTable dtBindGrid = new DataTable();
        dtBindGrid = serviceobj.SelectExamTimeTable(txtSchoolName.Text, txtBranchName.Text, txtClass.Text, txtExamName.Text);
        foreach (DataRow row in dtBindGrid.Rows)
        {
            strgetday = row["Day"].ToString();
            strgetdate = row["Date"].ToString();
            DatedTime = Convert.ToDateTime(row["Time"].ToString());
            strgettime = DatedTime.ToString("t");
            strgetsubject = row["Subject"].ToString();
            strgetduration = row["Duration"].ToString();
            strgetsession = row["Session"].ToString();
            strgetminmark = row["MinMarks"].ToString();
            strgetmaxmark = row["MaxMarks"].ToString();
            // dtBindGrid.Rows.Add(row);
        }
        GrdExamTimeTable.DataSource = dtBindGrid.DefaultView;
        GrdExamTimeTable.DataBind();

    This DataTable returns values like day, date, time, duration, subject, etc. Here I am getting each value into a string, because I need to convert the Time to a form like 9.00am or 9.00pm:

        DatedTime = Convert.ToDateTime(row["Time"].ToString());
        strgettime = DatedTime.ToString("t");

    How do I bind this converted value to my GridView?
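
    A hedged sketch of one way to avoid the row-by-row conversion entirely (this assumes the Time column reaches the grid as a DateTime and that GrdExamTimeTable declares explicit columns; the markup below is hypothetical): let the BoundField format the value at bind time, "{0:t}" being the short-time pattern that renders values like 9:00 AM.

        <%-- hypothetical Time column for GrdExamTimeTable --%>
        <asp:BoundField DataField="Time" HeaderText="Time"
                        DataFormatString="{0:t}" HtmlEncode="false" />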

    Read the article

  • Quartz 2D or OpenGL ES? Pros and cons in the long term, possibility of migration to other platforms.

    - by fspirit
    Hi all! I'm having a hard time deciding whether to go with Quartz 2D or OpenGL for an iPad game. It will be mostly 2D, but effect-intense (simultaneous lighting effects for 10-30 objects, 10-20 simultaneous animations on the screen). So far, assuming I'm equally dumb in both technologies and have to learn them from the ground up, I have come to this list. (I've read several topics here on SO with names like "Quartz or OpenGL", but I'm still left with some questions.)

    Quartz: better time-to-market, because of ready-to-use abstractions like UIView, UIImageView, and Core Animation.

    OpenGL ES: closer to the hardware, thus better performance. An app implemented with OpenGL ES can be more easily migrated to Android, MeeGo, Windows Phone, etc.

    My questions are:

    - How long would it take to rewrite a Quartz 2D app to use OpenGL? Let's say it took me 2 man-months to write the Quartz app; how much time would I need to rewrite it? (Please, just some subjective opinions; I'll try to summarize them somehow.)
    - Regarding the ease of migration to other platforms when using OpenGL: is it really so? Or would the effort of migrating a Quartz app from iPhone OS to Android not be much bigger, compared to migrating an OpenGL app? (Ease of migration is quite an important criterion.)
    - Regarding OpenGL, should I go with OpenGL ES 1.1 or 2.0, concerning migration? (Android supports 2.0 through the NDK, but I don't know whether using the NDK will increase or decrease the migration effort.)

    Read the article

  • Problem with load testing Web Service - VSTS 2008

    - by Carlos
    Hello, I have a web test which makes a simple call to a web service, looking like this:

        MyWebService webService = new MyWebService();
        webService.Timeout = 180000;
        webService.myMethod();

    I am not using think times, and the run duration is set to 5 minutes. When I ran this test simulating only 1 user, I checked the counters and found something like this:

        Tests Total: 4500
        Network Interface\Bytes sent (agent machine): 35,500

    Then I ran the same test, but this time simulating 2 users, and I got something like this:

        Tests Total: 2225
        Network Interface\Bytes sent (agent machine): 30,500

    So when I increased the number of users, tests/sec was half what it was with only 1 user, and the bytes sent by the agent were also lower. I find this strange, because it doesn't seem I have a bottleneck on my agent machine: CPU is never higher than 30%, I have over 1.5 GB of RAM free, and my network utilization is around 0.5% of its capacity. To troubleshoot this I ran a test using a step pattern, with the simulated users going from 20 to 800. When I check requests/sec, it is practically constant through the whole test, so clearly something in my test or my environment is preventing the number of requests from getting higher. It would be expected behavior if the response time were getting higher, because that would tell me the requests weren't being processed properly; but the strange thing is the response time is practically constant all the time, and it is actually pretty low. I have no idea why my agent can't send more requests when I increase the number of users; any help/tip/guess would be really appreciated.

    Read the article

  • How to merge duplicates in 2D python arrays

    - by Wei Lou
    Hi, I have a set of data similar to this:

        No  Start Time     End Time       CallType  Info
        1   13:14:37.236   13:14:53.700   Ping1     RTT(Avr):160ms
        2   13:14:58.955   13:15:29.984   Ping2     RTT(Avr):40ms
        3   13:19:12.754   13:19:14.757   Ping3_1   RTT(Avr):620ms
        3   13:19:12.754                  Ping3_2   RTT(Avr):210ms
        4   13:14:58.955   13:15:29.984   Ping4     RTT(Avr):360ms
        5   13:19:12.754   13:19:14.757   Ping1     RTT(Avr):40ms
        6   13:19:59.862   13:20:01.522   Ping2     RTT(Avr):163ms
        ...

    When I parse through it, I need to merge the results of Ping3_1 and Ping3_2, then take the average of those two rows and export it as one row. So the end result would be like this:

        No  Start Time     End Time       CallType  Info
        1   13:14:37.236   13:14:53.700   Ping1     RTT(Avr):160ms
        2   13:14:58.955   13:15:29.984   Ping2     RTT(Avr):40ms
        3   13:19:12.754   13:19:14.757   Ping3     RTT(Avr):415ms
        4   13:14:58.955   13:15:29.984   Ping4     RTT(Avr):360ms
        5   13:19:12.754   13:19:14.757   Ping1     RTT(Avr):40ms
        6   13:19:59.862   13:20:01.522   Ping2     RTT(Avr):163ms

    Currently I am concatenating columns 0 and 1 to make a unique key, finding duplicates there, then doing the rest of the special treatment for those parallel pings. It is not elegant at all. I just wonder what a better way to do it would be. Thanks!
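
    A hedged Python sketch of one tidier shape for this (it assumes the rows are already parsed into tuples, with field positions matching the sample above): group on the duplicated key, average the RTTs, and strip the _1/_2 suffix.

        import re
        from collections import defaultdict

        def merge_pings(rows):
            # rows: (no, start, end, calltype, info) tuples, e.g.
            # ('3', '13:19:12.754', '', 'Ping3_2', 'RTT(Avr):210ms')
            groups = defaultdict(list)
            for row in rows:
                groups[(row[0], row[1])].append(row)    # key: (No, Start Time)
            merged = []
            for (no, start), dupes in sorted(groups.items()):
                rtts = [int(re.search(r'(\d+)ms', r[4]).group(1)) for r in dupes]
                end = max(r[2] for r in dupes)          # keeps the non-empty end time
                calltype = dupes[0][3].split('_')[0]    # Ping3_1 -> Ping3
                avg = sum(rtts) // len(rtts)            # (620 + 210) / 2 -> 415
                merged.append((no, start, end, calltype, 'RTT(Avr):%dms' % avg))
            return merged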

    Read the article

  • how to prevent divs from overlapping when using no-wrap across table cells

    - by pedalpete
    I'm trying to create an events calendar which has a somewhat Gantt-chart-like bar representing the times of an event, along with the time listed. I've uploaded a page showing the problem. As you can see, the problem is that in the 3rd cell, the event div is being overlapped by the previous event div, which extends outside the boundary of its cell. The event from the 2nd cell SHOULD expand beyond the cell border; what I'm trying to achieve is that WHEN the event from the 2nd cell expands into the 3rd cell, the event in the 3rd cell starts on a new line. I believe the problem comes from the nowrap on the time, but I want the time - and in some cases the bar - to cross the cell boundary when the time goes from one day into another. All of that works now, but I'm having a problem with events overlapping when an event runs from one day into the next. I've tried all sorts of things, such as :even cells having different padding, but all these solutions seem to bring up more problems. Is there a way to get the nowrap to force a new line when it crosses into the next cell? Or any other solution to this issue?

    Read the article
