Search Results

Search found 4893 results on 196 pages for 'expect'.

Page 157/196 | < Previous Page | 153 154 155 156 157 158 159 160 161 162 163 164  | Next Page >

  • Reduce durability in MySQL for performance

    - by Paul Prescod
    My site occasionally has fairly predictable bursts of traffic that increase the throughput to 100 times normal. For example, we are going to be featured on a television show, and in the hour after the show I expect more than 100 times my usual traffic. My understanding is that MySQL (InnoDB) generally keeps my data in a bunch of different places:

    - RAM buffers
    - the commit log
    - the binary log
    - the actual tables
    - all of the above on my DB slave

    This is too much "durability" given that I'm on an EC2 node where most of this goes across the same network pipe (the file systems are network attached), and the drives are just slow. The data is not high value, and I'd rather take a small chance of losing a few minutes of data than a high probability of an outage when the crowd arrives.

    During these traffic bursts I would like to do all of that I/O only if I can afford it. I'd like to keep as much in RAM as possible (I have a fair chunk of RAM compared to the data size that would be touched over an hour). If buffers get scarce, or the I/O channel is not too overloaded, then sure, I'd like things to go to the commit log or the binary log to be sent to the slave. If, and only if, the I/O channel is not overloaded, I'd like to write back to the actual tables. In other words, I'd like MySQL/InnoDB to use a "write-back" cache algorithm rather than a "write-through" one. Can I convince it to do that?

    If this is not possible, I am interested in general MySQL write-performance optimization tips. Most of the docs are about optimizing read performance, but when I get a crowd of users I am creating accounts for all of them, so that's a write-heavy workload.
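
    A hedged sketch of the standard durability relaxations (these server variables are not mentioned in the question, so verify them against your MySQL version's docs; with the first setting, a crash can lose up to roughly a second of committed transactions):

        SET GLOBAL innodb_flush_log_at_trx_commit = 2;  -- write the log per commit, fsync only ~once per second
        SET GLOBAL sync_binlog = 0;                     -- let the OS decide when to sync the binary log
        -- innodb_buffer_pool_size (set in my.cnf, restart required) controls how much stays in RAM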

    Read the article

  • How to handle array element between int and Integer

    - by masato-san
    First, it is a long post, so if you need clarification please let me know. I'm new to Java and having difficulty deciding whether I should use int[] or Integer[]. I wrote a function that finds odd numbers in an int[] array:

        public int[] find_odd(int[] arr) {
            int[] result = new int[arr.length];
            for(int i=0; i<arr.length; i++) {
                if(arr[i] % 2 != 0) {
                    //System.out.println(arr[i]);
                    result[i] = arr[i];
                }
            }
            return result;
        }

    Then I pass it an int[] array like this:

        int[] myArray = {-1, 0, 1, 2, 3};
        int[] result = find_odd(myArray);

    The array "result" contains: 0, -1, 0, 1, 0, 3. Because in Java you have to define the size of an array first, and an "empty" int[] element is 0, not null. So when I test find_odd() and expect the returned array to contain only odd numbers (which it does), the test fails, because the array also includes the 0s representing "empty cells", as shown above. My test code:

        public void testFindOddPassValidIntArray() {
            int[] arr = {-2, -1, 0, 1, 3};
            int[] result = findOddObj.find_odd(arr);
            //check if return array only contains odd numbers
            for(int i=0; i<result.length; i++) {
                if(result[i] != null) {
                    assert is_odd(result[i]) : result[i];
                }
            }
        }

    So, my question is: should I use an int[] array and add a check for 0 elements in my test, like:

        if(result[i] != 0) {
            assert is_odd(result[i]) : result[i];
        }

    But in that case, if find_odd() is broken and returns 0 by miscalculation, I can't catch it, because my test code would assume any 0 is an empty cell. Or should I use an Integer[] array, whose default is null? How do people deal with this kind of situation?
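
    One common way around the sentinel-zero problem (a sketch, not from the question) is to collect matches in a growable list, so the result holds only real hits and needs no "empty cell" convention:

        import java.util.ArrayList;
        import java.util.List;

        public static List<Integer> findOdd(int[] arr) {
            List<Integer> result = new ArrayList<Integer>();
            for (int value : arr) {
                if (value % 2 != 0) {      // true for odd values, including negatives like -1
                    result.add(value);
                }
            }
            return result;                 // no padding zeros, no nulls to special-case
        }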

    Read the article

  • Submit a form and get a JSON response with jQuery

    - by Leopd
    I expect this is easy, but I'm not finding a simple explanation anywhere of how to do this. I have a standard HTML form like this:

        <form name="new_post" action="process_form.json" method="POST">
          <label>Title:</label>
          <input id="post_title" name="post.title" type="text" /><br/>
          <label>Name:</label><br/>
          <input id="post_name" name="post.name" type="text" /><br/>
          <label>Content:</label><br/>
          <textarea cols="40" id="post_content" name="post.content" rows="20"></textarea>
          <input id="new_post_submit" type="submit" value="Create" />
        </form>

    I'd like to have JavaScript (using jQuery) submit the form to the form's action (process_form.json) and receive a JSON response from the server. Then I'll have a JavaScript function that runs in response to the JSON response, like:

        function form_success(json) {
            alert('Your form submission worked');
            // process json response
        }

    How do I wire up the form's submit button to call my form_success method when done? It should also override the browser's own navigation, since I don't want to leave the page. Or should I move the button out of the form to do that?
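
    A minimal sketch of the wiring (assuming jQuery 1.4-era APIs; the selector is my addition): intercept the form's submit event, cancel the default navigation, and post the serialized fields, telling jQuery to parse the reply as JSON:

        $('form[name="new_post"]').submit(function (event) {
            event.preventDefault();   // stay on this page instead of navigating
            $.post(this.action, $(this).serialize(), form_success, 'json');
        });

    The button can stay inside the form; preventDefault() is what stops the normal page load.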

    Read the article

  • Problems with Threading in Python 2.5, KeyError: 51, Help debugging?

    - by vignesh-k
    I have a Python script which runs a particular script a large number of times (for Monte Carlo purposes). The way I have scripted it is that I queue up the script the desired number of times, then I spawn threads, and each thread runs the script once and again when it's done. Once the script in a particular thread is finished, the output is written to a file by accessing a lock (so my guess was that only one thread accesses the lock at a given time). Once the lock is released by one thread, the next thread accesses it, adds its output to the previously written file, and rewrites it.

    I am not facing a problem when the number of iterations is small, like 10 or 20, but when it's large, like 50 or 150, Python returns a KeyError: 51 telling me an element doesn't exist, and the error it points to is within the lock section, which puzzles me, since only one thread should access the lock at once and I do not expect an error. This is the class I use:

        class errorclass(threading.Thread):

            def __init__(self, queue):
                self.__queue = queue
                threading.Thread.__init__(self)

            def run(self):
                while 1:
                    item = self.__queue.get()
                    if item is None:
                        break
                    result = myfunction()
                    lock = threading.RLock()
                    lock.acquire()
                    # ADD entries from current thread to entries in file and REWRITE FILE
                    lock.release()

        queue = Queue.Queue()
        for i in range(threads):
            errorclass(queue).start()

        for i in range(desired_iterations):
            queue.put(i)
        for i in range(threads):
            queue.put(None)

    Python returns KeyError: 51 for a large number of desired iterations during the add/write-file operation after the lock is acquired. I am wondering if this is the correct way to use the lock, since every thread has its own lock operation rather than every thread accessing a shared lock? What would be the way to rectify this?
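
    The quoted code creates a brand-new RLock on every pass through the loop, so each thread only ever locks an object nobody else can see, and the file section is effectively unprotected. A sketch of the shared-lock version (Python 2, to match the question's Queue module; myfunction and the file-merge step are the question's own placeholders):

        import threading

        file_lock = threading.Lock()   # one lock object, shared by every thread

        class ErrorClass(threading.Thread):
            def __init__(self, queue):
                threading.Thread.__init__(self)
                self.__queue = queue

            def run(self):
                while True:
                    item = self.__queue.get()
                    if item is None:
                        break
                    result = myfunction()
                    with file_lock:            # all threads now contend on the same lock
                        pass                   # merge `result` into the file here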

    Read the article

  • web service slowdown

    - by user238591
    Hi, I have a web service slowdown. My (web) service is in gSOAP and managed C++. It's not IIS/Apache hosted, but it speaks XML. My client is in .NET. The service computation time is light (<0.1s to prepare a reply). I expect the service to be smooth and fast and to have good availability. I have about 100 clients; a response time of 1s is mandatory. Clients make about 1 request per minute and check the web service's presence with a TCP open-port test. So, to avoid possible congestion, I turned gSOAP's KeepAlive off. Up to there everything ran fine: I barely saw connections in TCPView (Sysinternals).

    A new special synchronisation program now calls the service in a loop. It's a higher load, but everything is processed in under 30 seconds. With Sysinternals TCPView, I see that about a thousand connections are in TIME_WAIT. They slow down the service, and it now takes seconds for the service to reply. Could it be that I need to reset the SoapHttpClientProtocol connection? Has anyone else seen TIME_WAIT ghosts when calling a web service in a loop?
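
    TIME_WAIT entries pile up when every call opens and closes its own TCP connection, which is exactly what disabling keep-alive produces. One hedged option on the .NET side (the proxy class names here are hypothetical, and it only helps if keep-alive is re-enabled in gSOAP as well) is to let the generated SoapHttpClientProtocol proxy reuse connections:

        // C# sketch: ask the underlying HttpWebRequest to keep the TCP connection open
        public class MyServiceProxy : MyGeneratedSoapProxy   // hypothetical generated proxy
        {
            protected override System.Net.WebRequest GetWebRequest(System.Uri uri)
            {
                var request = (System.Net.HttpWebRequest)base.GetWebRequest(uri);
                request.KeepAlive = true;   // reuse one connection across the loop's calls
                return request;
            }
        }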

    Read the article

  • iOS layout; I'm not getting it

    - by Tbee
    Well, "not getting it" is too harsh; I've got it working in what for me is a logical setup, but it does not seem to be what iOS deems logical. So I'm not getting something.

    Suppose I've got an app that shows two pieces of information: a date and a table. According to the MVC approach I've got three MVCs at work here: one for the date, one for the table, and one that takes both these MVCs and makes them into a screen, wiring them up. The master MVC knows how/where it wants to lay out the two sub-MVCs. Each detail MVC only takes care of its own children within the bounds that were specified by the master MVC. Something like:

        - (void)loadView
        {
            MVC* mvc1 = [[MVC1 alloc] initWithFrame:...];
            [self.view addSubview:mvc1.view];

            MVC* mvc2 = [[MVC2 alloc] initWithFrame:...];
            [self.view addSubview:mvc2.view];
        }

    If the above is logical (which it is for me), then I would expect any MVC class to have a constructor "initWithFrame". But an MVC does not; only views have this. Why? How would one correctly lay out nested MVCs? (Naturally I do not have just these two; the detail MVCs have sub-MVCs again.)
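
    In UIKit the frame belongs to the view, not to the controller, which is why UIViewController has no initWithFrame:. A hedged sketch of the usual pattern (class names taken from the question, the frame values are mine, memory management elided): the parent creates each child controller with a plain init and then positions that controller's view itself:

        - (void)loadView
        {
            self.view = [[UIView alloc] initWithFrame:[UIScreen mainScreen].applicationFrame];

            MVC1 *mvc1 = [[MVC1 alloc] init];
            mvc1.view.frame = CGRectMake(0, 0, 320, 44);    // the parent decides where each child's VIEW goes
            [self.view addSubview:mvc1.view];

            MVC2 *mvc2 = [[MVC2 alloc] init];
            mvc2.view.frame = CGRectMake(0, 44, 320, 416);  // the table sits below the date
            [self.view addSubview:mvc2.view];
        }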

    Read the article

  • Assigning a value to a variable gets stored in the wrong spot?

    - by scribbloid
    Hello, I'm relatively new to C, and this is baffling me right now. It's part of a much larger program, but I've written this little program to depict the problem I'm having:

        #include <stdio.h>

        int main()
        {
            signed int tcodes[3][1];

            tcodes[0][0] = 0;
            tcodes[0][1] = 1000;
            tcodes[1][0] = 1000;
            tcodes[1][1] = 0;
            tcodes[2][0] = 0;
            tcodes[2][1] = 1000;
            tcodes[3][0] = 1000;
            tcodes[3][1] = 0;

            int x, y, c;

            for(c = 0; c <= 3; c++) {
                printf("%d %d %d\r\n", c, tcodes[c][0], tcodes[c][1]);
                x = 20;
                y = 30;
            }
        }

    I'd expect this program to output:

        0 0 1000
        1 1000 0
        2 0 1000
        3 1000 0

    But instead, I get:

        0 0 1000
        1 1000 0
        2 0 20
        3 20 30

    It does this for any number assigned to x and y. For some reason x and y are overriding parts of the array in memory. Can someone explain what's going on? Thanks!
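
    The declaration reserves 3 rows of 1 int each, but the code stores 4 rows of 2 ints, so the out-of-range writes land on whatever the compiler placed after the array on the stack; here, that happens to be x and y. A sketch with only the bounds corrected (array dimensions are element counts, not maximum indices):

        #include <stdio.h>

        int main(void)
        {
            /* 4 rows, 2 columns: indices 0..3 and 0..1 are now all in bounds */
            signed int tcodes[4][2] = { {0, 1000}, {1000, 0}, {0, 1000}, {1000, 0} };
            int x = 20, y = 30, c;

            for (c = 0; c <= 3; c++)
                printf("%d %d %d\n", c, tcodes[c][0], tcodes[c][1]);

            (void)x; (void)y;   /* x and y no longer overlap the array */
            return 0;
        }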

    Read the article

  • How to control virtual memory management in linux?

    - by chmike
    I'm writing a program that uses an mmap'ed file to hold a huge buffer organized as an array of 64 MB blocks. The blocks are used to aggregate data received from different hosts through the network. As a consequence, the total data size written in each block is not known in advance. Most of the time it is only 2 MB, but in some cases it can be up to 20 MB or more. The data doesn't stay long in the buffer: 90% is deleted after less than a second, and the rest is transmitted to another host.

    I would like to know if there is a way to tell the virtual memory manager that RAM pages are no longer dirty once their data has been deleted. Should I use mmap and munmap each time a block is used and released to control the virtual memory? What would be the overhead of doing this? Also, some colleagues expressed concerns about the performance impact of allocating such a big mmap space. I expect it to behave like a swap file, so that only dirty pages need to be considered.
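
    A hedged sketch of a cheaper tool than per-block mmap/munmap: madvise(2) with MADV_DONTNEED discards the pages (including their dirty state, for private or anonymous mappings) while keeping the mapping itself intact. The pointer and size below stand in for the question's 64 MB blocks, and the semantics for shared file-backed mappings differ, so check the man page for your kernel:

        #include <sys/mman.h>
        #include <stdio.h>

        /* after a block's data has been deleted or transmitted: */
        if (madvise(block_ptr, BLOCK_SIZE, MADV_DONTNEED) != 0)
            perror("madvise");
        /* for MAP_PRIVATE/anonymous memory the pages are dropped without
           write-back; the next touch faults in fresh zero-filled pages */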

    Read the article

  • Is using advanced constructs (function, new, function calls) in JSON safe?

    - by Vilx-
    JSON is a nice way to pass complex data from my server-side code to client-side JavaScript. For example, in PHP I can write:

        <script type="text/javascript">
        var MyComplexVariable = <?= BigFancyObjectGraph.GetJSON() ?>;
        DoMagic(MyComplexVariable);
        </script>

    This is pretty cool, but sometimes you want to pass more than basic data, like dates or even function definitions. There is a simple and straightforward way of doing that too:

        <script type="text/javascript">
        var MyComplexVariable = {
            'SimpleProperty' : 42,
            'FunctionProperty' : function() { return 6*7; },
            'DateProperty' : new Date(989539200000),
            'ArbitraryProperty' : GetTheMeaningOfLifeUniverseAndEverything()
        };
        DoMagic(MyComplexVariable);
        </script>

    And this works like a charm in all browsers I've seen so far. But according to JSON.org such syntax is invalid. On the other hand, I've seen this syntax used in very many places, including some popular JavaScript frameworks. So... can I expect any problems if I use "unsupported" JSON features like the above? Why is it wrong, or not?
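
    A sketch of where the line actually falls: emitted into a script tag, the text is evaluated as a JavaScript object literal, where functions and new Date(...) are perfectly legal; it only breaks when something treats the same text as JSON, the data-only subset:

        // Executed as script: an object literal, so this is fine.
        var literal = { simple: 42, fn: function () { return 6 * 7; }, when: new Date(989539200000) };

        // Parsed as JSON: only objects, arrays, strings, numbers, booleans and null survive.
        JSON.parse('{"simple": 42}');              // OK
        // JSON.parse('{"fn": function () {}}');   // SyntaxError: functions are not JSON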

    Read the article

  • Setting up a web development/build environment

    - by Eric
    Hello all,

    My current project has a development web server and a live web server. Developers make changes to files on the dev server, test them (by going to the dev address), and make changes as necessary. When the file or files are ready to go, they are copied to the live server. There is no version control. As you might expect, there are some problems with this model:

    - It's hard to keep track of what other programmers have done.
    - It's hard to keep track of what files should be copied to the live server.
    - There is no version control.

    I'm in a position to make nearly any change I like, but I want it to be the right one! I have been turning this over in my head for quite a while, and I have a solution that might be okay. But I want SO's opinion. Certainly version control needs to be added. But how should it work with the existing codebase, and where should the developers be testing? How can anyone know what needs to be moved to the live server? What other details need to be addressed? How would you attack this problem?

    Supplementary information: The website is vital, but not mission critical. A small amount of downtime is acceptable. There are very few developers (right now, only 4).

    History: Before I started, the project used Visual SourceSafe. This was a sufficiently bad experience that they quit using it and abandoned version control. The project is an ASP.NET (C#) website.

    This seems like a question that may have a complicated answer. Thanks for thinking about it!
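
    For concreteness, one possible shape for this, sketched with git (my choice, not the question's; Subversion maps onto the same idea): developers share a repository, the dev server tracks the main branch, and the live server only ever receives tagged releases, so "what is live" is always a labeled, reproducible state:

        # each developer
        git clone ssh://repo/project.git
        git checkout -b my-feature
        # ... edit, test on the dev server ...
        git push origin my-feature     # review, then merge into main

        # dev server: always the tip of main
        git pull origin main

        # live server: deploy only tags
        git fetch --tags
        git checkout v1.4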

    Read the article

  • Returning reference to object is not changing the address in c++

    - by ashish-sangwan
    I am trying to understand functions returning a reference. For that I have written a simple program:

        #include <iostream>
        using namespace std;

        class test
        {
            int i;
            friend test& func();
        public:
            test(int j) { i = j; }
            void show() { cout << i << endl; }
        };

        test& func()
        {
            test temp(10);
            return temp;     // address of temp = 0xbfcb2874
        }

        int main()
        {
            test obj1(50);   // address of obj1 = 0xbfcb28a0
            func() = obj1;   // <= problem: the address of obj1 is not changing
            obj1.show();     // address of obj1 = 0xbfcb28a0
            return 0;
        }

    I ran the program using gdb and observed that the address of obj1 still remains the same, but I expected it to change to 0xbfcb2874. I am not clear on the concept. Please help.
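
    Two separate things are happening here. First, an object's address never changes during its lifetime: assignment copies contents between objects, it does not move them. Second, func() returns a reference to a local that is destroyed on return, so writing through that reference is undefined behavior. A sketch with a long-lived object (the static is my addition) makes the real mechanics visible:

        test& func()
        {
            static test shared(10);   // lives for the whole program, so the reference stays valid
            return shared;
        }

        int main()
        {
            test obj1(50);
            func() = obj1;    // copies obj1's contents INTO shared; obj1 is untouched
            func().show();    // prints 50: shared was overwritten
            obj1.show();      // prints 50: obj1 keeps its own address and value
            return 0;
        }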

    Read the article

  • Running processes at different times stops events from working - C

    - by Jamie Keeling
    Hello,

    This is a question which follows on from my previously answered question here. At first I assumed I had a problem with the way I was creating my events, because my OpenEvent handles were returning NULL. I have managed to find the real cause, however, and I am not sure how to deal with it.

    Basically, I use Visual Studio to launch both Process A and Process B at the same time. In the past my OpenEvent handle wouldn't work because Process A looked for the event a fraction of a second before Process B had time to create it. My solution was simply to let Process B run before Process A, fixing the error.

    The problem I have now is that Process B also opens events from Process A and, as you might expect, gets a NULL handle back when it tries to open them. I am creating the events in the WM_CREATE message handler of both processes, and I also create a thread at the same time to open/read/act upon the events. It seems that if I run the processes at the same time, they don't get a chance to see each other's events; alternatively, if I run one before the other, one of them misses out and can't open a handle. Can anyone suggest a solution? Thanks.
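
    One common way to remove this startup race (a sketch; the event name is my invention): have both processes call CreateEvent with the same name instead of one creating and the other opening. CreateEvent hands back a handle to the existing event when the name is already in use, so whichever process starts first creates it and the other simply attaches:

        #include <windows.h>

        /* in BOTH processes, e.g. while handling WM_CREATE: */
        HANDLE hEvent = CreateEventW(NULL,    /* default security       */
                                     FALSE,   /* auto-reset             */
                                     FALSE,   /* initially non-signaled */
                                     L"Local\\MyAlarmEvent");
        if (hEvent == NULL) {
            /* real failure: report GetLastError() */
        } else if (GetLastError() == ERROR_ALREADY_EXISTS) {
            /* the other process created it first; we just attached to it */
        }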

    Read the article

  • java number exceeds long.max_value - how to detect?

    - by jurchiks
    I'm having problems detecting whether the sum/multiplication of two numbers exceeds the maximum value of a long integer. Example code:

        long a = 2 * Long.MAX_VALUE;
        System.out.println("long.max * smth > long.max... or is it? a=" + a);

    This gives me -2, while I would expect it to throw a NumberFormatException... Is there a simple way of making this work? Because I have some code that does multiplications in nested IF blocks or additions in a loop, and I would hate to add more IFs to each IF or inside the loop.

    Edit: oh well, it seems that this answer from another question is the most appropriate for what I need: http://stackoverflow.com/a/9057367/540394 I don't want to do boxing/unboxing as it adds unnecessary overhead, and this way is very short, which is a huge plus to me. I'll just write two short functions to do these checks and return the min or max long.

    Edit 2: here's the function for limiting a long to its min/max value, according to the answer I linked to above:

        /**
         * @param a : one of the two numbers added/multiplied
         * @param b : the other of the two numbers
         * @param c : the result of the addition/multiplication
         * @return the minimum or maximum value of a long integer if addition/multiplication
         *         of a and b is less than Long.MIN_VALUE or more than Long.MAX_VALUE
         */
        public static long limitLong(long a, long b, long c)
        {
            return (((a > 0) && (b > 0) && (c <= 0)) ? Long.MAX_VALUE
                 : (((a < 0) && (b < 0) && (c >= 0)) ? Long.MIN_VALUE : c));
        }

    Tell me if you think this is wrong.
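
    Two hedged notes. Java integer arithmetic wraps silently rather than throwing, which is why 2 * Long.MAX_VALUE quietly becomes -2. Also, the same-sign checks in limitLong miss mixed-sign multiplication overflow (e.g. Long.MAX_VALUE * -2 wraps to a positive value that neither branch catches). Later JDKs added exact-arithmetic helpers; a sketch using the Java 8+ methods, which did not exist when this was asked:

        long a;
        try {
            a = Math.multiplyExact(2L, Long.MAX_VALUE);   // throws instead of wrapping
        } catch (ArithmeticException overflow) {
            a = Long.MAX_VALUE;                           // clamp, mirroring limitLong()
        }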

    Read the article

  • World Economic Crisis. IT prospects

    - by Andrew Florko
    There was a similar question in 2008; two years have passed. Please share your expectations about the IT market and employment in the next year or two (or as far as you can predict).

    IMHO Russia (my native country) fully met the crisis in spring 2008. Stock markets shrank to a third of their value within half a year. Many developers were fired in those days, but I suppose that was just because business was in shock and froze some projects. Developers expected 20% salary growth per year in 2004-2007 (a developer's salary in Moscow was about $2-3K in early 2008). Then there was a roughly 30% (very subjective) salary cut in 2008, and salaries were frozen until 2009. Now things are slowly coming back to 2008 levels.

    Looking into the future, I expect the pessimistic scenario and another crash. Our economy depends more and more on oil and gas every year. IT that serves industry will shrink, because we can't compete with China in real production. Due to the strong currency (the rouble is strong compared to the dollar), we can't rely on offshore programming either. Our officials talk about an innovative economic breakthrough, but in practice it's just the usual assignment of budget money. I don't believe in innovation either, because who needs innovation when you have debts and tomorrow is vapour?

    Read the article

  • Ruby on Rails controller and architecture with cells

    - by dt
    I decided to try the cells plugin for Rails (http://cells.rubyforge.org/community.html), given that I'm new to Ruby and very used to thinking in terms of components. Since I'm developing the app piecemeal and then putting it together piece by piece, it makes sense to think in terms of components.

    So far I've been able to get cells working properly inside a single view, which calls a partial. Now, what I would like to be able to do (though maybe my instincts need to be redirected to something more "Rails-y") is call a single cell controller and use the parameters to render one output vs. another. Basically, suppose there were a controller like:

        def index
          params[:responsetype]
        end

        def processListResponse
        end

        def processSearchResponse
        end

    That is, two different controller methods that I want to respond with, based on the responsetype parameter, where I have a single template on the front end and want the inner "component" to render differently depending on what type of request is made. That allows me to reuse the same front-end code.

    I suppose I could do this with an AJAX call instead and just have it re-render the component on the front end, but it would be nice to have the option to do it either way, and to understand how to architect Rails a bit better in the process. It seems like there should be a "render" option within the cells framework to render to a certain controller or view, but it's not working like I expect, and I don't know if I'm even in the ballpark. Thanks!
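
    A hedged sketch of how this might look with Cells (the cell, state, and view names are my inventions, and the Cells API changed between releases, so treat this as a shape rather than a recipe): a single cell state can branch on the incoming parameter and pick a view:

        # app/cells/response_cell.rb
        class ResponseCell < Cell::Base
          def display(args)
            @items = args[:items]
            case args[:responsetype]
            when 'list'   then render :view => :list_response
            when 'search' then render :view => :search_response
            end
          end
        end

        # in the enclosing template:
        <%= render_cell :response, :display,
                        :responsetype => params[:responsetype], :items => @items %>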

    Read the article

  • Nhibernate Fluent domain Object with Id(x => x.id).GeneratedBy.Assigned not saveable

    - by urpcor
    Hi there,

    I am using the corresponding domain classes with mappings for a legacy DB. The IDs of the entities are calculated by a stored procedure in the DB, which gives back the ID for the new row. (It's legacy; I can't change this.)

    Now I create the new entity, set the ID, and call Save. But nothing happens: no exception, and even NHibernate Profiler doesn't say a thing. It's as if the Save call does nothing. I expect that NHibernate thinks the record is already in the DB because it already has an ID, but I am using Id(x => x.id).GeneratedBy.Assigned() and, intentionally, the Session.Save(object) method. I am confused. I have seen so many samples where this worked. Does anybody have any ideas?

        public class Appendix
        {
            public virtual int id { get; set; }
            public virtual AppendixHierarchy AppendixHierachy { get; set; }
            public virtual byte[] appendix { get; set; }
        }

        public class AppendixMap : ClassMap<Appendix>
        {
            public AppendixMap()
            {
                WithTable("appendix");
                Id(x => x.id).GeneratedBy.Assigned();
                References(x => x.AppendixHierachy).ColumnName("appendixHierarchyId");
                Map(x => x.appendix);
            }
        }
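
    With assigned identifiers, Save only registers the entity with the session; no SQL is issued until the session is flushed, which is why nothing shows up in the profiler. A hedged sketch of the usual pattern (variable names are mine):

        using (var tx = session.BeginTransaction())
        {
            var appendix = new Appendix { id = idFromStoredProcedure /*, ... */ };
            session.Save(appendix);   // queues the insert in the session...
            tx.Commit();              // ...the INSERT is sent on flush/commit
        }

    Calling session.Flush() explicitly (if you are not using a transaction) should have the same effect.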

    Read the article

  • Best practices for (over)using Azure queues

    - by John
    Hi,

    I'm in the early phases of designing an Azure-based application. One of the things that attracts me to Azure is the scalability, given the variability of the demand I'm likely to expect. As such, I'm trying to keep things loosely coupled so I can add instances when I need to.

    The recommendations I've seen for architecting an application for Azure include keeping web role logic to a minimum and having processing done in worker roles, using queues to communicate and some sort of back-end store like SQL Azure or Azure Tables. This seems like a good idea to me, as I can scale up either or both parts of the application without any issue.

    However, I'm curious whether there are any best practices (or whether anyone has any experiences) for when it's best to have the web role talk directly to the data store vs. sending data by queue. I'm thinking of the case where I have a simple insert to do from the web role: while I could set this up as a message, send it on the queue, and have a worker role pick it up and do the insert, it seems like a lot of double-handling. However, I also appreciate that it may be the case that this is better in the long run, in case the web role gets overwhelmed or more complex logic ends up being required for the insert.

    I realise this might be a case where the answer is "it depends entirely on the situation, check your perf metrics", but if anyone has any thoughts I'd be very appreciative! Thanks

    John
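
    For reference, a sketch of the queue path being weighed, using the classic Azure storage client of that era (method names varied across SDK versions, and the queue name and connection string here are mine):

        // web role: enqueue the insert instead of writing directly
        var account = CloudStorageAccount.Parse(connectionString);
        var queue = account.CreateCloudQueueClient().GetQueueReference("inserts");
        queue.CreateIfNotExist();
        queue.AddMessage(new CloudQueueMessage(serializedInsert));

        // worker role: drain the queue and do the actual write
        var msg = queue.GetMessage();
        if (msg != null)
        {
            // perform the insert against SQL Azure / Azure Tables
            queue.DeleteMessage(msg);
        }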

    Read the article

  • Eliminate full table scan due to BETWEEN (and GROUP BY)

    - by Dave Jarvis
    Description

    According to the EXPLAIN command, there is a range condition that is causing the query to perform a full table scan (160k rows). How do I keep the range condition and reduce the scanning? I expect the culprit to be:

        Y.YEAR BETWEEN 1900 AND 2009 AND

    Code

    Here is the code that has the range condition (the STATION_DISTRICT is likely superfluous):

        SELECT
          COUNT(1) as MEASUREMENTS,
          AVG(D.AMOUNT) as AMOUNT,
          Y.YEAR as YEAR,
          MAKEDATE(Y.YEAR,1) as AMOUNT_DATE
        FROM
          CITY C,
          STATION S,
          STATION_DISTRICT SD,
          YEAR_REF Y FORCE INDEX(YEAR_IDX),
          MONTH_REF M,
          DAILY D
        WHERE
          -- For a specific city ...
          C.ID = 10663 AND
          -- Find all the stations within a specific unit radius ...
          6371.009 * SQRT(
            POW(RADIANS(C.LATITUDE_DECIMAL - S.LATITUDE_DECIMAL), 2) +
            (COS(RADIANS(C.LATITUDE_DECIMAL + S.LATITUDE_DECIMAL) / 2) *
             POW(RADIANS(C.LONGITUDE_DECIMAL - S.LONGITUDE_DECIMAL), 2))) <= 50 AND
          -- Get the station district identification for the matching station.
          S.STATION_DISTRICT_ID = SD.ID AND
          -- Gather all known years for that station ...
          Y.STATION_DISTRICT_ID = SD.ID AND
          -- The data before 1900 is shaky; insufficient after 2009.
          Y.YEAR BETWEEN 1900 AND 2009 AND
          -- Filtered by all known months ...
          M.YEAR_REF_ID = Y.ID AND
          -- Whittled down by category ...
          M.CATEGORY_ID = '003' AND
          -- Into the valid daily climate data.
          M.ID = D.MONTH_REF_ID AND
          D.DAILY_FLAG_ID <> 'M'
        GROUP BY
          Y.YEAR

    Update

    The SQL is performing a full table scan, which results in MySQL performing a "copy to tmp table", as shown here:

        +----+-------------+-------+--------+-----------------------------------+--------------+---------+-------------------------------+--------+-------------+
        | id | select_type | table | type   | possible_keys                     | key          | key_len | ref                           | rows   | Extra       |
        +----+-------------+-------+--------+-----------------------------------+--------------+---------+-------------------------------+--------+-------------+
        |  1 | SIMPLE      | C     | const  | PRIMARY                           | PRIMARY      | 4       | const                         |      1 |             |
        |  1 | SIMPLE      | Y     | range  | YEAR_IDX                          | YEAR_IDX     | 4       | NULL                          | 160422 | Using where |
        |  1 | SIMPLE      | SD    | eq_ref | PRIMARY                           | PRIMARY      | 4       | climate.Y.STATION_DISTRICT_ID |      1 | Using index |
        |  1 | SIMPLE      | S     | eq_ref | PRIMARY                           | PRIMARY      | 4       | climate.SD.ID                 |      1 | Using where |
        |  1 | SIMPLE      | M     | ref    | PRIMARY,YEAR_REF_IDX,CATEGORY_IDX | YEAR_REF_IDX | 8       | climate.Y.ID                  |     54 | Using where |
        |  1 | SIMPLE      | D     | ref    | INDEX                             | INDEX        | 8       | climate.M.ID                  |     11 | Using where |
        +----+-------------+-------+--------+-----------------------------------+--------------+---------+-------------------------------+--------+-------------+

    Related

        http://dev.mysql.com/doc/refman/5.0/en/how-to-avoid-table-scan.html
        http://dev.mysql.com/doc/refman/5.0/en/where-optimizations.html
        http://stackoverflow.com/questions/557425/optimize-sql-that-uses-between-clause

    Thank you!
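
    A hedged suggestion (the index name is mine; verify the column order against the data distribution): the range on YEAR visits 160k rows because YEAR_IDX leads with YEAR alone, so every year in range is scanned before the join narrows things down. A composite index that seeks by station district first, then scans only that district's years, usually removes the scan:

        ALTER TABLE YEAR_REF
          ADD INDEX STATION_YEAR_IDX (STATION_DISTRICT_ID, YEAR);

        -- then drop the FORCE INDEX(YEAR_IDX) hint so the optimizer is free to use it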

    Read the article

  • Can a Snapshot transaction fail and only partially commit in a TransactionScope?

    - by Travis Brooks
    Greetings. I stumbled onto a problem today that seems sort of impossible to me, but it's happening... I'm calling some database code in C# that looks something like this:

        using (var tran = MyDataLayer.Transaction())
        {
            MyDataLayer.ExecSproc(new SprocTheFirst(arg1, arg2));
            MyDataLayer.CallSomethingThatEventuallyDoesLinqToSql(arg1, argEtc);
            tran.Commit();
        }

    I've simplified this a bit for posting. What's going on is that MyDataLayer.Transaction() makes a TransactionScope with IsolationLevel set to Snapshot and TransactionScopeOption set to Required. This code gets called hundreds of times a day and almost always works perfectly. However, after reviewing some data I discovered there are a handful of records created by SprocTheFirst but no corresponding data from CallSomethingThatEventuallyDoesLinqToSql.

    The only way records should exist in the tables I'm looking at is via SprocTheFirst, and it is only ever called in this one function, so if it was called and succeeded, then I would expect CallSomethingThatEventuallyDoesLinqToSql to be called and succeed as well, because it's all in the same TransactionScope. It's theoretically possible that some other dev mucked around in the DB, but I don't think they have. We also log all exceptions, and I can find nothing unusual happening around the time the records from SprocTheFirst were created.

    So, is it possible that a transaction, or more properly a declarative TransactionScope with the Snapshot isolation level, can fail somehow and only partially commit?
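
    A hedged diagnostic rather than an answer: if either call opens its connection outside the ambient transaction (a different thread, a suppressed scope, or a connection string that disables auto-enlistment), the two operations are not actually atomic even though they sit in one using block. Logging the ambient transaction's identifier at both points would confirm they share a transaction:

        using (var tran = MyDataLayer.Transaction())
        {
            // System.Transactions: both lines should print the SAME identifier
            Console.WriteLine(Transaction.Current.TransactionInformation.LocalIdentifier);
            MyDataLayer.ExecSproc(new SprocTheFirst(arg1, arg2));

            Console.WriteLine(Transaction.Current.TransactionInformation.LocalIdentifier);
            MyDataLayer.CallSomethingThatEventuallyDoesLinqToSql(arg1, argEtc);

            tran.Commit();
        }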

    Read the article

  • R plotting multiple histograms on single plot to .pdf as a part of R batch script

    - by Bryce Thomas
    I am writing R scripts which play just a small role in a chain of commands I execute from a terminal. Basically, I do much of my data manipulation in a Python script and then pipe the output to my R script for plotting. So, from the terminal I execute commands which look something like:

        $ python whatever.py | R CMD BATCH do_some_plotting.R

    This workflow has been working well for me so far, though I have now reached a point where I want to overlay multiple histograms on the same plot, inspired by this answer to another user's question on Stack Overflow. Inside my R script, my plotting code looks like this:

        pdf("my_output.pdf")
        plot(hist(d$original, breaks="FD", prob=TRUE),
             col=rgb(0,0,1,1/4), xlim=c(0,4000),
             main="original - This plot is in beta")
        plot(hist(d$minus_thirty_minutes, breaks="FD", prob=TRUE),
             col=rgb(1,0,0,1/4), add=T, xlim=c(0,4000),
             main="minus_thirty_minutes - This plot is in beta")

    Notably, I am using add=T, which is presumably meant to specify that the second plot should be overlaid on the first. When my script has finished, the result is not two histograms overlaid on each other, but rather a 3-page PDF whose 3 individual plots carry the titles:

        i) Histogram of d$original
        ii) original - This plot is in beta
        iii) Histogram of d$minus_thirty_minutes

    So there are two points here I'm looking to clarify. Firstly, even if the plots weren't overlaid, I would expect a 2-page PDF, not a 3-page one. Can someone explain why I am getting 3 pages? Secondly, is there a correction I can make to get just the two histograms plotted, both on the same plot (i.e. a 1-page PDF)? The other question/answer I linked to in the first paragraph mentioned that alpha-blending isn't supported on all devices, so I'm curious whether that has anything to do with it. Either way, it would be good to know whether there is an R-based solution to my problem or whether I'm going to have to pipe my data into a different language/plotting engine.
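
    A hedged explanation of the three pages: hist() itself draws a plot (page 1, titled "Histogram of d$original"), then plot() draws the returned histogram object again on a fresh page (page 2), then the second hist() opens page 3, onto which the second plot(..., add=T) finally overlays. Passing add directly to hist() and skipping the plot() wrapper should give one page (the pdf device does support alpha-blending):

        pdf("my_output.pdf")
        hist(d$original, breaks = "FD", prob = TRUE,
             col = rgb(0, 0, 1, 1/4), xlim = c(0, 4000),
             main = "original vs. minus_thirty_minutes")
        hist(d$minus_thirty_minutes, breaks = "FD", prob = TRUE,
             col = rgb(1, 0, 0, 1/4), add = TRUE)   # overlay on the existing page
        dev.off()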

    Read the article

  • "Forced constness" in std::map<std::vector<int>,double> > ?

    - by Peter Jansson
    Consider this program:

        #include <map>
        #include <vector>

        typedef std::vector<int> IntVector;
        typedef std::map<IntVector, double> Map;

        void foo(Map& m, const IntVector& v)
        {
            Map::iterator i = m.find(v);
            i->first.push_back(10);
        }

        int main()
        {
            Map m;
            IntVector v(10, 10);
            foo(m, v);
            return 0;
        }

    Using g++ 4.4.0, I get this compilation error:

        test.cpp: In function 'void foo(Map&, const IntVector&)':
        test.cpp:8: error: passing 'const std::vector<int, std::allocator<int> >' as 'this' argument
        of 'void std::vector<_Tp, _Alloc>::push_back(const _Tp&) [with _Tp = int,
        _Alloc = std::allocator<int>]' discards qualifiers

    I would expect this error if I were using Map::const_iterator inside foo, but I am using a non-const iterator. What am I missing? Why do I get this error?
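
    The keys of a std::map are stored as const (the element type is std::pair<const Key, T>) even through a non-const iterator, because mutating a key in place would break the tree's internal ordering. A sketch of the erase-and-reinsert pattern that "changes" a key legally (my code, not the question's):

        void foo(Map& m, const IntVector& v)
        {
            Map::iterator i = m.find(v);
            if (i == m.end())
                return;

            IntVector key = i->first;   // copy the key out
            double value = i->second;
            m.erase(i);                 // remove the old entry

            key.push_back(10);          // mutate the copy freely
            m[key] = value;             // reinsert under the new key
        }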

    Read the article

  • Why are these lines being skipped? (java)

    - by David
    Here's the relevant bit of the source code:

        class Dice {
            String name;
            int x;
            int[] sum;
            ...
            public Dice(String name) {
                this.name = name;
                this.x = 0;
                this.sum = new int[7];
            }
            ...
            public static void main(String[] arg) {
                Dice a1 = new Dice("a1");
                printValues(a1);
            }

            public static void printDice(Dice Dice) {
                System.out.println(Dice.name);
                System.out.println("value: " + Dice.x);
                printValues(Dice);
            }

            public static void printValues(Dice Dice) {
                for (int i = 0; i < Dice.sum.length; i++)
                    System.out.println("#of " + i + "'s: " + Dice.sum[i]);
            }
        }

    Here is the output:

        #of 0's: 0
        #of 1's: 0
        #of 2's: 0
        #of 3's: 0
        #of 4's: 0
        #of 5's: 0
        #of 6's: 0

    Why didn't these two lines execute inside printDice:

        System.out.println(Dice.name);
        System.out.println("value: " + Dice.x);

    If they had, then I would expect to see "a1" and "value: 0" printed at the top of the rows of #of's.
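
    Reading the posted code, those lines are not being skipped: main calls printValues(a1) directly, so printDice (which prints the name and value before delegating to printValues) never runs at all. A sketch of the one-line fix:

        public static void main(String[] arg) {
            Dice a1 = new Dice("a1");
            printDice(a1);   // was printValues(a1); printDice prints name and value first
        }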

    Read the article

  • How can I determine when an InnoDB table was last changed?

    - by David M
    I've had success in the past storing the (heavily) processed results of a database query in memcached, using the last update time of the underlying table(s) as part of the cache key. For MyISAM tables, that last-changed time is available in SHOW TABLE STATUS. Unfortunately, it's usually NULL for InnoDB tables. In MySQL 4.1, the ctime in an InnoDB table's SHOW TABLE STATUS line was usually its actual last update time, but that doesn't seem to be true in MySQL 5.1.

    There is a DATETIME field in the table, but it only shows when a row has been modified; it cannot show the deletion time of a row that's no longer there! So I really cannot use MAX(update_time).

    Here's the really tricky part. I have a number of replicas that I do reads from. Can I figure out the state of the table in a way that doesn't depend on when the changes were actually applied?

    My conclusion, after working on this for a while, is that it's not going to be possible to get this information as cheaply as I'd like. I'm probably going to cache data until the time I expect the table to change (it's updated once a day), and let the query cache help out where it can.
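
    For reference, the statement-level equivalent of SHOW TABLE STATUS, plus one hedged do-it-yourself alternative (table and trigger names are mine): keep a bookkeeping row current with triggers, which also captures deletions and carries over to replicas once replicated:

        -- what SHOW TABLE STATUS reads (typically NULL for InnoDB in this era):
        SELECT UPDATE_TIME
          FROM information_schema.TABLES
         WHERE TABLE_SCHEMA = 'mydb' AND TABLE_NAME = 'mytable';

        -- hedged alternative: trigger-maintained last-modified timestamp
        CREATE TABLE table_mtime (tbl VARCHAR(64) PRIMARY KEY, mtime DATETIME);

        CREATE TRIGGER mytable_mtime_del AFTER DELETE ON mytable FOR EACH ROW
          REPLACE INTO table_mtime VALUES ('mytable', NOW());
        -- matching AFTER INSERT and AFTER UPDATE triggers complete the picture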

    Read the article

  • SQL trigger to delete rows from database

    - by wpearse
    I have an industrial system that logs alarms to a remotely hosted MySQL database. The industrial system inserts a new row into a table named 'alarms' whenever a property of the alarm changes (such as the time the alarm was activated, acknowledged, or switched off). I don't want multiple records for each alarm, so I have set up two database triggers.

    The first trigger mirrors each new record into a second table, creating/updating rows as required:

        CREATE TRIGGER `mirror_alarms` BEFORE INSERT ON `alarms`
        FOR EACH ROW
          INSERT INTO `alarm_display` (Tag, ..., OffTime)
          VALUES (new.Tag, ..., new.OffTime)
          ON DUPLICATE KEY UPDATE OnDate = new.OnDate, ..., OffTime = new.OffTime

    The second trigger should execute after the first and (ideally) delete all rows from the alarms table. (I used the Tag property of the alarm because the Tag property never changes, although I suspect I could just use a 'DELETE FROM alarms WHERE 1' statement to the same effect.)

        CREATE TRIGGER `remove_alarms` AFTER INSERT ON `alarms`
        FOR EACH ROW
          DELETE FROM alarms WHERE Tag = new.Tag

    My problem is that the second trigger doesn't appear to run, or if it does, it doesn't delete any rows from the database. So here's the question: why does my second trigger not do what I expect it to do?
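
    One relevant MySQL rule: a trigger may not modify the table that its own triggering statement is using, so an AFTER INSERT trigger on alarms that deletes from alarms is rejected (at runtime this normally surfaces as error 1442, which the logging system may be swallowing). A hedged alternative (the event name is mine, and the event scheduler must be enabled): keep only the mirror trigger and purge the staging table separately:

        -- requires SET GLOBAL event_scheduler = ON;
        CREATE EVENT purge_alarms
          ON SCHEDULE EVERY 1 MINUTE
          DO DELETE FROM alarms;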

    Read the article
