Search Results

Search found 11190 results on 448 pages for 'bug report'.

Page 421/448

  • IE8 isn't resizing tbody or thead when a column is hidden in a table with table-layout:fixed

    - by tom
    IE 8 is doing something very strange when I hide a column in a table with table-layout:fixed. The column is hidden, the table element stays the same width, but the tbody and thead elements are not resized to fill the remaining width. It works in IE7 mode (and FF, Chrome, etc. of course). Has anyone seen this before or know of a workaround? Here is my test page - toggle the first column and use the dev console to check out the table, tbody and thead width: <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html> <head> <title>bug</title> <style type="text/css"> table { table-layout:fixed; width:100%; border-collapse:collapse; } td, th { border:1px solid #000; } </style> </head> <body> <table> <thead> <tr> <th id="target1">1</th> <th>2</th> <th>3</th> <th>4</th> </tr> </thead> <tbody> <tr> <td id="target2">1</td> <td>2</td> <td>3</td> <td>4</td> </tr> </tbody> </table> <a href="#" id="toggle">toggle first column</a> <script type="text/javascript"> function toggleFirstColumn() { if (document.getElementById('target1').style.display=='' || document.getElementById('target1').style.display=='table-cell') { document.getElementById('target1').style.display='none'; document.getElementById('target2').style.display='none'; } else { document.getElementById('target1').style.display='table-cell'; document.getElementById('target2').style.display='table-cell'; } } document.getElementById('toggle').onclick = function(){ toggleFirstColumn(); return false; }; </script> </body> </html>
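
    A workaround that sometimes helps with IE8-only layout glitches is to force the table to lay itself out again after the column is toggled. This is only a sketch built on the generic "hide, flush layout, show" trick, not a fix confirmed against this exact bug, and forceTableReflow is a made-up helper name:

        function forceTableReflow() {
            var table = document.getElementsByTagName('table')[0];
            table.style.display = 'none';
            var flush = table.offsetHeight; // reading a layout property flushes the pending reflow
            table.style.display = '';
        }

        // call it at the end of toggleFirstColumn()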

    Read the article

  • Coolstack MySQL Crash Unable to Restart

    - by rayblasdel
    Environment: Solaris 10. This MySQL server has been up and running for 6 months now. Today, all of a sudden, it crashed. When typing 'mysql' as a user it gives the error: ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/tmp/mysql.sock'. The service tries to start mysqld, it stays up for 9-10 seconds, and then restarts the process. Below are the application logs. Application-database-mysql_mysql-csk.log [ May 30 22:37:52 Enabled. ] [ May 30 22:37:58 Rereading configuration. ] [ May 30 22:37:59 Executing start method ("/opt/coolstack/lib/svc/method/svc-cskmysql start") ] /opt/coolstack/mysql/bin/mysqld_safe --user=mysql --datadir=/dbpool1/data --pid-file=/dbpool1/data/database.soliaonline.com.pid [ May 30 22:37:59 Method "start" exited with status 0 ] [ May 30 22:38:13 Stopping because all processes in service exited. ] [ May 30 22:38:13 Executing stop method ("/opt/coolstack/lib/svc/method/svc-cskmysql stop") ] [ May 30 22:38:13 Method "stop" exited with status 0 ] [ May 30 22:38:13 Executing start method ("/opt/coolstack/lib/svc/method/svc-cskmysql start") ] /opt/coolstack/mysql/bin/mysqld_safe --user=mysql --datadir=/dbpool1/data --pid-file=/dbpool1/data/database.soliaonline.com.pid [ May 30 22:38:13 Method "start" exited with status 0 ] [ May 30 22:38:25 Stopping because all processes in service exited. ] [ May 30 22:38:25 Executing stop method ("/opt/coolstack/lib/svc/method/svc-cskmysql stop") ] [ May 30 22:38:25 Method "stop" exited with status 0 ] I am hoping someone might have run into this before and might know how to fix it. The following is an excerpt from the MySQL error log: 100530 22:44:03 mysqld_safe mysqld from pid file /dbpool1/data/database.soliaonline.com.pid ended 100530 22:44:04 mysqld_safe Starting mysqld daemon with databases from /dbpool1/data InnoDB: Log scan progressed past the checkpoint lsn 32 727170612 100530 22:44:13 InnoDB: Database was not shut down normally! InnoDB: Starting crash recovery. InnoDB: Reading tablespace information from the .ibd files... InnoDB: Restoring possible half-written data pages from the doublewrite InnoDB: buffer... InnoDB: Doing recovery: scanned up to log sequence number 32 727200901 100530 22:44:14 InnoDB: Starting an apply batch of log records to the database... InnoDB: Progress in percents: 100530 22:44:14 - mysqld got signal 11 ; This could be because you hit a bug. It is also possible that this binary or one of the libraries it was linked against is corrupt, improperly built, or misconfigured. This error can also be caused by malfunctioning hardware. We will try our best to scrape up some info that will hopefully help diagnose the problem, but since we have already crashed, something is definitely wrong and this may fail. key_buffer_size=209715200 read_buffer_size=1048576 max_used_connections=0 max_threads=10000 threads_connected=0 It is possible that mysqld could use up to key_buffer_size + (read_buffer_size + sort_buffer_size)*max_threads = 31024253 K bytes of memory Hope that's ok; if not, decrease some variables in the equation.
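
    The trace shows mysqld dying with signal 11 while InnoDB applies its recovery log, so the instance crashes again on every restart attempt. A common escape hatch is to start the server with InnoDB forced recovery just long enough to dump the data; the option below is the stock MySQL one, and which cnf file this Coolstack build actually reads is an assumption to verify:

        # my.cnf - temporary, for data rescue only
        [mysqld]
        # start at 1 and raise only if the server still will not stay up;
        # values above 3 risk losing data
        innodb_force_recovery = 1

    Once the server stays up, mysqldump the databases, move the old ibdata/ib_logfile files aside, remove the option, and reload.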

    Read the article

  • Java Http request on api.stackexchange with json response

    - by taoder
    I'm trying to use the api-stackexchange with java but when I do the request and try to parse the response with a json parser I have an error. public ArrayList<Question> readJsonStream(InputStream in) throws IOException { JsonReader reader = new JsonReader(new InputStreamReader(in, "UTF-8")); reader.setLenient(true); try { System.out.println(reader.nextString()); // ? special character return readItem(reader); } finally { reader.close(); } } public ArrayList<Question> readItem(JsonReader reader) throws IOException { ArrayList<Question> questions = new ArrayList<Question>(); reader.beginObject(); while (reader.hasNext()) { System.out.println("here");//not print the error is before String name = reader.nextName(); if (name.equals("items")) { questions = readQuestionsArray(reader); } } reader.endObject(); return questions; } public final static void main(String[] args) throws Exception { URIBuilder builder = new URIBuilder(); builder.setScheme("http").setHost("api.stackexchange.com").setPath("/2.0/search") .setParameter("site", "stackoverflow") .setParameter("intitle" ,"workaround") .setParameter("tagged","javascript"); URI uri = builder.build(); String surl = fixEncoding(uri.toString()+"&filter=!)QWRa9I-CAn0PqgUwq7)DVTM"); System.out.println(surl); Test t = new Test(); try { URL url = new URL(surl); t.readJsonStream(url.openStream()); } catch (IOException e) { e.printStackTrace(); } } And the error is: com.google.gson.stream.MalformedJsonException: Expected literal value at line 1 column 19 Here is an example of the Json : { "items": [ { "question_id": 10842231, "score": 0, "title": "How to push oath token to LocalStorage or LocalSession and listen to the Storage Event? (SoundCloud Php/JS bug workaround)", "tags": [ "javascript", "javascript-events", "local-storage", "soundcloud" ], "answers": [ { "question_id": 10842231, "answer_id": 10857488, "score": 0, "is_accepted": false } ], "link": "http://stackoverflow.com/questions/10842231/how-to-push-oath-token-to-localstorage-or-localsession-and-listen-to-the-storage", "is_answered": false },... So what's the problem? Is the Json really malformed? Or did I do something not right? Thanks, Anthony
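
    The 2.0 API serves compressed (GZIP) responses, which would leave JsonReader looking at raw gzip bytes and would explain the MalformedJsonException. A sketch of unwrapping the stream before parsing, assuming compression really is what is coming back here (surl and t as defined in main above):

        java.net.URLConnection conn = new URL(surl).openConnection();
        conn.setRequestProperty("Accept-Encoding", "gzip");   // be explicit about what we expect
        java.io.InputStream in = new java.util.zip.GZIPInputStream(conn.getInputStream());
        t.readJsonStream(in);

    The debugging call System.out.println(reader.nextString()) also consumes (or trips over) the first token of the stream, so it is worth removing once the response decompresses cleanly.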

    Read the article

  • Parallel For Loop - Problems when adding to a List

    - by Kevin Crowell
    I am having some issues involving Parallel for loops and adding to a List. The problem is, the same code may generate different output at different times. I have set up some test code below. In this code, I create a List of 10,000 int values. 1/10th of the values will be 0, 1/10th of the values will be 1, all the way up to 1/10th of the values being 9. After setting up this List, I setup a Parallel for loop that iterates through the list. If the current number is 0, I add a value to a new List. After the Parallel for loop completes, I output the size of the list. The size should always be 1,000. Most of the time, the correct answer is given. However, I have seen 3 possible incorrect outcomes occur: The size of the list is less than 1,000 An IndexOutOfRangeException occurs @ doubleList.Add(0.0); An ArgumentException occurs @ doubleList.Add(0.0); The message for the ArgumentException given was: Destination array was not long enough. Check destIndex and length, and the array's lower bounds. What could be causing the errors? Is this a .Net bug? Is there something I can do to prevent this from happening? Please try the code for yourself. If you do not get an error, try it a few times. Please also note that you probably will not see any errors using a single-core machine. using System; using System.Collections.Generic; using System.Threading.Tasks; namespace ParallelTest { class Program { static void Main(string[] args) { List<int> intList = new List<int>(); List<double> doubleList = new List<double>(); for (int i = 0; i < 250; i++) { intList.Clear(); doubleList.Clear(); for (int j = 0; j < 10000; j++) { intList.Add(j % 10); } Parallel.For(0, intList.Count, j => { if (intList[j] == 0) { doubleList.Add(0.0); } }); if (doubleList.Count != 1000) { Console.WriteLine("On iteration " + i + ": List size = " + doubleList.Count); } } Console.WriteLine("\nPress any key to exit."); Console.ReadKey(); } } }
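
    List<T>.Add is not thread-safe, and concurrent Adds from inside Parallel.For corrupt the list's internal array - which matches all three symptoms above (lost items, IndexOutOfRangeException, ArgumentException), so this is expected behaviour rather than a .NET bug. Two usual fixes, sketched against the loop above:

        // Option 1: a collection designed for concurrent writers
        var doubleBag = new System.Collections.Concurrent.ConcurrentBag<double>();
        Parallel.For(0, intList.Count, j =>
        {
            if (intList[j] == 0) doubleBag.Add(0.0);
        });

        // Option 2: keep List<double> but serialize access to it
        object gate = new object();
        Parallel.For(0, intList.Count, j =>
        {
            if (intList[j] == 0) { lock (gate) { doubleList.Add(0.0); } }
        });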

    Read the article

  • Zero code coverage with cobertura 1.9.2 but tests are working

    - by eraonel
    I run the code coverage target: <junit fork="yes" dir="${basedir}" failureProperty="test.failed"> <!-- Note the classpath order: instrumented classes are before the original (uninstrumented) classes. This is important. --> <classpath path="${instrumented.dir}" /> <classpath path="${classes.dir}" /> <classpath refid="classpath" /> <!-- The instrumented classes reference classes used by the Cobertura runtime, so Cobertura and its dependencies must be on your classpath. --> <classpath refid="cobertura.classpath" /> <formatter type="xml" /> <!--<test name="${testcase}" todir="${reports.xml.dir}" if="testcase" />--> <batchtest fork="yes" todir="${reports.xml.dir}"> <fileset dir="${classes.dir}"> <include name="**/generated/AllTests.class" /> </fileset> </batchtest> </junit> <junitreport todir="${reports.xml.dir}"> <fileset dir="${reports.xml.dir}"> <include name="TEST-*.xml" /> </fileset> <report format="frames" todir="${reports.html.dir}" /> </junitreport> Then I get the following output ( when using fork="true"): java.lang.reflect.InvocationTargetException at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39) at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25) at java.lang.reflect.Method.invoke(Method.java:585) at net.sourceforge.cobertura.util.FileLocker.lock(FileLocker.java:124) at net.sourceforge.cobertura.coveragedata.ProjectData.saveGlobalProjectData(ProjectData.java:331) at net.sourceforge.cobertura.coveragedata.SaveTimer.run(SaveTimer.java:31) at java.lang.Thread.run(Thread.java:595) Caused by: java.io.IOException: No locks available at sun.nio.ch.FileChannelImpl.lock0(Native Method) at sun.nio.ch.FileChannelImpl.lock(FileChannelImpl.java:784) at java.nio.channels.FileChannel.lock(FileChannel.java:865) ... 8 more --------------------------------------- Unable to get lock on /vobs/rnc/rrt/roam2/roamSs/RoamMao_swb/RoamMao_bldu/ant_build/cobertura.ser.lock: null This is known to happen on Linux kernel 2.6.20. Make sure cobertura.jar is in the root classpath of the jvm process running the instrumented code. If the instrumented code is running in a web server, this means cobertura.jar should be in the web server's lib directory. Don't put multiple copies of cobertura.jar in different WEB-INF/lib directories. Only one classloader should load cobertura. It should be the root classloader. I am using Ant 1.7.0 and cobertura 1.9.2. Any ideas why there is no coverage? Test run ok as I see in my target. I have tried to switch java versions ( 1.5.0_06 and 1.6.0_10) but no difference.
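
    The "Unable to get lock on /vobs/.../cobertura.ser.lock ... No locks available" line is the interesting part: the forked test JVM cannot take a file lock where the datafile currently lives (network- or ClearCase-backed storage is a classic cause), so no coverage data is ever flushed and the report comes out empty. One thing worth trying is pointing Cobertura at a locally writable datafile from inside the <junit> task - the property key is the standard Cobertura one, the path below is only an example - and making sure cobertura-instrument and cobertura-report use that same file:

        <junit fork="yes" dir="${basedir}" failureProperty="test.failed">
          <!-- tell the instrumented classes where to write coverage data:
               a local disk, not the /vobs tree -->
          <sysproperty key="net.sourceforge.cobertura.datafile"
                       file="${java.io.tmpdir}/cobertura.ser" />
          <classpath path="${instrumented.dir}" />
          <!-- rest of the task unchanged -->
        </junit>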

    Read the article

  • Using MySQL to generate daily sales reports with filled gaps, grouped by currency

    - by Shane O'Grady
    I'm trying to create what I think is a relatively basic report for an online store, using MySQL 5.1.45 The store can receive payment in multiple currencies. I have created some sample tables with data and am trying to generate a straightforward tabular result set grouped by date and currency so that I can graph these figures. I want to see each currency that is available per date, with a 0 in the result if there were no sales in that currency for that day. If I can get that to work I want to do the same but also grouped by product id. In the sample data I have provided there are only 3 currencies and 2 product ids, but in practice there can be any number of each. I can correctly group by date, but then when I add a grouping by currency my query does not return what I want. I based my work off this article. My reporting query, grouped only by date: SELECT calendar.datefield AS date, IFNULL(SUM(orders.order_value),0) AS total_value FROM orders RIGHT JOIN calendar ON (DATE(orders.order_date) = calendar.datefield) WHERE (calendar.datefield BETWEEN (SELECT MIN(DATE(order_date)) FROM orders) AND (SELECT MAX(DATE(order_date)) FROM orders)) GROUP BY date Now grouped by date and currency: SELECT calendar.datefield AS date, orders.currency_id, IFNULL(SUM(orders.order_value),0) AS total_value FROM orders RIGHT JOIN calendar ON (DATE(orders.order_date) = calendar.datefield) WHERE (calendar.datefield BETWEEN (SELECT MIN(DATE(order_date)) FROM orders) AND (SELECT MAX(DATE(order_date)) FROM orders)) GROUP BY date, orders.currency_id The results I am getting (grouped by date and currency): +------------+-------------+-------------+ | date | currency_id | total_value | +------------+-------------+-------------+ | 2009-08-15 | 3 | 81.94 | | 2009-08-15 | 45 | 25.00 | | 2009-08-15 | 49 | 122.60 | | 2009-08-16 | NULL | 0.00 | | 2009-08-17 | 45 | 25.00 | | 2009-08-17 | 49 | 122.60 | | 2009-08-18 | 3 | 81.94 | | 2009-08-18 | 49 | 245.20 | +------------+-------------+-------------+ The results I want: +------------+-------------+-------------+ | date | currency_id | total_value | +------------+-------------+-------------+ | 2009-08-15 | 3 | 81.94 | | 2009-08-15 | 45 | 25.00 | | 2009-08-15 | 49 | 122.60 | | 2009-08-16 | 3 | 0.00 | | 2009-08-16 | 45 | 0.00 | | 2009-08-16 | 49 | 0.00 | | 2009-08-17 | 3 | 0.00 | | 2009-08-17 | 45 | 25.00 | | 2009-08-17 | 49 | 122.60 | | 2009-08-18 | 3 | 81.94 | | 2009-08-18 | 45 | 0.00 | | 2009-08-18 | 49 | 245.20 | +------------+-------------+-------------+ The schema and data I am using in my tests: CREATE TABLE orders ( id INT PRIMARY KEY AUTO_INCREMENT, order_date DATETIME, order_id INT, product_id INT, currency_id INT, order_value DECIMAL(9,2), customer_id INT ); INSERT INTO orders (order_date, order_id, product_id, currency_id, order_value, customer_id) VALUES ('2009-08-15 10:20:20', '123', '1', '45', '12.50', '322'), ('2009-08-15 12:30:20', '124', '1', '49', '122.60', '400'), ('2009-08-15 13:41:20', '125', '1', '3', '40.97', '324'), ('2009-08-15 10:20:20', '126', '2', '45', '12.50', '345'), ('2009-08-15 13:41:20', '131', '2', '3', '40.97', '756'), ('2009-08-17 10:20:20', '3234', '1', '45', '12.50', '1322'), ('2009-08-17 10:20:20', '4642', '2', '45', '12.50', '1345'), ('2009-08-17 12:30:20', '23', '2', '49', '122.60', '3142'), ('2009-08-18 12:30:20', '2131', '1', '49', '122.60', '4700'), ('2009-08-18 13:41:20', '4568', '1', '3', '40.97', '3274'), ('2009-08-18 12:30:20', '956', '2', '49', '122.60', '3542'), ('2009-08-18 13:41:20', '443', '2', '3', '40.97', '7556'); CREATE 
TABLE currency ( id INT PRIMARY KEY, name VARCHAR(255) ); INSERT INTO currency (id, name) VALUES (3, 'Euro'), (45, 'US Dollar'), (49, 'CA Dollar'); CREATE TABLE calendar (datefield DATE); DELIMITER | CREATE PROCEDURE fill_calendar(start_date DATE, end_date DATE) BEGIN DECLARE crt_date DATE; SET crt_date=start_date; WHILE crt_date < end_date DO INSERT INTO calendar VALUES(crt_date); SET crt_date = ADDDATE(crt_date, INTERVAL 1 DAY); END WHILE; END | DELIMITER ; CALL fill_calendar('2008-01-01', '2011-12-31');
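
    To get a 0.00 row for every currency on every date, the calendar has to be crossed with the currency table first and the orders joined onto that combination; grouping the plain join can only produce currency rows that actually occur on a given day. A sketch against the tables above:

        SELECT calendar.datefield AS date,
               currency.id AS currency_id,
               IFNULL(SUM(orders.order_value), 0) AS total_value
        FROM calendar
        CROSS JOIN currency
        LEFT JOIN orders ON DATE(orders.order_date) = calendar.datefield
                        AND orders.currency_id = currency.id
        WHERE calendar.datefield BETWEEN (SELECT MIN(DATE(order_date)) FROM orders)
                                     AND (SELECT MAX(DATE(order_date)) FROM orders)
        GROUP BY calendar.datefield, currency.id
        ORDER BY calendar.datefield, currency.id;

    Adding the per-product breakdown means crossing in a products table (or SELECT DISTINCT product_id FROM orders) the same way and joining on product_id as well.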

    Read the article

  • Best practices regarding equals: to overload or not to overload?

    - by polygenelubricants
    Consider the following snippet: import java.util.*; public class EqualsOverload { public static void main(String[] args) { class Thing { final int x; Thing(int x) { this.x = x; } public int hashCode() { return x; } public boolean equals(Thing other) { return this.x == other.x; } } List<Thing> myThings = Arrays.asList(new Thing(42)); System.out.println(myThings.contains(new Thing(42))); // prints "false" } } Note that contains returns false!!! We seems to have lost our things!! The bug, of course, is the fact that we've accidentally overloaded, instead of overridden, Object.equals(Object). If we had written class Thing as follows instead, then contains returns true as expected. class Thing { final int x; Thing(int x) { this.x = x; } public int hashCode() { return x; } @Override public boolean equals(Object o) { return (o instanceof Thing) && (this.x == ((Thing) o).x); } } Effective Java 2nd Edition, Item 36: Consistently use the Override annotation, uses essentially the same argument to recommend that @Override should be used consistently. This advice is good, of course, for if we had tried to declare @Override equals(Thing other) in the first snippet, our friendly little compiler would immediately point out our silly little mistake, since it's an overload, not an override. What the book doesn't specifically cover, however, is whether overloading equals is a good idea to begin with. Essentially, there are 3 situations: Overload only, no override -- ALMOST CERTAINLY WRONG! This is essentially the first snippet above Override only (no overload) -- one way to fix This is essentially the second snippet above Overload and override combo -- another way to fix The 3rd situation is illustrated by the following snippet: class Thing { final int x; Thing(int x) { this.x = x; } public int hashCode() { return x; } public boolean equals(Thing other) { return this.x == other.x; } @Override public boolean equals(Object o) { return (o instanceof Thing) && (this.equals((Thing) o)); } } Here, even though we now have 2 equals method, there is still one equality logic, and it's located in the overload. The @Override simply delegates to the overload. So the questions are: What are the pros and cons of "override only" vs "overload & override combo"? Is there a justification for overloading equals, or is this almost certainly a bad practice?

    Read the article

  • Difference in DocumentBuilder.parse when using JRE 1.5 and JDK 1.6

    - by dhiller
    Recently at last we have switched our projects to Java 1.6. When executing the tests I found out that using 1.6 a SAXParseException is not thrown which has been thrown using 1.5. Below is my test code to demonstrate the problem. import java.io.StringReader; import javax.xml.parsers.DocumentBuilder; import javax.xml.parsers.DocumentBuilderFactory; import javax.xml.transform.stream.StreamSource; import javax.xml.validation.SchemaFactory; import org.junit.Test; import org.xml.sax.InputSource; import org.xml.sax.SAXParseException; /** * Test class to demonstrate the difference between JDK 1.5 to JDK 1.6. * * Seen on Linux: * * <pre> * #java version "1.6.0_18" * Java(TM) SE Runtime Environment (build 1.6.0_18-b07) * Java HotSpot(TM) Server VM (build 16.0-b13, mixed mode) * </pre> * * Seen on OSX: * * <pre> * java version "1.6.0_17" * Java(TM) SE Runtime Environment (build 1.6.0_17-b04-248-10M3025) * Java HotSpot(TM) 64-Bit Server VM (build 14.3-b01-101, mixed mode) * </pre> * * @author dhiller (creator) * @author $Author$ (last editor) * @version $Revision$ * @since 12.03.2010 11:32:31 */ public class TestXMLValidation { /** * Tests the schema validation of an XML against a simple schema. * * @throws Exception * Falls ein Fehler auftritt * @throws junit.framework.AssertionFailedError * Falls eine Unit-Test-Pruefung fehlschlaegt */ @Test(expected = SAXParseException.class) public void testValidate() throws Exception { final StreamSource schema = new StreamSource( new StringReader( "<?xml version=\"1.0\" encoding=\"UTF-8\"?>" + "<xs:schema xmlns:xs=\"http://www.w3.org/2001/XMLSchema\" " + "elementFormDefault=\"qualified\" xmlns:xsd=\"undefined\">" + "<xs:element name=\"Test\"/>" + "</xs:schema>" ) ); final String xml = "<Test42/>"; final DocumentBuilderFactory newFactory = DocumentBuilderFactory.newInstance(); newFactory.setSchema( SchemaFactory.newInstance( "http://www.w3.org/2001/XMLSchema" ).newSchema( schema ) ); final DocumentBuilder documentBuilder = newFactory.newDocumentBuilder(); documentBuilder.parse( new InputSource( new StringReader( xml ) ) ); } } When using a JVM 1.5 the test passes, on 1.6 it fails with "Expected exception SAXParseException". The Javadoc of the DocumentBuilderFactory.setSchema(Schema) Method says: When errors are found by the validator, the parser is responsible to report them to the user-specified ErrorHandler (or if the error handler is not set, ignore them or throw them), just like any other errors found by the parser itself. In other words, if the user-specified ErrorHandler is set, it must receive those errors, and if not, they must be treated according to the implementation specific default error handling rules. The Javadoc of the DocumentBuilder.parse(InputSource) method says: BTW: I tried setting an error handler via setErrorHandler, but there still is no exception. Now my question: What has changed to 1.6 that prevents the schema validation to throw a SAXParseException? Is it related to the schema or to the xml that I tried to parse?
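
    Whatever a given JDK's parser does with validation errors by default, validating explicitly with javax.xml.validation.Validator gives a deterministic failure: with no ErrorHandler set, a Validator throws the first error as a SAXParseException. A sketch using the schema and xml strings from the test above (build fresh StreamSources, since a StreamSource over a StringReader can only be read once; imports for Schema, Validator and XMLConstants added):

        Schema s = SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI)
                                .newSchema(schema);
        Validator validator = s.newValidator();
        // <Test42/> is not declared in the schema, so this throws SAXParseException
        validator.validate(new StreamSource(new StringReader(xml)));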

    Read the article

  • How to fine tune FluentNHibernate's auto mapper?

    - by Venemo
    Okay, so yesterday I managed to get the latest trunk builds of NHibernate and FluentNHibernate to work with my latest little project. (I'm working on a bug tracking application.) I created a nice data access layer using the Repository pattern. I decided that my entities are nothing special, and also that with the current maturity of ORMs, I don't want to hand-craft the database. So, I chose to use FluentNHibernate's auto mapping feature with NHibernate's "hbm2ddl.auto" property set to "create". It really works like a charm. I put the NHibernate configuration in my app domain's config file, set it up, and started playing with it. (For the time being, I created some unit tests only.) It created all tables in the database, and everything I need for it. It even mapped my many-to-many relationships correctly. However, there are a few small glitches: All of the columns created in the DB allow null. I understand that it can't predict which properties should allow null and which shouldn't, but at least I'd like to tell it that it should allow null only for those types for which null makes sense in .NET (eg. non-nullable value types shouldn't allow null). All of the nvarchar and varbinary columns it created, have a default length of 255. I would prefer to have them on max instead of that. Is there a way to tell the auto mapper about the two simple rules above? If the answer is no, will it work correctly if I modify the tables it created? (So, if I set some columns not to allow null, and change the allowed length for some other, will it correctly work with them?) EDIT: I managed to achieve the above by using Fluent NHibernate's convention API. Thanks to everyone who helped! However, there is one more thing: after checking out the convention API, I really would like my IDs to be calld "ID", not "Id", but it seems to me that the PrimaryKey.Name.Is(x => "ID") is not working at all. If I add it to the conventions collection and rewrite my entities' properties to "ID" instead of "Id", it throws an exception that there is no primary key mapped. Any thoughts on this?
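
    For reference, the conventions mentioned in the edit are small classes dropped into the auto mapping; a rough sketch of the two rules asked about, written against the Fluent NHibernate 1.x convention API (member names can differ slightly between trunk builds, so treat this as a starting point rather than the exact code used):

        using System;
        using FluentNHibernate.Conventions;
        using FluentNHibernate.Conventions.Instances;

        public class NotNullValueTypeConvention : IPropertyConvention
        {
            public void Apply(IPropertyInstance instance)
            {
                var type = instance.Property.PropertyType;
                // non-nullable value types (int, DateTime, ...) should map to NOT NULL columns
                if (type.IsValueType && Nullable.GetUnderlyingType(type) == null)
                    instance.Not.Nullable();
            }
        }

        public class LongStringConvention : IPropertyConvention
        {
            public void Apply(IPropertyInstance instance)
            {
                if (instance.Property.PropertyType == typeof(string))
                    instance.Length(10000);   // example length; adjust or use a custom max-size type
            }
        }

        // registered where the model is built, e.g.:
        // AutoMap.AssemblyOf<MyEntity>().Conventions.Add<NotNullValueTypeConvention>()
        //                               .Conventions.Add<LongStringConvention>();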

    Read the article

  • Memory corruption in System.Move due to changed 8087CW mode (png + stretchblt)

    - by André Mussche
    I have a strange memory corruption problem. After many hours of debugging and trying I think I found something. For example: I do a simple string assignment: sTest := 'SET LOCK_TIMEOUT '; However, the result sometimes becomes: sTest = 'SET LOCK'#0'TIMEOUT ' So, the _ gets replaced by a 0 byte. I have seen this happen once (reproducing is tricky, dependent on timing) in the System.Move function, when it uses the FPU stack (fild, fistp) for fast memory copy (in the case of 9 to 32 bytes to move): ... @@SmallMove: {9..32 Byte Move} fild qword ptr [eax+ecx] {Load Last 8} fild qword ptr [eax] {Load First 8} cmp ecx, 8 jle @@Small16 fild qword ptr [eax+8] {Load Second 8} cmp ecx, 16 jle @@Small24 fild qword ptr [eax+16] {Load Third 8} fistp qword ptr [edx+16] {Save Third 8} ... Using the FPU view and 2 memory debug views (Delphi - View - Debug - CPU - Memory) I saw it going wrong... once... could not reproduce it, however... This morning I read something about the 8087CW mode, and yes, if it is changed to $27F I get memory corruption! Normally it is $133F. The difference between $133F and $027F is that $027F sets up the FPU for doing less precise calculations (limiting to Double instead of Extended) and different infinity handling (which was used for older FPUs, but is not used any more). Okay, now I have found why, but not when! I changed the working of my AsmProfiler with a simple check (so all functions are checked at enter and leave): if Get8087CW = $27F then //normally $1372? if MainThreadID = GetCurrentThreadId then //only check mainthread DebugBreak; I "profiled" some units and dll's and bingo (see stack): Windows.StretchBlt(3372289943,0,0,514,345,4211154027,0,0,514,345,13369376) pngimage.TPNGObject.DrawPartialTrans(4211154027,(0, 0, 514, 345, (0, 0), (514, 345))) pngimage.TPNGObject.Draw($7FF62450,(0, 0, 514, 345, (0, 0), (514, 345))) Graphics.TCanvas.StretchDraw((0, 0, 514, 345, (0, 0), (514, 345)),$7FECF3D0) ExtCtrls.TImage.Paint Controls.TGraphicControl.WMPaint((15, 4211154027, 0, 0)) So it is happening in StretchBlt... What to do now? Is it a fault of Windows, or a bug in PNG (included in D2007)? Or is the System.Move function not failsafe?
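
    Until the culprit is fixed at the source, a practical guard in Delphi is to save and restore the FPU control word around whatever call in your own code ends up in the PNG StretchBlt path (Get8087CW/Set8087CW live in the System unit). A sketch, with the drawing call and variables as placeholders:

        var
          SavedCW: Word;
        begin
          SavedCW := Get8087CW;                       // remember the current control word
          try
            Canvas.StretchDraw(DestRect, PngGraphic); // the call that ends in StretchBlt
          finally
            Set8087CW(SavedCW);                       // restore before System.Move runs again
          end;
        end;

    If the draw happens inside TImage.Paint rather than in code you control, the same save/restore can go into an overridden Paint of a TImage descendant.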

    Read the article

  • Why an object declared in method is subject to garbage collection before the method returns?

    - by SiLent SoNG
    Consider an object declared in a method: public void foo() { final Object obj = new Object(); // A long run job that consumes tons of memory and // triggers garbage collection } Will obj be subject to garbage collection before foo() returns? UPDATE: Previously I thought obj is not subject to garbage collection until foo() returns. However, today I find myself wrong. I have spend several hours in fixing a bug and finally found the problem is caused by obj garbage collected! Can anyone explain why this happens? And if I want obj to be pinned how to achieve it? Here is the code that has problem. public class Program { public static void main(String[] args) throws Exception { String connectionString = "jdbc:mysql://<whatever>"; // I find wrap is gc-ed somewhere SqlConnection wrap = new SqlConnection(connectionString); Connection con = wrap.currentConnection(); Statement stmt = con.createStatement(ResultSet.TYPE_FORWARD_ONLY, ResultSet.CONCUR_READ_ONLY); stmt.setFetchSize(Integer.MIN_VALUE); ResultSet rs = stmt.executeQuery("select instance_id, doc_id from crawler_archive.documents"); while (rs.next()) { int instanceID = rs.getInt(1); int docID = rs.getInt(2); if (docID % 1000 == 0) { System.out.println(docID); } } rs.close(); //wrap.close(); } } After running the Java program, it will print the following message before it crashes: 161000 161000 ******************************** Finalizer CALLED!! ******************************** ******************************** Close CALLED!! ******************************** 162000 Exception in thread "main" com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: And here is the code of class SqlConnection: class SqlConnection { private final String connectionString; private Connection connection; public SqlConnection(String connectionString) { this.connectionString = connectionString; } public synchronized Connection currentConnection() throws SQLException { if (this.connection == null || this.connection.isClosed()) { this.closeConnection(); this.connection = DriverManager.getConnection(connectionString); } return this.connection; } protected void finalize() throws Throwable { try { System.out.println("********************************"); System.out.println("Finalizer CALLED!!"); System.out.println("********************************"); this.close(); } finally { super.finalize(); } } public void close() { System.out.println("********************************"); System.out.println("Close CALLED!!"); System.out.println("********************************"); this.closeConnection(); } protected void closeConnection() { if (this.connection != null) { try { connection.close(); } catch (Throwable e) { } finally { this.connection = null; } } } }
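
    This is legal behaviour: reachability is decided by what the compiled code can still use, not by lexical scope, so once currentConnection() has returned and wrap is never touched again, the JIT may treat it as unreachable and the finalizer closes the connection under the running ResultSet. The usual fix is to keep the wrapper reachable for the whole read and close it deliberately; a sketch of the reshaped main:

        SqlConnection wrap = new SqlConnection(connectionString);
        try {
            Connection con = wrap.currentConnection();
            Statement stmt = con.createStatement(ResultSet.TYPE_FORWARD_ONLY,
                                                 ResultSet.CONCUR_READ_ONLY);
            stmt.setFetchSize(Integer.MIN_VALUE);
            ResultSet rs = stmt.executeQuery(
                "select instance_id, doc_id from crawler_archive.documents");
            while (rs.next()) {
                // ... process rows as before ...
            }
            rs.close();
        } finally {
            wrap.close();   // wrap stays strongly reachable until here, so it cannot
                            // be finalized while rows are still being streamed
        }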

    Read the article

  • XY-Scatter Chart In SSRS Won't Display Points

    - by Dalin Seivewright
    I'm a bit confused with this one. I have a Dataset with a BackupDate and a BackupTime as well as a BackupType. The BackupDate is comprised of 12 characters from the left of a datetime string within a table. The BackupTime is comprised of 8 characters from the right of that same datetime string. So for example: BackupDate would be 'December 12 2008' and the BackupTime would be '12:53PM.' I have added an XY-scatter chart to the report. I've added a 'series' value for the BackupType (so one can distinguish between a Full/Incr/Log backup). I've added a category value of BackupDate and set the Scale for the X-axis from the Min of BackupDate to the Max of BackupDate. I've then added an item to the Values with the Y variable set to BackupTime and the X variable set to BackupDate. The interval for the Y-axis is 12:00AM to 11:59PM and the formatting for the labels is 'hh:mmtt'. The BackupTime matches the format of the Y-axis. The BackupDate matches the format of the X-axis. 10 entries are retrieved by my Dataset and the Legend is properly populated by the BackupType field. No points are being plotted on the graph and no markers/pointers are shown if they are enabled. There should be a point on the graph for every point in time of each day there is a backup of a specific type. Am I missing something? Does anyone know of a good tutorial dealing specifically with XY-scatter graphs and using them in a way I intend? I am using the 2005 version of SSRS rather than the 2008 version. Screenshot of what my chart currently looks like: In case it could be dataset related: SELECT TOP (10) backup_type, LTRIM(RTRIM(LEFT(backup_finish_date, 12))) AS BackupDate, LTRIM(RTRIM(RIGHT(backup_finish_date, 8))) AS BackupTime FROM DBARepository.Backup_History As requested, here are the results of this query. There is a Where clause to constrain the results to a specific database of a specific server that was not included in the above SQL Query. Log Dec 26 2008 12:00PM Log Dec 27 2008 4:00AM Log Dec 27 2008 8:00AM Log Dec 27 2008 12:00PM Log Dec 27 2008 4:00PM Log Dec 27 2008 8:00PM Database Dec 27 2008 10:01PM Log Dec 28 2008 12:00AM Log Dec 28 2008 4:00AM Log Dec 28 2008 8:00AM
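
    One thing worth ruling out first: LEFT()/RIGHT() hand the chart plain strings, and an XY scatter needs real numeric or datetime values on both axes before min/max scales mean anything. A sketch of returning proper datetimes instead (SQL Server 2005 syntax; drop the inner CAST if backup_finish_date is already a datetime column):

        SELECT TOP (10)
               backup_type,
               -- midnight of the backup day, for the X axis
               CAST(CONVERT(varchar(10), CAST(backup_finish_date AS datetime), 120) AS datetime) AS BackupDate,
               -- time of day anchored at 1900-01-01, for the Y axis
               CAST(CONVERT(varchar(8),  CAST(backup_finish_date AS datetime), 108) AS datetime) AS BackupTime
        FROM DBARepository.Backup_History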

    Read the article

  • Myself throwing NullReferenceException... needs help

    - by Amit Ranjan
    I know it might be a weird question (and a weird title too), but I need your help. I am a .NET dev, working on the platform for the last 1.5 years. I am a bit confused about the term we usually use, "a good programmer". I don't know what the qualities of a good programmer are. Is it the guy who writes bug-free code? Or who can develop applications entirely on his own? Or lots of other points. I don't know... But as far as I am concerned, I know I am not a good programmer yet; I am still in the learning phase and have a lot to learn in the coming days. So I would ask you to please help me with these two problems of mine. My first problem is proper error handling, which is one of the most debated aspects of programming. We all know we use ` try { } catch { } finally { } ` in our code to manage exceptions. But even if I use try { } catch(Exception ex) { throw ex } finally { }, different people have different views. I still don't know the good way to handle errors. I can write code and use try-catch, but I still feel I lack something. When I look at the code generated by .NET FX tools, even they use throw ex or `throw new Exception("this is my exception")`. I am just wondering what the best way is to achieve the above. They all mean the same thing, so why do we avoid some of them? If one has real demerits, then it should be made obsolete. Anyway, I still don't have an answer to "how do I handle errors efficiently?". I generally follow try-catch(Exception ex){throw ex}, and usually get stuck in debates with leads about why I follow this and not that... 2. Converting your entire code into modules using design patterns or other OOP concepts: how do you decide which architecture or pattern will be the best for an upcoming application, based on how it works, its flow, etc.? I need to know what you can see that I can't. I know I don't have that much experience, but I can say from my own experience that experience does not come from degrees/certificates or from the successes you had; it comes from the failures you faced and the situations you got stuck in. Please help me out.
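
    On the narrow technical point in the first question, there is one rule that is not really debated: inside a catch block, "throw;" rethrows the original exception with its stack trace intact, "throw ex;" restarts the stack trace at that line, and wrapping should keep the original as the InnerException. A small C# illustration (DoWork, Log and AppSpecificException are made-up names):

        try
        {
            DoWork();
        }
        catch (InvalidOperationException ex)
        {
            Log(ex);    // do whatever handling is genuinely useful here
            throw;      // rethrow: the original stack trace is preserved
            // throw ex;                                          // would reset the stack trace - avoid
            // throw new AppSpecificException("work failed", ex);  // wrap, keeping the cause
        }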

    Read the article

  • C program using inotify to monitor multiple directories along with sub-directories?

    - by lakshmipathi
    I have a program which monitors a directory (/test) and notifies me. I want to improve this to monitor another directory (say /opt) as well, and also to monitor its subdirectories. Currently I get notified if any changes are made to files directly under /test, but I don't get any notification if changes are made in a sub-directory of /test, for example touch /test/sub-dir/files.txt. Here is my current code - hope this will help: /* Simple example for inotify in Linux. inotify has 3 main functions: inotify_init1 to initialize, inotify_add_watch to add a monitor, and inotify_rm_watch to remove a monitor. */ #include <stdio.h> #include <unistd.h> #include <sys/inotify.h> int main(){ int fd,wd,wd1,i=0,len=0; char pathname[100],buf[1024]; struct inotify_event *event; fd=inotify_init1(IN_NONBLOCK); /* watch /test directory for any activity and report it back to me */ wd=inotify_add_watch(fd,"/test",IN_ALL_EVENTS); while(1){ //read 1024 bytes of events from fd into buf i=0; len=read(fd,buf,1024); while(i<len){ event=(struct inotify_event *) &buf[i]; /* check for changes */ if(event->mask & IN_OPEN) printf("%s :was opened\n",event->name); if(event->mask & IN_MODIFY) printf("%s : modified\n",event->name); if(event->mask & IN_ATTRIB) printf("%s :meta data changed\n",event->name); if(event->mask & IN_ACCESS) printf("%s :was read\n",event->name); if(event->mask & IN_CLOSE_WRITE) printf("%s :file opened for writing was closed\n",event->name); if(event->mask & IN_CLOSE_NOWRITE) printf("%s :file opened not for writing was closed\n",event->name); if(event->mask & IN_DELETE_SELF) printf("%s :deleted\n",event->name); if(event->mask & IN_DELETE) printf("%s :deleted\n",event->name); /* update index to start of next event */ i+=sizeof(struct inotify_event)+event->len; } } }
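
    inotify watches are not recursive: each directory needs its own inotify_add_watch call on the same fd, so /opt is just one more call, and the subdirectories of /test each need a watch too (including ones created later, which show up as IN_CREATE events with IN_ISDIR set). A sketch of walking an existing tree with nftw and adding a watch per directory, error handling trimmed:

        #define _XOPEN_SOURCE 500
        #include <ftw.h>
        #include <stdio.h>
        #include <sys/inotify.h>

        static int ifd;   /* the fd returned by inotify_init1() */

        static int add_watch_cb(const char *path, const struct stat *sb,
                                int typeflag, struct FTW *ftwbuf)
        {
            if (typeflag == FTW_D)                 /* directories only */
                if (inotify_add_watch(ifd, path, IN_ALL_EVENTS) < 0)
                    perror(path);
            return 0;                              /* keep walking */
        }

        /* after inotify_init1():
               ifd = fd;
               nftw("/test", add_watch_cb, 20, FTW_PHYS);    /test and all its subdirs
               nftw("/opt",  add_watch_cb, 20, FTW_PHYS);    the second tree
        */

    To know which directory an event came from, keep a table mapping each returned watch descriptor to its path, since event->name is only the entry name inside the watched directory.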

    Read the article

  • CDI SessionScoped Bean results in two instances in same session

    - by Ryan
    I've got two instances of a SessionScoped CDI bean for the same session. I was under the impression that there would be one instance generated for me by CDI, but it generated two. Am I misunderstanding how CDI works, or did I find a bug? Here is the bean code: package org.mycompany.myproject.session; import java.io.Serializable; import javax.enterprise.context.SessionScoped; import javax.faces.context.FacesContext; import javax.inject.Named; import javax.servlet.http.HttpSession; @Named @SessionScoped public class MyBean implements Serializable { private String myField = null; public MyBean() { System.out.println("MyBean constructor called"); FacesContext fc = FacesContext.getCurrentInstance(); HttpSession session = (HttpSession)fc.getExternalContext().getSession(false); String sessionId = session.getId(); System.out.println("Session ID: " + sessionId); } public String getMyField() { return myField; } public void setMyField(String myField) { this.myField = myField; } } Here is the Facelet code: <?xml version='1.0' encoding='UTF-8' ?> <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml" xmlns:h="http://java.sun.com/jsf/html" xmlns:f="http://java.sun.com/jsf/core"> <f:view contentType="text/html" encoding="UTF-8"> <h:head> <title>Test</title> </h:head> <h:body> <h:form id="form"> <h:inputText value="#{myBean.myField}"/> <h:commandButton value="Submit"/> </h:form> </h:body> </f:view> </html> Here is the output from deployment and navigating to page: INFO: Loading application org.mycompany_myproject_war_1.0-SNAPSHOT at /myproject INFO: org.mycompany_myproject_war_1.0-SNAPSHOT was successfully deployed in 8,237 milliseconds. INFO: MyBean constructor called INFO: Session ID: 175355b0e10fe1d0778238bf4634 INFO: MyBean constructor called INFO: Session ID: 175355b0e10fe1d0778238bf4634 Using GlassFish 3.0.1
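
    One thing to rule out before calling it a container bug: @SessionScoped is a normal scope, so CDI injects a proxy, and constructing that proxy also runs the bean class's constructor - two constructor printouts therefore do not by themselves prove two bean instances. Moving the logging into a @PostConstruct callback shows how many contextual instances really exist for the session; a sketch (plus an import of javax.annotation.PostConstruct):

        @Named
        @SessionScoped
        public class MyBean implements Serializable {

            private String myField;

            @PostConstruct
            public void init() {
                // runs once per real contextual instance, never for the client proxy
                System.out.println("MyBean instance created");
            }

            public String getMyField() { return myField; }
            public void setMyField(String myField) { this.myField = myField; }
        }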

    Read the article

  • Registry Problem

    - by Dominik
    I made a launcher for my game server (World of Warcraft). I want to get the install path of the game, browsed to by the user. I'm using this code to browse and get the install path, then set some other strings from the install path string, and then store them in my registry key. using System; using System.Drawing; using System.Reflection; using System.Collections; using System.ComponentModel; using System.Windows.Forms; using System.Data; using Microsoft.Win32; using System.IO; using System.Net.NetworkInformation; using System.Diagnostics; using System.Runtime; using System.Runtime.InteropServices; using System.Security; using System.Security.Cryptography; using System.Text; using System.Net; using System.Linq; using System.Net.Sockets; using System.Collections.Generic; using System.Threading; namespace WindowsFormsApplication1 { public partial class Form1 : Form { public Form1() { InitializeComponent(); } string InstallPath, WoWExe, PatchPath; private void Form1_Load(object sender, EventArgs e) { RegistryKey LocalMachineKey_Existence; MessageBox.Show("Browse your install location.", "Select Wow.exe"); OpenFileDialog BrowseInstallPath = new OpenFileDialog(); BrowseInstallPath.Filter = "wow.exe|*.exe"; if (BrowseInstallPath.ShowDialog() == DialogResult.OK) { InstallPath = System.IO.Path.GetDirectoryName(BrowseInstallPath.FileName); WoWExe = InstallPath + "\\wow.exe"; PatchPath = InstallPath + "\\Data\\"; LocalMachineKey_Existence = Registry.LocalMachine.CreateSubKey(@"SOFTWARE\ExistenceWoW"); LocalMachineKey_Existence.SetValue("InstallPathLocation", InstallPath); LocalMachineKey_Existence.SetValue("PatchPathLocation", PatchPath); LocalMachineKey_Existence.SetValue("WoWExeLocation", WoWExe); } } } } The problem is: on some computers, it doesn't store the paths like it should. For example, your wow.exe is in C:\ASD\wow.exe; you select it with the browse window, and the program should then store C:\ASD\Data\ in the Existence registry key, but it stores it like this: C:\ASDData, so it forgets a backslash. Look at this picture: http://img21.imageshack.us/img21/2829/regedita.jpg My program works fine on my PC and on my friend's PC, but on some PCs this "bug" comes out. I have Windows 7 with .NET 3.5. Please help me.
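
    Whatever is eating the separator on those machines, building the paths with Path.Combine instead of manual concatenation removes any guesswork about backslashes; a sketch of the three assignments rewritten (System.IO is already in the using list):

        InstallPath = Path.GetDirectoryName(BrowseInstallPath.FileName);
        WoWExe      = Path.Combine(InstallPath, "wow.exe");
        // Path.Combine inserts exactly one separator between the two parts
        PatchPath   = Path.Combine(InstallPath, "Data") + Path.DirectorySeparatorChar;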

    Read the article

  • can we include maven pmd plugin execution within build goal ?

    - by RN
    I wanted to generate the PMD report while building the project, so I added the plugin to the build section of my pom.xml, but it still doesn't execute until I explicitly call mvn clean install pmd:pmd. I want it to execute with mvn clean install itself. Is that possible? My pom entries are as under: <build> <plugins> <plugin> <artifactId>maven-pmd-plugin</artifactId> <version>2.4</version> <configuration> <skip>false</skip> <targetJdk>${compile.source}</targetJdk> <rulesets> <ruleset>./current.pmd.rules.xml</ruleset> </rulesets> <excludes> <exclude>com/cm/**/*.java</exclude> <exclude>com/sm/**/*.java</exclude> </excludes> <linkXref>true</linkXref> <failOnViolation>true</failOnViolation> <executions> <execution> <goals> <goal>check</goal> <goal>cpd-check</goal> </goals> </execution> </executions> </configuration> </plugin> <plugin> <groupId>org.apache.maven.plugins</groupId> <artifactId>maven-jxr-plugin</artifactId> </plugin> <plugin> <artifactId>maven-project-info-reports-plugin</artifactId> <version>2.0.1</version> </plugin> </plugins> </build> Thanks in advance.
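
    One likely cause is visible in the POM itself: the <executions> element is nested inside <configuration>, where Maven ignores it, so the goals never get bound to a lifecycle phase. Moved out to be a sibling of <configuration> and given a phase, the checks run during a plain mvn clean install; a sketch of just the changed part:

        <plugin>
          <artifactId>maven-pmd-plugin</artifactId>
          <version>2.4</version>
          <configuration>
            <!-- same configuration as above, minus the <executions> block -->
          </configuration>
          <executions>
            <execution>
              <phase>verify</phase>   <!-- runs as part of "mvn clean install" -->
              <goals>
                <goal>check</goal>
                <goal>cpd-check</goal>
              </goals>
            </execution>
          </executions>
        </plugin>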

    Read the article

  • ReadFile doesn't work asynchronously on Win7 and Win2k8

    - by f0b0s
    According to MSDN ReadFile can read data 2 different ways: synchronously and asynchronously. I need the second one. The folowing code demonstrates usage with OVERLAPPED struct: #include <windows.h> #include <stdio.h> #include <time.h> void Read() { HANDLE hFile = CreateFileA("c:\\1.avi", GENERIC_READ, 0, NULL, OPEN_EXISTING, FILE_FLAG_OVERLAPPED, NULL); if ( hFile == INVALID_HANDLE_VALUE ) { printf("Failed to open the file\n"); return; } int dataSize = 256 * 1024 * 1024; char* data = (char*)malloc(dataSize); memset(data, 0xFF, dataSize); OVERLAPPED overlapped; memset(&overlapped, 0, sizeof(overlapped)); printf("reading: %d\n", time(NULL)); BOOL result = ReadFile(hFile, data, dataSize, NULL, &overlapped); printf("sent: %d\n", time(NULL)); DWORD bytesRead; result = GetOverlappedResult(hFile, &overlapped, &bytesRead, TRUE); // wait until completion - returns immediately printf("done: %d\n", time(NULL)); CloseHandle(hFile); } int main() { Read(); } On Windows XP output is: reading: 1296651896 sent: 1296651896 done: 1296651899 It means that ReadFile didn't block and returned imediatly at the same second, whereas reading process continued for 3 seconds. It is normal async reading. But on windows 7 and windows 2008 I get following results: reading: 1296661205 sent: 1296661209 done: 1296661209. It is a behavior of sync reading. MSDN says that async ReadFile sometimes can behave as sync (when the file is compressed or encrypted for example). But the return value in this situation should be TRUE and GetLastError() == NO_ERROR. On Windows 7 I get FALSE and GetLastError() == ERROR_IO_PENDING. So WinApi tells me that it is an async call, but when I look at the test I see that it is not! I'm not the only one who found this "bug": read the comment on ReadFile MSDN page. So what's the solution? Does anybody know? It is been 14 months after Denis found this strange behavior.
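
    Microsoft's "asynchronous disk I/O appears as synchronous" guidance lists cache-manager (buffered) reads among the cases where an overlapped ReadFile may complete on the calling thread, so one experiment worth trying is taking the cache out of the picture with FILE_FLAG_NO_BUFFERING - at the price of sector-aligned buffers, offsets and lengths. A sketch of only the changed lines (page alignment from VirtualAlloc is assumed to satisfy the drive's sector size; query the real value before relying on it):

        /* open without the cache manager in the path */
        HANDLE hFile = CreateFileA("c:\\1.avi", GENERIC_READ, 0, NULL, OPEN_EXISTING,
                                   FILE_FLAG_OVERLAPPED | FILE_FLAG_NO_BUFFERING, NULL);

        /* buffer must be sector-aligned; VirtualAlloc returns page-aligned memory */
        char* data = (char*)VirtualAlloc(NULL, dataSize, MEM_COMMIT | MEM_RESERVE,
                                         PAGE_READWRITE);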

    Read the article

  • Greasemonkey is getting an empty document.body on select Google pages.

    - by Brock Adams
    Hi, I have a Greasemonkey script that processes Google search results. But it's failing in a few instances, when xpath searches (and document body) appear to be empty. Running the code in Firebug's console works every time. It only fails in a Greasemonkey script. Greasemonkey sees an empty document.body. I've boiled the problem down to a test, greasemonkey script, below. I'm using Firefox 3.5.9 and Greasemonkey 0.8.20100408.6 (but earlier versions had the same problem). Problem: Greasemonkey sees an empty document.body. Recipe to Duplicate: Install the Greasemonkey script. Open a new tab or window. Navigate to Google.com (http://www.google.com/). Search on a simple term like "cats". Check Firefox's Error console (Ctrl-shift-J) or Firebug's console. The script will report that document body is empty. Hit refresh. The script will show a good result (document body found). Note that the failure only reliably appears on Google results obtained this way, and on a new tab/window. Turn javascript off globally (javascript.enabled set to false in about:config). Repeat steps 2 thru 5. Only now the Greasemonkey script will work. It seems that Google javascript is killing the DOM tree for greasemonkey, somehow. I've tried a time-delayed retest and even a programmatic refresh; the script still fails to see the document body. Test Script: // // ==UserScript== // @name TROUBLESHOOTING 2 snippets // @namespace http://www.google.com/ // @description For code that has funky misfires and defies standard debugging. // @include http://*/* // ==/UserScript== // function LocalMain (sTitle) { var sUserMessage = ''; //var sRawHtml = unsafeWindow.document.body.innerHTML; //-- unsafeWindow makes no difference. var sRawHtml = document.body.innerHTML; if (sRawHtml) { sRawHtml = sRawHtml.replace (/^\s\s*/, ''). substr (0, 60); sUserMessage = sTitle + ', Doc body = ' + sRawHtml + ' ...'; } else { sUserMessage = sTitle + ', Document body seems empty!'; } if (typeof (console) != "undefined") { console.log (sUserMessage); } else { if (typeof (GM_log) != "undefined") GM_log (sUserMessage); else if (!sRawHtml) alert (sUserMessage); } } LocalMain ('Preload'); window.addEventListener ("load", function() {LocalMain ('After load');}, false);

    Read the article

  • Monitoring UDP socket in glib(mm) eats up CPU time

    - by Gyorgy Szekely
    Hi, I have a GTKmm Windows application (built with MinGW) that receives UDP packets (no sending). The socket is native winsock and I use glibmm IOChannel to connect it to the application main loop. The socket is read with recvfrom. My problem is: this setup eats 25% of the CPU time on a 3GHz workstation. Can somebody tell me why? The application is idle in this case, and if I remove the UDP code, CPU usage drops down to almost zero. As the application has to perform some CPU-intensive tasks, I could imagine better ways to spend that 25%. Here are some code excerpts: (sorry for the printf's ;) ) /* bind */ void UDPInterface::bindToPort(unsigned short port) { struct sockaddr_in target; WSADATA wsaData; target.sin_family = AF_INET; target.sin_port = htons(port); target.sin_addr.s_addr = 0; if ( WSAStartup ( 0x0202, &wsaData ) ) { printf("WSAStartup failed!\n"); exit(0); // :) WSACleanup(); } sock = socket( AF_INET, SOCK_DGRAM, 0 ); if (sock == INVALID_SOCKET) { printf("invalid socket!\n"); exit(0); } if (bind(sock,(struct sockaddr*) &target, sizeof(struct sockaddr_in) ) == SOCKET_ERROR) { printf("failed to bind to port!\n"); exit(0); } printf("[UDPInterface::bindToPort] listening on port %i\n", port); } /* read */ bool UDPInterface::UDPEvent(Glib::IOCondition io_condition) { recvfrom(sock, (char*)buf, BUF_SIZE*4, 0, NULL, NULL); /* process packet... */ } /* glibmm connect */ Glib::RefPtr<Glib::IOChannel> channel = Glib::IOChannel::create_from_win32_socket(udp.sock); Glib::signal_io().connect( sigc::mem_fun(udp, &UDPInterface::UDPEvent), channel, Glib::IO_IN ); I've read here in some other question, and also in the glib docs (g_io_channel_win32_new_socket()), that the socket is put into nonblocking mode and that this is "a side-effect of the implementation and unavoidable". Does this explain the CPU usage? It's not clear to me. Whether I use glib to access the socket or call recvfrom() directly doesn't seem to make much difference, since CPU is used up before any packet arrives and the read handler gets invoked. Also, the glibmm docs state that it's ok to call recvfrom() even if the socket is polled (Glib::IOChannel::create_from_win32_socket()). I've tried compiling the program with -pg and created a per-function CPU usage report with gprof. This wasn't useful, because the time is not spent in my program but in some external glib/glibmm dll.

    Read the article

  • IE browser caching and the jQuery Form Plugin

    - by Harfleur
    Like so many lost souls before me, I'm floundering in the snake pit that is Ajax form submission and IE browser caching. I'm trying to write a simple script using the jQuery Form Plugin to Ajaxify Wordpress comments. It's working fine in Firefox, Chrome, Safari, et. al., but in IE, the response text is cached with the result that Ajax is pulling in the wrong comment. jQuery(this).ajaxSubmit({ success: function(data) { var response = $("<ol>"+data+"</ol>"); response.find('.commentlist li:last').hide().appendTo(jQuery('.commentlist')).slideDown('slow'); } }); ajaxSubmit sends the comment to wp-comments-post.php, which inelegantly spits back the entire page as a response. So, despite the fact that it's ugly as toads, I'm sticking the response text in a variable, using :last to isolate the most recent comment, and sliding it down in its place. IE, however, is returning the cached version of the page, which doesn't include the new comment. So ".commentlist li:last" selects the previous comment, a duplicate of which then uselessly slides down beneath the original. I've tried setting "cache: false" in the ajaxSubmit options, but it has no effect. I've tried setting a url option and tacking on a random number or timestamp, but it winds up being attached to the POST that submits the comment to the server rather than the GET that returns the response, and so has no effect. I'm not sure what else to try. Everything works fine in IE if I turn off browser caching, but that's obviously not something I can expect anyone viewing the page to do. Any help will be hugely appreciated. Thanks in advance! EDIT WITH A PROGRESS REPORT: A couple of people have suggested using PHP headers to prevent caching, and this does indeed work. The trouble is that wp-comments-post is spitting back the entire page when a new comment is submitted, and the only way I can see to add headers is to put them in the Wordpress post template, which disables caching on all posts at all times--not quite the behavior I'm looking for. Is there a way to set a php conditional--"if is_ajax" or something like that--that would keep the headers from being applied during regular pageloads, but plug them in if the page was called by an Ajax GET?
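
    jQuery (and therefore the Form Plugin) sends an X-Requested-With: XMLHttpRequest header with every Ajax call, which is exactly the conditional asked for in the edit: emit no-cache headers only when the template is being fetched by Ajax. A sketch for the top of the comments/post template (the header values are standard no-cache boilerplate; where this lives in the theme is up to you):

        <?php
        $is_ajax = isset($_SERVER['HTTP_X_REQUESTED_WITH'])
                && strtolower($_SERVER['HTTP_X_REQUESTED_WITH']) === 'xmlhttprequest';

        if ($is_ajax) {
            // stop IE from handing the Ajax GET a stale cached copy
            header('Cache-Control: no-cache, no-store, must-revalidate');
            header('Pragma: no-cache');
            header('Expires: Thu, 01 Jan 1970 00:00:00 GMT');
        }
        ?>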

    Read the article

  • How to get Firefox to not continue to show "Transferring data from..." in browser status bar after a

    - by Edward Tanguay
    The following silverlight demo loads and displays a text file from a server. However, in Firefox (but not Explorer or Chrome) after you click the button and the text displays, the status bar continues to show "Transferring data from test.development..." which erroneously gives the user the belief that something is still loading. I've noticed that if you click on another Firefox tab and then back on the original one, the message goes away, so it seems to be just a Firefox bug that doesn't clear the status bar automatically. Is there a way to clear the status bar automatically in Firefox? or a way to explicitly tell it that the async loading is finished so it can clear the status bar itself? XAML: <UserControl x:Class="TestLoad1111.MainPage" xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"> <StackPanel Margin="10"> <StackPanel HorizontalAlignment="Left" Orientation="Horizontal" Margin="0 0 0 10"> <Button Content="Load Text" Click="Button_LoadText_Click"/> </StackPanel> <TextBlock Text="{Binding Message}" /> </StackPanel> </UserControl> Code Behind: using System; using System.Net; using System.Windows; using System.Windows.Controls; using System.ComponentModel; namespace TestLoad1111 { public partial class MainPage : UserControl, INotifyPropertyChanged { #region ViewModelProperty: Message private string _message; public string Message { get { return _message; } set { _message = value; OnPropertyChanged("Message"); } } #endregion public MainPage() { InitializeComponent(); DataContext = this; } private void Button_LoadText_Click(object sender, RoutedEventArgs e) { WebClient webClientTextLoader = new WebClient(); webClientTextLoader.DownloadStringAsync(new Uri("http://test.development:111/testdata/test.txt?" + Helpers.GetRandomizedSuffixToPreventCaching())); webClientTextLoader.DownloadStringCompleted += new DownloadStringCompletedEventHandler(webClientTextLoader_DownloadStringCompleted); } void webClientTextLoader_DownloadStringCompleted(object sender, DownloadStringCompletedEventArgs e) { Message = e.Result; } #region INotifiedProperty Block public event PropertyChangedEventHandler PropertyChanged; protected void OnPropertyChanged(string propertyName) { PropertyChangedEventHandler handler = PropertyChanged; if (handler != null) { handler(this, new PropertyChangedEventArgs(propertyName)); } } #endregion } public static class Helpers { private static Random random = new Random(); public static int GetRandomizedSuffixToPreventCaching() { return random.Next(10000, 99999); } } }

    Read the article

  • Does this incorporate JavaScript closures?

    - by alex
    In trying to learn JavaScript closures, I've confused myself a bit. From what I've gathered over the web, a closure is... Declaring a function within another function, and that inner function has access to its parent function's variables, even after that parent function has returned. Here is a small sample of script from a recent project. It allows text in a div to be scrolled up and down by buttons. var pageScroll = (function() { var $page, $next, $prev, canScroll = true, textHeight, scrollHeight; var init = function() { $page = $('#secondary-page'); // reset text $page.scrollTop(0); textHeight = $page.outerHeight(); scrollHeight = $page.attr('scrollHeight'); if (textHeight === scrollHeight) { // not enough text to scroll return false; }; $page.after('<div id="page-controls"><button id="page-prev">prev</button><button id="page-next">next</button></div>'); $next = $('#page-next'); $prev = $('#page-prev'); $prev.hide(); $next.click(scrollDown); $prev.click(scrollUp); }; var scrollDown = function() { if ( ! canScroll) return; canScroll = false; var scrollTop = $page.scrollTop(); $prev.fadeIn(500); if (scrollTop == textHeight) { // can we scroll any lower? $next.fadeOut(500); } $page.animate({ scrollTop: '+=' + textHeight + 'px'}, 500, function() { canScroll = true; }); }; var scrollUp = function() { $next.fadeIn(500); $prev.fadeOut(500); $page.animate({ scrollTop: 0}, 500); }; $(document).ready(init); }()); Does this example use closures? I know it has functions within functions, but is there a case where the outer variables being preserved is being used? Am I using them without knowing it? Thanks Update Would this make a closure if I placed this beneath the $(document).ready(init); statement? return { scrollDown: scrollDown }; Could it then be, if I wanted to make the text scroll down from anywhere else in JavaScript, I could do pageScroll.scrollDown(); I'm going to have a play around on http://www.jsbin.com and report back
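
    On the update: yes - returning an object of functions from the immediately-invoked function is the classic module use of a closure, and the returned scrollDown would keep access to $page, $next, $prev and canScroll after the outer function has finished. A stripped-down sketch of the shape:

        var pageScroll = (function () {
            var count = 0;                       // private; survives via the closure

            function scrollDown() {
                count += 1;
                console.log('scrolled ' + count + ' times');
            }

            return { scrollDown: scrollDown };   // expose only what callers need
        }());

        pageScroll.scrollDown();   // "scrolled 1 times"
        pageScroll.scrollDown();   // "scrolled 2 times" - count was preserved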

    Read the article

  • How do you combine "Revision Control" with "WorkFlow" for R?

    - by Tal Galili
    Hello all, I remember coming across R users writing that they use "revision control" (e.g. "source control"), and I am curious to know: how do you combine "revision control" with your statistical analysis workflow? Two (very) interesting discussions talk about how to deal with the workflow, but neither of them refers to the revision control element: http://stackoverflow.com/questions/1266279/how-to-organize-large-r-programs http://stackoverflow.com/questions/1429907/workflow-for-statistical-analysis-and-report-writing A Long Update To The Question: Following some of the people's answers, and Dirk's question in the comments, I would like to direct my question a bit more. After reading the Wiki article about "revision control" (which I was previously not familiar with), it was clear to me that when using revision control, what one does is build a development structure of one's code. This structure either leads to a "final product" or to several branches. When building something like, let's say, a website, there is usually one end product you work towards (the website), with some prototypes along the way. But when doing a statistical analysis, the work (in my view) is different. Sometimes you know where you want to get to. But more often, you explore: explore cleaning the dataset, explore different methods for statistical analysis, and ask various questions of your data (and I am writing this knowing how Frank Harrell and other experienced statisticians feel about data dredging). That is why the workflow question for statistical programming is (in my view) a serious and deep question, raising many issues. The simpler ones are technical: Which revision control software do you use (and why)? Which IDE do you use (and why)? The more interesting questions are about the work process: How do you structure your files? What do you keep as a separate file and what as a revision? Or, asking it a different way - what should be a "branch" and what should be a "sub project" in your code? For example: when starting to explore your data, should a plot be created and then erased because it didn't lead anywhere (but kept as a revision), or should there be a backup file of that path? How you resolve this tension was my initial curiosity. The second question is "what might I be missing?". What rules (of thumb) should one follow so as to avoid common pitfalls when doing statistical programming with version control? In my intuition, I feel that statistical programming is inherently different from software development (I am writing this without being a real expert in statistical programming, and even less so in software development). That's why I am unsure which of the lessons I have read here about version control would be applicable. Thanks a lot, Tal
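
    For the mechanical half of the question, the setup itself is small; one possible shape for an exploratory R project under Git (Git and the file names here are only examples), with dead-end explorations kept on branches instead of ever-multiplying copies of scripts:

        # one-time setup at the project root
        git init
        printf '%s\n' '*.RData' 'figures/' > .gitignore   # derived objects: regenerate, do not version
        git add load.R clean.R func.R do.R .gitignore
        git commit -m "initial analysis scripts"

        # a speculative analysis lives on a branch, not in copy-of-a-copy files
        git checkout -b explore-mixed-models
        # ...edit and commit as the exploration proceeds...
        git checkout master      # abandon or merge later; history keeps the discarded plot code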

    Read the article

  • base64-Encoding breaks smime-encrypted emaildata

    - by Streuner
    I'm using Mime::Lite to create and send E-Mails. Now I need to add support for S/Mime-encryption and finally could encrypt my E-Mail (the only Perllib I could install seems broken, so I'm using a systemcall and openssl smime), but when I try to create a mime-object with it, the E-Mail will be broken as soon as I set the Content-Transfer-Encoding to base64. To make it even more curious, it happens only if I set it via $myMessage->attr. If I'm using the constructor -new everything is fine, besides a little warning which I suppress by using MIME::Lite->quiet(1); Is it a bug or my fault? Here are the two ways how I create the mime-object. Setting the Content-Transfer-Encoding via construtor and suppress the warning: MIME::Lite->quiet(1); my $msgEncr = MIME::Lite->new(From =>'[email protected]', To => '[email protected]', Subject => 'SMIME Test', Data => $myEncryptedMessage, 'Content-Transfer-Encoding' => 'base64'); $msgEncr->attr('Content-Disposition' => 'attachment'); $msgEncr->attr('Content-Disposition.filename' => 'smime.p7m'); $msgEncr->attr('Content-Type' => 'application/x-pkcs7-mime'); $msgEncr->attr('Content-Type.smime-type' => 'enveloped-data'); $msgEncr->attr('Content-Type.name' => 'smime.p7m'); $msgEncr->send; MIME::Lite->quiet(0); Setting the Content-Transfer-Encoding via $myMessage->attr which breaks the encrypted Data, but won't cause a warning: my $msgEncr = MIME::Lite->new(From => '[email protected]', To => '[email protected]', Subject => 'SMIME Test', Data => $myEncryptedMessage); $msgEncr->attr('Content-Disposition' => 'attachment'); $msgEncr->attr('Content-Disposition.filename' => 'smime.p7m'); $msgEncr->attr('Content-Type' => 'application/x-pkcs7-mime'); $msgEncr->attr('Content-Type.smime-type' => 'enveloped-data'); $msgEncr->attr('Content-Type.name' => 'smime.p7m'); $msgEncr->attr('Content-Transfer-Encoding' => 'base64'); $msgEncr->send; I just don't get why my message is broken when I'm using the attribute-setter. Thanks in advance for your help! Besides that i'm unable to attach any file to this E-Mail without breaking the encrypted message again.

    Read the article
