Search Results

Search found 1172 results on 47 pages for 'measure'.

Page 39/47

  • Calculating confidence intervals for a non-normal distribution

    - by Josiah
    Hi all, First, I should specify that my knowledge of statistics is fairly limited, so please forgive me if my question seems trivial or perhaps doesn't even make sense.

    I have data that doesn't appear to be normally distributed. Typically, when I plot confidence intervals, I would use the mean +- 2 standard deviations, but I don't think that is acceptable for a non-normal distribution. My sample size is currently set to 1000 samples, which would seem like enough to determine whether the distribution is normal or not.

    I use Matlab for all my processing, so are there any functions in Matlab that would make it easy to calculate the confidence intervals (say 95%)? I know there are the 'quantile' and 'prctile' functions, but I'm not sure if they are what I need to use. The function 'mle' also returns confidence intervals for normally distributed data, although you can also supply your own pdf. Could I use ksdensity to create a pdf for my data, then feed that pdf into the mle function to give me confidence intervals?

    Also, how would I go about determining whether my data is normally distributed? I can currently tell just by looking at the histogram or the pdf from ksdensity, but is there a way to measure it quantitatively? Thanks!
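
    A minimal sketch of the percentile approach in Python/SciPy (MATLAB's prctile covers the same ground; the lognormal data here is only a stand-in for the asker's sample):

        import numpy as np
        from scipy import stats

        data = np.random.lognormal(size=1000)    # stand-in for a skewed, non-normal sample

        # 95% interval straight from the empirical quantiles -- no normality assumed
        lo, hi = np.percentile(data, [2.5, 97.5])

        # D'Agostino-Pearson test: a small p-value rejects normality quantitatively
        statistic, p_value = stats.normaltest(data)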

    Read the article

  • use proxy in python to fetch a webpage

    - by carmao
    I am trying to write a function in Python to use a public anonymous proxy and fetch a webpage, but I got a rather strange error. The code (I have Python 2.4):

        import urllib2

        def get_source_html_proxy(url, pip, timeout):
            # timeout in seconds (maximum number of seconds the code is willing to wait
            # in case there is a proxy that is not working; then it gives up)
            proxy_handler = urllib2.ProxyHandler({'http': pip})
            opener = urllib2.build_opener(proxy_handler)
            opener.addheaders = [('User-agent', 'Mozilla/5.0')]
            urllib2.install_opener(opener)
            req = urllib2.Request(url)
            sock = urllib2.urlopen(req)
            timp = 0  # a counter that is going to measure the time until the result (webpage) is returned
            timpLimita = 50000000 * timeout  # 5 million is about 1 second
            while 1:
                data = sock.read(1024)
                timp = timp + 1
                if len(data) < 1024:
                    break
                if timp == timpLimita:
                    break
            if timp == timpLimita:
                print pip + ": Connection is working, but the webpage is fetched in more than 50 seconds. This proxy returns the following IP: " + str(data)
                return str(data)
            else:
                print "This proxy " + pip + " = good proxy. It returns the following IP: " + str(data)
                return str(data)

        # Now, I call the function to test it for one single proxy (IP:port) that does not
        # support user and password (a public high-anonymity proxy).
        # (I put a proxy that I know is working -- slow, but working.)
        rez = get_source_html_proxy("http://www.whatismyip.com/automation/n09230945.asp", "93.84.221.248:3128", 50)
        print rez

    The error:

        Traceback (most recent call last):
          File "./public_html/cgi-bin/teste5.py", line 43, in ?
            rez=get_source_html_proxy("http://www.whatismyip.com/automation/n09230945.asp", "93.84.221.248:3128", 50)
          File "./public_html/cgi-bin/teste5.py", line 18, in get_source_html_proxy
            sock=urllib2.urlopen(req)
          File "/usr/lib64/python2.4/urllib2.py", line 130, in urlopen
            return _opener.open(url, data)
          File "/usr/lib64/python2.4/urllib2.py", line 358, in open
            response = self._open(req, data)
          File "/usr/lib64/python2.4/urllib2.py", line 376, in _open
            '_open', req)
          File "/usr/lib64/python2.4/urllib2.py", line 337, in _call_chain
            result = func(*args)
          File "/usr/lib64/python2.4/urllib2.py", line 573
            lambda r, proxy=url, type=type, meth=self.proxy_open: \
          File "/usr/lib64/python2.4/urllib2.py", line 580, in proxy_open
            if '@' in host:
        TypeError: iterable argument required

    I do not know why the character "@" is an issue (I have no such character in my code. Should I have?). Thanks in advance for your valuable help.
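
    As an aside, the hand-rolled read counter above never measures wall-clock time; a socket-level timeout is the usual way to bound a slow proxy. A minimal sketch of the same fetch using socket.setdefaulttimeout (untested against this exact Python 2.4 setup; the proxy string format is the asker's):

        import socket
        import urllib2

        def fetch_via_proxy(url, proxy, timeout_seconds):
            socket.setdefaulttimeout(timeout_seconds)   # real wall-clock timeout on connects/reads
            opener = urllib2.build_opener(urllib2.ProxyHandler({'http': proxy}))
            opener.addheaders = [('User-agent', 'Mozilla/5.0')]
            return opener.open(url).read()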

    Read the article

  • Custom webserver caching

    - by Mark Kinsella
    I'm working with a custom webserver on an embedded system and having some problems correctly setting my HTTP headers for caching. Our webserver is generating all dynamic content as XML and we're using semi-static XSL files to display it, with some dynamic JSON requests thrown in for good measure along with semi-static images. I say "semi-static" because the problems occur when we need to do a firmware update, which might change the XSL and image files.

    Here's what needs to be done: cache the XSL and image files, and do not cache the XML and JSON responses. I have full control over the HTTP response and am currently:

        - Using ETags with the XSL and image files, using the modified time and size to generate the ETag
        - Setting Cache-Control: no-cache on the XML and JSON responses

    As I said, everything works dandy until a firmware update, when the XSL and image files are sometimes cached. I've seen it work fine with the latest versions of Firefox and Safari but have had some problems with IE.

    I know one solution to this problem would be simply to rename the XSL and image files after each version (e.g. logo-v1.1.png, logo-v1.2.png) and set the Expires header to a date in the future, but this would be difficult with the XSL files and I'd like to avoid it.

    Note: There is a clock on the unit, but it requires the user to set it and might not be 100% reliable, which is what might be causing my caching issues when using ETags.

    What's the best practice that I should employ? I'd like to avoid as many webserver requests as possible, but invalidating old XSL and image files after a software update is the #1 priority.
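
    Since the ETag scheme described above leans on a clock that may be wrong, one clock-independent variation is to fold a version stamp into the ETag. A rough sketch of both header strategies (the firmware_version value is hypothetical -- any counter bumped on each firmware update would do):

        import os

        firmware_version = "1.2"   # hypothetical: bump on every firmware update

        def etag_for(path):
            st = os.stat(path)
            # folding the firmware version in invalidates every old ETag after an
            # update, independent of the unit's (possibly unset) clock
            return '"%s-%x-%x"' % (firmware_version, st.st_size, int(st.st_mtime))

        def headers_for(path, is_dynamic):
            if is_dynamic:                      # XML / JSON responses
                return {'Cache-Control': 'no-cache'}
            return {'ETag': etag_for(path),     # XSL / image files
                    'Cache-Control': 'max-age=0, must-revalidate'}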

    Read the article

  • Powershell - Splitting string into separate components

    - by TheD
    I am writing a script which will basically do the following: read some arguments from a text file: DriveLetter ThreeLetterCode ServerName VolumeLetter Integer, e.g. W MSS SERVER01 C 1.

    These values happen to form a folder destination W:\MSS\ and a filename which follows this naming convention: SERVERNAME_VOLUMELETTER_VOL-b00X-iYYY.spi, where X is the Integer above. The value Y I need to work out later, as this happens to be the number of the incremental image (backups) and I need to work out the latest incremental.

    So at the moment I count the lines in the file and loop for that many lines:

        $lines = Get-Content -Path PostBackupCheck-Textfile.txt | Measure-Object -Line
        for ($i=0; $i -le $lines.Lines; $i++)

    Within this loop I need to do a Get-Content to read off the line I am currently looking at (i.e. line 0, line 1, line 2, as there will be multiple lines in the format written above) and split the line into an array, whereby each part of the line, per the naming convention above, lands in a[0], a[1], a[2], etc.

    The reason for this is that I then need to sort the folder that contains these files, find the latest file by date, take the _iXXX.spi part and place it into the array value a[X], so I then have a complete filename to mount. This value will replace iYYY.spi.

    It's a little complex because I also have to make sure, when I do a Get-ChildItem with -Include before I sort it all by date, that I am only including filenames that match the arguments fed from the text file: so SERVER01_C_VOL-b001-iYYY.spi and not anything else, i.e. not SERVER01_D_VOL-b001-iYYY.spi. Then take the iYYY value from the sorted Get-ChildItem -Include result and place it into the appropriate array item.

    I've literally no idea where to start, so any ideas are appreciated! Hopefully I've explained in enough detail. I have also placed the code on Pastebin: http://pastebin.com/vtFifTW6 Thanks!
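
    Not PowerShell, but the parse-then-match-then-sort flow may be easier to see as a sketch first. The same logic in Python, with the filename pattern taken from the question and the three-digit zero-padding of the b-number assumed from the b001 example:

        import glob
        import os

        def latest_incremental(folder, server, volume, integer):
            # match only this server/volume/integer, e.g. SERVER01_C_VOL-b001-i*.spi
            pattern = os.path.join(folder, "%s_%s_VOL-b%03d-i*.spi" % (server, volume, integer))
            candidates = glob.glob(pattern)
            # newest by modification date; its -iYYY part is the latest incremental
            return max(candidates, key=os.path.getmtime) if candidates else None

        for line in open("PostBackupCheck-Textfile.txt"):
            drive, code, server, volume, integer = line.split()
            folder = drive + ":\\" + code
            newest = latest_incremental(folder, server, volume, int(integer))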

    Read the article

  • Is it possible to do A/B testing by page rather than by individual?

    - by mojones
    Let's say I have a simple ecommerce site that sells 100 different t-shirt designs. I want to do some A/B testing to optimise my sales. Let's say I want to test two different "buy" buttons.

    Normally, I would use A/B testing to randomly assign each visitor to see button A or button B (and try to ensure that the user experience is consistent by storing that assignment in session, cookies, etc.).

    Would it be possible to take a different approach and instead randomly assign each of my 100 designs to use button A or B, and measure the conversion rate as (number of sales of design n) / (pageviews of design n)?

    This approach would seem to have some advantages. I would not have to worry about keeping the user experience consistent -- a given page (e.g. www.example.com/viewdesign?id=6) would always return the same html. If I were to test different prices, it would be far less distressing to the user to see different prices for different designs than different prices for the same design on different computers. I also wonder whether it might be better for SEO -- my suspicion is that Google would "prefer" that it always sees the same html when crawling a page.

    Obviously this approach would only be suitable for a limited number of sites; I was just wondering if anyone has tried it?
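
    If the assignment is made a pure function of the design id, no per-visitor state is needed and every visitor (or crawler) sees the same page. A small sketch of that idea in Python, using hashlib rather than the built-in hash() so the bucket stays stable across processes and restarts:

        import hashlib

        def variant_for_design(design_id):
            # stable bucket per design: the same id always yields the same button
            digest = hashlib.md5(str(design_id).encode()).hexdigest()
            return 'A' if int(digest, 16) % 2 == 0 else 'B'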

    Read the article

  • C: 8x8 -> 16 bit multiply precision guaranteed by integer promotions?

    - by craig-blome
    I'm trying to figure out if the C Standard (C90, though I'm working off Derek Jones' annotated C99 book) guarantees that I will not lose precision multiplying two unsigned 8-bit values and storing them to a 16-bit result. An example statement is as follows:

        unsigned char foo;
        unsigned int foo_u16 = foo * 10;

    Our Keil 8051 compiler (v7.50 at present) will generate a MUL AB instruction which stores the MSB in the B register and the LSB in the accumulator. If I cast foo to unsigned int first:

        unsigned int foo_u16 = (unsigned int)foo * 10;

    then the compiler correctly decides I want an unsigned int there and generates an expensive call to a 16x16 bit integer multiply routine. I would like to argue beyond reasonable doubt that this defensive measure is not necessary.

    As I read the integer promotions described in 6.3.1.1, the effect of the first line shall be as if foo and 10 were promoted to int, the multiplication performed, and the result stored as unsigned int in foo_u16. Since the largest possible product, 255 * 10 = 2550, fits even in a 16-bit int, the promoted multiplication cannot overflow. If the compiler knows an instruction that does 8x8->16 bit multiplication without loss of precision, so much the better; but the precision is guaranteed either way.

    Am I reading this correctly?

    Best regards,
    Craig Blome

    Read the article

  • Modify Jquery UI slider control

    - by Jean-Yves Deman
    I use the jQuery UI slider control. My slider has a width of 200px. I would like the center of my left handle to be shifted 14px to the right, and the center of my right handle to be offset 14px to the left. So the usable track of my slider will measure 172px (200 - 14 - 14), while the background image measures the full 200px. The length of 14px is calculated as follows: width of the handle (28px) / 2.

    Can you help me solve this mystery? Thank you in advance.

        .ui-slider .ui-slider-handle {
            width: 28px;
            height: 28px;
            background: url(images/bulle.png) no-repeat 0 0;
            overflow: hidden;
            position: absolute;
            margin-left: 0px;
            top: -7px;
            border-style: none;
            cursor: help;
        }

    I use margin-left: 0px; to override the default margin-left: -14px. But then my right handle sticks out 28px to the right! Help me please!

    Read the article

  • Extending the .NET type system so the compiler enforces semantic meaning of primitive values in cert

    - by Drew Noakes
    I'm working with geometry a bit at the moment and am converting a lot between degrees and radians. Unfortunately, both of these are represented by double, so there's no compile-time warning/error if I try to pass a value in degrees where radians are expected. I believe F# has a compile-time solution for this (called units of measure). I'd like to do something similar in C#.

    As another example, imagine a SQL library that accepts various query parameters as strings. It'd be good to have a way of enforcing that only clean strings were allowed to be passed in at runtime, and the only way to get a clean string was to pass through some SQL-injection-attack-preventing logic.

    The obvious solution is to wrap the double/string/whatever in a new type to give it the type information the compiler needs. I'm curious whether anyone has an alternative solution. If you do think wrapping is the only/best way, then please go into some of the downsides of the pattern (and any upsides I haven't mentioned too). I'm especially concerned about the performance of abstracted primitive numeric types on my calculations at runtime.
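
    As an analogue from outside C#, a zero-runtime-cost flavour of this wrapping exists in Python's typing.NewType, where the distinct types live only in the static checker and erase to plain floats at runtime -- which speaks to the performance concern above. A sketch:

        import math
        from typing import NewType

        Degrees = NewType('Degrees', float)
        Radians = NewType('Radians', float)

        def to_radians(angle: Degrees) -> Radians:
            return Radians(math.radians(angle))

        def arc_length(radius: float, angle: Radians) -> float:
            return radius * angle

        # a checker such as mypy rejects arc_length(2.0, Degrees(90.0)),
        # while at runtime Degrees/Radians are ordinary floats -- no wrapper cost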

    Read the article

  • Is it impossible to secure .net code (intellectual property)?

    - by JL
    I used to work in JavaScript a lot, and one thing that really bothered my employers was that the source code was too easy to steal. Even with obfuscation, nothing really helped, because we all knew that any competent developer would be able to read that code if they wanted to.

    JS scripts are one thing, but what about SOA projects that have millions invested in IP (intellectual property)? I love .net, and especially C#, but I recently again had to answer the question "If we give this compiled program over to our clients, can their developers reverse engineer it?" I had gone out of my way to obfuscate the code, but I knew it wouldn't take that much for another determined C# developer to get at it.

    So I earnestly pose the question: is it impossible to secure .net code? The considerations I have are as follows:

        - Even regular native executables can be reversed, but not every developer has the skill to do this. It's a lot harder to disassemble a native executable than a .net assembly.
        - Obfuscation will only get you so far, but it does help a little.
        - Why have I never seen any public acknowledgement by Microsoft that anything written in .net is subject to relatively easy IP theft?
        - Why have I never seen a scrap of countermeasure training on any Microsoft site? Why does VS come with a community obfuscator as an optional component?

    Ok, maybe I have just had my head in the sand here, but it's not exactly high on most developers' priority list. Are there any plans to address my concerns in any future version of .net? I'm not knocking .net, but I would like some realistic answers. Thank you -- question marked as subjective and community!

    Read the article

  • GROUP BY ID range?

    - by d0ugal
    Given a data set like this:

        +-----+---------------------+--------+
        | id  | date                | result |
        +-----+---------------------+--------+
        | 121 | 2009-07-11 13:23:24 |     -1 |
        | 122 | 2009-07-11 13:23:24 |     -1 |
        | 123 | 2009-07-11 13:23:24 |     -1 |
        | 124 | 2009-07-11 13:23:24 |     -1 |
        | 125 | 2009-07-11 13:23:24 |     -1 |
        | 126 | 2009-07-11 13:23:24 |     -1 |
        | 127 | 2009-07-11 13:23:24 |     -1 |
        | 128 | 2009-07-11 13:23:24 |     -1 |
        | 129 | 2009-07-11 13:23:24 |     -1 |
        | 130 | 2009-07-11 13:23:24 |     -1 |
        | 131 | 2009-07-11 13:23:24 |     -1 |
        | 132 | 2009-07-11 13:23:24 |     -1 |
        | 133 | 2009-07-11 13:23:24 |     -1 |
        | 134 | 2009-07-11 13:23:24 |     -1 |
        | 135 | 2009-07-11 13:23:24 |     -1 |
        | 136 | 2009-07-11 13:23:24 |     -1 |
        | 137 | 2009-07-11 13:23:24 |     -1 |
        | 138 | 2009-07-11 13:23:24 |      1 |
        | 139 | 2009-07-11 13:23:24 |      0 |
        | 140 | 2009-07-11 13:23:24 |     -1 |
        +-----+---------------------+--------+

    how would I go about grouping the results, say, 5 records at a time? The above is part of the live data; there are over 100,000 result rows in the table and it's growing. Basically I want to measure the change over time, so I want to take a SUM of the result every X records. In the real data I'll be doing it every 100 or 1000, but for the data above perhaps every 5.

    If I could sort it by date I would do something like this:

        SELECT DATE_FORMAT(date, '%h%i') ym,
               COUNT(result) 'Total Games',
               SUM(result) as 'Score'
        FROM nn_log
        GROUP BY ym;

    I can't figure out a way of doing something similar with numbers. The order is sorted by date, but I hope to split the data up every x results. It's safe to assume there are no blank rows. With the data above you could do multiple selects like:

        SELECT SUM(result) FROM table LIMIT 0,5;
        SELECT SUM(result) FROM table LIMIT 5,5;
        SELECT SUM(result) FROM table LIMIT 10,5;

    That's obviously not a very good way to scale up to a bigger problem. I could just write a loop, but I'd like to reduce the number of queries.
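
    One standard trick for fixed-size buckets is to group on the integer division of the id (or of a running row counter, if the ids have gaps). A self-contained illustration via sqlite3 in Python -- the table and column names follow the question, and the bucket size of 5 matches the example:

        import sqlite3

        conn = sqlite3.connect(':memory:')
        conn.execute("CREATE TABLE nn_log (id INTEGER, result INTEGER)")
        conn.executemany("INSERT INTO nn_log VALUES (?, ?)",
                         [(i, -1) for i in range(121, 138)] + [(138, 1), (139, 0), (140, -1)])

        # (id - 121) / 5 is integer division on integers in SQLite, so ids 121-125
        # fall in bucket 0, 126-130 in bucket 1, and so on
        rows = conn.execute("""
            SELECT (id - 121) / 5 AS bucket, COUNT(result), SUM(result)
            FROM nn_log GROUP BY bucket ORDER BY bucket
        """).fetchall()
        print(rows)   # [(0, 5, -5), (1, 5, -5), (2, 5, -5), (3, 5, -2)]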

    Read the article

  • CSS overflow detection in JavaScript

    - by ring0
    In order to display a line of text (like in a forum), ending that line with "..." if the text would overflow the line (not truncating via the CSS overflow property), there are a number of ways. I'm still seeking the best solution. For instance:

        - Adding a "..." as a CSS background-image is a possibility, but it would appear all the time, and this is not a good solution.
        - Some sites just count the characters and, if over -- say -- 100, truncate the string to keep only 100 (or 97) chars and add "..." at the end. But fonts are usually proportional, so the result is not pretty: the space in pixels taken by "AAA" and "iii" is clearly different, even though both have the same number of characters.
        - There is another idea to get the exact size in pixels of the string: create a DIV in JavaScript, insert the text into it (via innerHTML, for instance), and measure the width (via .offsetWidth). I have not implemented this yet. However, I wonder if there could be any browser compatibility problems? Has anyone tried this solution?

    Other recommendations would be welcome.

    Read the article

  • What do you mean by expressiveness in a programming language?

    - by prosseek
    I see the word 'expressiveness' a lot when people want to stress that one language is better than another, but I don't see exactly what they mean by it.

        - Is it verbosity/succinctness? I mean, if one language can write something down more shortly than another, does that mean expressiveness? Please refer to my other question: http://stackoverflow.com/questions/2411772/article-about-code-density-as-a-measure-of-programming-language-power
        - Is it the power of the language? Paul Graham says that one language is more powerful than another in the sense that it can do things the other can't (for example, Lisp can do things with macros that other languages can't).
        - Is it just something that makes life easier? Regular expressions could be one example.
        - Is it a different way of solving the same problem -- something like SQL for the search problem?

    What do you think about the expressiveness of a programming language? Can you show expressiveness using some code? And what's the relationship between expressiveness and DSLs? Do people come up with DSLs to get expressiveness?

    Read the article

  • Why does the '#weight' property sometimes not have any effect in Drupal forms?

    - by Adrian
    Hello, I'm trying to create a node form for a custom type. I have organic groups and taxonomy both enabled, but want their elements to come out in a non-standard order. So I've implemented hook_form_alter and set the #weight property of the og_nodeapi subarray to -1000, but it still goes after taxonomy and menu. I even tried changing the subarray to a fieldset (to force it to actually be rendered), but no dice.

    I also tried setting $form['taxonomy']['#weight'] = 1000 (I have two vocabs so it's already being rendered as a fieldset), but that didn't work either. I set the weight of my module very high and confirmed in the system table that it is indeed the highest module on the site -- so I'm all out of ideas. Any suggestions?

    Update: While I'm not exactly sure how, I did manage to get the taxonomy fieldset to sink below everything else, but now I have a related problem that's hopefully more manageable to understand. Within the taxonomy fieldset I have two items (a tags field and a multi-select), and I wanted to add some instructions in hook_form_alter as follows:

        $form['taxonomy']['instructions'] = array(
          '#value' => "These are the instructions",
          '#weight' => -1,
        );

    You guessed it: this appears after the terms inserted by the taxonomy module. However, if I change this to a fieldset:

        $form['taxonomy']['instructions'] = array(
          '#type' => 'fieldset',        // <-- here
          '#title' => 'Instructions',   // <-- and here for good measure
          '#value' => "These are the instructions",
          '#weight' => -1,
        );

    then it magically floats to the top as I'd intended. I also tried textarea (this also worked) and explicitly saying markup (this did not). So basically, changing the type from "markup" (the default, IIRC) to "fieldset" has the effect of no longer ignoring its weight.

    Read the article

  • one two-directed tcp socket OR two one-directed? (linux, high volume, low latency)

    - by osgx
    Hello. I need to send (interchange) a high volume of data periodically with the lowest possible latency between 2 machines. The network is rather fast (e.g. 1Gbit or even 2G+). The OS is Linux. Will it be faster with 1 tcp socket (for both send and recv) or with 2 uni-directed tcp sockets?

    The test for this task is very like the NetPIPE network benchmark: measure latency and bandwidth for sizes from 2^1 up to 2^13 bytes, each size sent and received 3 times at least (in the real task the number of sends is greater; both processes will be sending and receiving, like ping-pong maybe).

    The benefit of 2 uni-directed connections comes from Linux: http://lxr.linux.no/linux+v2.6.18/net/ipv4/tcp_input.c#L3847

        3847 /*
        3848  * TCP receive function for the ESTABLISHED state.
        3849  *
        3850  * It is split into a fast path and a slow path. The fast path is
        3851  * disabled when:
        ...
        3859  * - Data is sent in both directions. Fast path only supports pure senders
        3860  *   or pure receivers (this means either the sequence number or the ack
        3861  *   value must stay constant)
        ...
        3863  *
        3864  * When these conditions are not satisfied it drops into a standard
        3865  * receive procedure patterned after RFC793 to handle all cases.
        3866  * The first three cases are guaranteed by proper pred_flags setting,
        3867  * the rest is checked inline. Fast processing is turned on in
        3868  * tcp_data_queue when everything is OK.

    All other conditions for disabling the fast path are false, so only a non-unidirectional socket stops the kernel from taking the fast path on receive.
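
    For the NetPIPE-style measurement itself, here is a minimal ping-pong sketch (Python for brevity; a serious low-latency test would be C with the same structure, and the host/port are placeholders -- the peer is assumed to echo each message back):

        import socket
        import time

        def pingpong(host, port, size, iters=3):
            s = socket.create_connection((host, port))
            s.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)  # disable Nagle batching
            payload = b'x' * size
            start = time.time()
            for _ in range(iters):
                s.sendall(payload)
                received = 0
                while received < size:          # wait for the full echo
                    received += len(s.recv(size - received))
            # round-trip time per message; halve for one-way latency
            return (time.time() - start) / iters

        for power in range(1, 14):              # 2^1 .. 2^13 bytes, as in the question
            print(2 ** power, pingpong('peer.example', 9000, 2 ** power))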

    Read the article

  • Alternative to using c:out to prevent XSS

    - by lynxforest
    I'm working on preventing cross-site scripting (XSS) in a Java, Spring-based web application. I have already implemented a servlet filter similar to this example, http://greatwebguy.com/programming/java/simple-cross-site-scripting-xss-servlet-filter/, which sanitizes all the input into the application.

    As an extra security measure I would like to also sanitize all output of the application in all JSPs. I have done some research on how this could be done and found two complementary options. One of them is the use of Spring's defaultHtmlEscape attribute. This was very easy to implement (a few lines in web.xml), and it works great when your output goes through one of Spring's tags (i.e. message, or the form tags).

    The other option is to not use EL expressions such as ${...} directly and instead use <c:out value="${...}" />. That second approach works perfectly; however, due to the size of the application I am working on (200+ JSP files), it is a very cumbersome task to replace all inappropriate uses of EL expressions with the c:out tag. It would also be a cumbersome task in the future to make sure all developers stick to this convention of using the c:out tag (not to mention how much less readable the code would be).

    Is there an alternative way to escape the output of EL expressions that would require fewer code modifications? Thank you in advance.

    Read the article

  • What is the minimal licensable source code?

    - by Hernán Eche
    Let's suppose I want to "protect" this code from being used without attribution, from being patented, or through any open-source licence...

        #include <stdio.h>

        int main(void)
        {
            int version = 2;
            printf("\r\n.Hello world, ver:(%d).", version);
            return 0;
        }

    It's a little obvious, or just a language-definition example. When does a source stop being "trivial, banal, commonplace, obvious" and start being something over which you may claim "rights"? Perhaps it depends on who reads it: something that could be great geniality for someone who has never programmed could be just obvious for an expert.

    It's easy when, comparing two sources, there are 10,000 identical lines of code -- that's a theft. But it's not always so obvious. How do you measure the amount of "ownness"? Is it about creativity? Line count? Complexity? I can't imagine objective answers for that, only some patches.

    For example, perhaps complexity: it's not fair to replace "years of engineering" with "copy and paste". But is there any objective index for objective determination of this subject? (In a funny way I imagine this criterion: if the licence is longer than the code, then there is no owner -- just to punish not caring about storage space and world resources =P)

    Read the article

  • Using JMX classes to notify on events over time

    - by Cincinnati Joe
    I've been looking at JMX for monitoring application and system metrics (partially because MBeans can be accessed by various tools such as JConsole). It would seem like the classes included with JMX would be useful for things like notification when metrics have exceeded thresholds. But I'm not sure they fit with the way I want to measure these over a specified time period.

    For example, let's say I want to notify an admin when the average CPU load is over 95% for more than 5 minutes. Is that something that can be done with a GaugeMonitor? From the docs, it doesn't seem quite suited for this, and I'm wondering if instead I should write my own MBean with the necessary logic.

    A more relevant example is when the login times for users exceed 10s over a period of 5 mins. Slightly different would be the last 20 logins taking more than 10s on average. Another case would be a process crashing 4+ times in an hour, or the request queue exceeding 15 for 5 mins. Are the JMX Monitor classes useful for this kind of thing?
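
    The windowed-threshold logic itself is small, whatever ends up hosting it. A sketch of the idea in plain Python (not the JMX Monitor API, which is exactly the asker's open question -- a custom MBean could wrap something like this):

        import collections
        import time

        class WindowedAverageMonitor:
            """Signal when the average of recent samples exceeds a threshold."""

            def __init__(self, window_seconds, threshold):
                self.window = window_seconds
                self.threshold = threshold
                self.samples = collections.deque()   # (timestamp, value) pairs

            def record(self, value, now=None):
                now = now if now is not None else time.time()
                self.samples.append((now, value))
                # drop samples that have aged out of the window
                while self.samples and self.samples[0][0] < now - self.window:
                    self.samples.popleft()
                average = sum(v for _, v in self.samples) / len(self.samples)
                return average > self.threshold      # True -> fire a notification

        # usage sketch (cpu_load and notify_admin are hypothetical hooks; a real
        # monitor would also wait until the window is fully populated):
        #   monitor = WindowedAverageMonitor(window_seconds=300, threshold=0.95)
        #   if monitor.record(cpu_load()):
        #       notify_admin()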

    Read the article

  • Increasing the time resolution of boost::progress_timer

    - by feelfree
    boost::progress_timer is a very useful class for measuring the running time of a function. However, the default implementation of progress_timer is not accurate enough, and a possible way of increasing the time resolution is to reconstruct a new class as the following code shows:

        #include <boost/progress.hpp>
        #include <boost/static_assert.hpp>

        template<int N=2>
        class new_progress_timer : public boost::timer
        {
        public:
            new_progress_timer(std::ostream &os = std::cout) : m_os(os)
            {
                BOOST_STATIC_ASSERT(N >= 0 && N <= 10);
            }

            ~new_progress_timer(void)
            {
                try
                {
                    std::istream::fmtflags old_flags =
                        m_os.setf(std::istream::fixed, std::istream::floatfield);
                    std::streamsize old_prec = m_os.precision(N);
                    m_os << elapsed() << "s\n" << std::endl;
                    m_os.flags(old_flags);
                    m_os.precison(old_prec);
                }
                catch (...)
                {
                }
            }

        private:
            std::ostream &m_os;
        };

    However, when I compile the code with VC10, the following error appears:

        'precison' : is not a member of 'std::basic_ostream<_Elem,_Traits>'

    Any ideas? Thanks.

    Read the article

  • ViewGroup{TextView,...}.getMeasuredHeight returns a value smaller than the real height

    - by Dewr
    getMeasuredHeight on a ViewGroup (one containing TextViews that hold long text without line-feed characters) returns a wrong value, smaller than the real height. How do I get rid of this problem?

    Here is the Java code:

        public static void setListViewHeightBasedOnChildren(ListView listView) {
            ListAdapter listAdapter = listView.getAdapter();
            if (listAdapter == null) {
                // pre-condition
                return;
            }
            int totalHeight = 0;
            int count = listAdapter.getCount();
            for (int i = 0; i < count; i++) {
                View listItem = listAdapter.getView(i, null, listView);
                listItem.measure(View.MeasureSpec.AT_MOST, View.MeasureSpec.UNSPECIFIED);
                totalHeight += listItem.getMeasuredHeight();
            }
            ViewGroup.LayoutParams params = listView.getLayoutParams();
            params.height = totalHeight
                + (listView.getDividerHeight() * (listAdapter.getCount() - 1));
            listView.setLayoutParams(params);
        }

    And here is list_item_comments.xml:

        <RelativeLayout xmlns:android="http://schemas.android.com/apk/res/android"
            android:layout_width="fill_parent"
            android:layout_height="fill_parent" >
            <RelativeLayout
                android:layout_width="fill_parent"
                android:layout_height="wrap_content"
                android:layout_centerVertical="true"
                android:layout_marginLeft="8dip"
                android:layout_marginRight="8dip" >
                <TextView
                    android:id="@+id/list_item_comments_nick"
                    android:layout_width="wrap_content"
                    android:layout_height="wrap_content"
                    android:textSize="16sp"
                    style="@style/WhiteText" />
                <TextView
                    android:id="@+id/list_item_comments_time"
                    android:layout_width="wrap_content"
                    android:layout_height="wrap_content"
                    android:layout_alignParentRight="true"
                    android:textSize="16sp"
                    style="@style/WhiteText" />
                <TextView
                    android:id="@+id/list_item_comments_content"
                    android:layout_width="fill_parent"
                    android:layout_height="wrap_content"
                    android:layout_below="@id/list_item_comments_nick"
                    android:autoLink="all"
                    android:textSize="24sp"
                    style="@style/WhiteText" />
            </RelativeLayout>
        </RelativeLayout>

    Read the article

  • SPSS - sum of squares change radically with slight model changes in ANOVA??

    - by Pat
    I have noticed that the sums of squares in my models can change fairly radically with even the slightest adjustment to my models. Is this normal? I'm using SPSS 16, and both models presented below used the same data and variables, with only one small change: categorizing one of the variables as either a 2-level or a 3-level variable.

    Details: using a 2 x 2 x 6 mixed-model ANOVA, with the 6 being the repeated measure, I get the following in the between-group analysis:

        ------------------------------------------------------------
        Source    | Type III SS | df | MS      | F      | Sig
        ------------------------------------------------------------
        intercept | 4086.46     | 1  | 4086.46 | 104.93 | .000
        X         | 224.61      | 1  | 224.61  | 5.77   | .019
        Y         | 2.60        | 1  | 2.60    | .07    | .80
        X by Y    | 19.25       | 1  | 19.25   | .49    | .49
        Error     | 2570.40     | 66 | 38.95   |        |
        ------------------------------------------------------------

    Then, when I use the exact same data but a slightly different model, in which variable Y has 3 levels instead of 2, I get the following:

        ------------------------------------------------------------
        Source    | Type III SS | df | MS      | F     | Sig
        ------------------------------------------------------------
        intercept | 3603.88     | 1  | 3603.88 | 90.89 | .000
        X         | 171.89      | 1  | 171.89  | 4.34  | .041
        Y         | 19.23       | 2  | 9.62    | .24   | .79
        X by Y    | 17.90       | 2  | 17.90   | .80   | .80
        Error     | 2537.76     | 64 | 39.65   |       |
        ------------------------------------------------------------

    I don't understand why variable X would have a different sum of squares simply because variable Y gets divided up into 3 levels instead of 2. This is also the case in the within-groups analysis. Please help me understand :D Thank you in advance. Pat

    Read the article

  • Quantifying the amount of change in a git diff?

    - by Alex Feinman
    I use git for a slightly unusual purpose: it stores my text as I write fiction. (I know, I know... geeky.)

    I am trying to keep track of productivity, and want to measure the degree of difference between subsequent commits. The writer's proxy for "work" is "words written", at least during the creation stage. I can't use a straight word count, as it ignores editing and compression, both vital parts of writing. I think I want to track:

        (words added) + (words removed)

    which will double-count (words changed), but I'm okay with that. It'd be great to type some magic incantation and have git report this distance metric for any two revisions. However, git diffs are patches, which show entire lines even if you've only twiddled one character on the line; I don't want that, especially since my 'lines' are paragraphs. Ideally I'd even be able to specify what I mean by "word" (though \W+ would probably be acceptable).

    Is there a flag to git-diff to give diffs on a word-by-word basis? Alternatively, is there a solution using standard command-line tools to compute the metric above?
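
    Recent git versions do have word-level diff output (git diff --word-diff), but the metric itself is also easy to compute directly. A sketch using Python's difflib, fed for instance by the contents of the file at two revisions; the word-splitting regex is the \W+ the asker suggests:

        import re
        from difflib import SequenceMatcher

        def word_distance(old_text, new_text):
            old_words = re.split(r'\W+', old_text)
            new_words = re.split(r'\W+', new_text)
            added = removed = 0
            for op, i1, i2, j1, j2 in SequenceMatcher(None, old_words, new_words).get_opcodes():
                if op in ('replace', 'delete'):
                    removed += i2 - i1          # words gone from the old revision
                if op in ('replace', 'insert'):
                    added += j2 - j1            # words new in the new revision
            return added + removed              # changed words count twice, as intended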

    Read the article

  • Can knowing C actually hurt the code you write in higher level languages?

    - by Jurily
    The question seems settled, beaten to death even. Smart people have said smart things on the subject. To be a really good programmer, you need to know C. Or do you?

    I was enlightened twice this week. The first time made me realize that my assumptions don't go further than my knowledge behind them, and given the complexity of software running on my machine, that's almost non-existent. But what really drove it home was this Slashdot comment:

        The end result is that I notice the many naive ways in which traditional C "bare metal"
        programmers assume that higher level languages are implemented. They make bad
        "optimization" decisions in projects they influence, because they have no idea how a
        compiler works or how different a good runtime system may be from the naive
        macro-assembler model they understand.

    Then it hit me: C is just one more abstraction, like all the others. Even the CPU itself is only an abstraction! I've just never seen it break, because I don't have the tools to measure it.

    I'm confused. Has my mind been mutilated beyond recovery, like Dijkstra said about BASIC? Am I living in a constant state of premature optimization? Is there hope for me, now that I realized I know nothing about anything? Is there anything to know, even? And why is it so fascinating that everything I've written in the last five years might have been fundamentally wrong?

    To sum it up: is there any value in knowing more than the API docs tell me?

    EDIT: Made CW. Of course this also means now you must post examples of the interpreter/runtime optimizing better than we do :)

    Read the article

  • User-defined margins in WPF printing

    - by MTR
    Most printing samples for WPF go like this:

        PrintDialog dialog = new PrintDialog();
        if (dialog.ShowDialog() == true)
        {
            StackPanel myPanel = new StackPanel();
            myPanel.Margin = new Thickness(15);

            Image myImage = new Image();
            myImage.Width = dialog.PrintableAreaWidth;
            myImage.Stretch = Stretch.Uniform;
            myImage.Source = new BitmapImage(new Uri("pack://application:,,,/Images/picture.bmp"));
            myPanel.Children.Add(myImage);

            myPanel.Measure(new Size(dialog.PrintableAreaWidth, dialog.PrintableAreaHeight));
            myPanel.Arrange(new Rect(new Point(0, 0), myPanel.DesiredSize));

            dialog.PrintVisual(myPanel, "A Great Image.");
        }

    What I don't like about this is that they always set the margin to a fixed value. But in PrintDialog the user has the option to choose an individual margin that no sample cares about. If the user selects a margin larger than the fixed margin set by the program, the printout is truncated.

    Is there a way to get the user-selected margin value from PrintDialog? TIA, Michael

    Read the article

  • CSS to specify positions over a scanned document

    - by itsols
    I'm trying to write the CSS rules to position text over a scanned document.

    Reason: The document is a pre-printed form. I am trying to position the text on-screen so that it relates to the 'spaces' on the actual form.

    Issue: Although I position the values using centimeters, they don't seem to be aligned with the ones on the actual page. I can see this misalignment since my scanned image is in the background of the page.

    What I've tried: I used a ruler to physically measure the locations and specified them with CSS, but on-screen it doesn't tally. I used the scanned image to position the CSS values, but then the printout is not correct. I even scaled the scanned page using Inkscape to the exact dimensions in centimeters and took into account all margins, etc.

    What I need: I am trying to correctly show the output values on-screen AND have them print correctly as well. I know that using two CSS sheets (one for print) is an option, but I'm developing this program away from where the actual printing is to be done. So is there a convenient way of matching the exact screen locations with those on the actual/final printout?

    Thanks!

    Read the article
