Search Results

Search found 21838 results on 874 pages for 'long double'.

Page 7/874

  • long polling vs streaming for about 1 update/second

    - by jcee14
    Is streaming a viable option? Will there be a performance difference on the server end depending on which one I choose? Is one better than the other for this case? I am working on a GWT application with Tomcat running on the server end. To understand my needs, imagine updating the stock prices of several stocks concurrently, at roughly one update per second.
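
    For reference, here is a minimal long-polling sketch, under the assumption that the Tomcat in question supports the Servlet 3.0 async API; every class and method name below is hypothetical, and the ~1/second price publisher is assumed to live elsewhere:

        import javax.servlet.AsyncContext;
        import javax.servlet.annotation.WebServlet;
        import javax.servlet.http.HttpServlet;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;
        import java.io.IOException;
        import java.util.Queue;
        import java.util.concurrent.ConcurrentLinkedQueue;

        // Hypothetical long-polling endpoint: the client re-requests after each response.
        @WebServlet(urlPatterns = "/prices", asyncSupported = true)
        public class PriceLongPollServlet extends HttpServlet {

            // Requests are parked here until the next price update is published.
            private final Queue<AsyncContext> waiting = new ConcurrentLinkedQueue<AsyncContext>();

            @Override
            protected void doGet(HttpServletRequest req, HttpServletResponse resp) {
                AsyncContext ctx = req.startAsync();
                ctx.setTimeout(30000);    // give up after 30 s; the client simply retries
                waiting.add(ctx);
            }

            // Called by whatever produces the updates (assumed to exist elsewhere).
            public void publish(String priceJson) {
                AsyncContext ctx;
                while ((ctx = waiting.poll()) != null) {
                    try {
                        ctx.getResponse().setContentType("application/json");
                        ctx.getResponse().getWriter().write(priceJson);
                    } catch (IOException ignored) {
                        // client went away; nothing to do
                    } finally {
                        ctx.complete();   // one response per request, unlike streaming
                    }
                }
            }
        }

    At roughly one update per second the extra request overhead of long polling is usually modest; streaming tends to pay off mainly at much higher update rates or with very large numbers of concurrent clients.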

    Read the article

  • java converting int to short

    - by changed
    Hi, I am calculating a 16-bit checksum on my data which I need to send to a server, where it has to be recalculated and matched against the provided checksum. The checksum value that I get is an int, but I only have 2 bytes for sending the value, so I cast the int to short when calling the shortToBytes method. This works fine while the checksum value is less than 32767; after that I get negative values. The thing is, Java does not have unsigned primitives, so I am not able to send values greater than the maximum signed short. How can I convert the int to a short and send it over the network without worrying about truncation and signed vs. unsigned values? On both sides I have a Java program running.

        private byte[] shortToBytes(short sh) {
            byte[] baValue = new byte[2];
            ByteBuffer buf = ByteBuffer.wrap(baValue);
            return buf.putShort(sh).array();
        }

        private short bytesToShort(byte[] buf, int offset) {
            byte[] baValue = new byte[2];
            System.arraycopy(buf, offset, baValue, 0, 2);
            return ByteBuffer.wrap(baValue).getShort();
        }
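
    One common way around this (a sketch only; the class and method names below are hypothetical, not from the post) is to keep the checksum in an int, write just its low 16 bits, and mask with 0xFFFF when reading it back, so the full unsigned range 0..65535 survives the round trip:

        import java.nio.ByteBuffer;

        public class UnsignedShortCodec {

            // Write the low 16 bits of the checksum; values 0..65535 survive the round trip.
            static byte[] checksumToBytes(int checksum) {
                return ByteBuffer.allocate(2).putShort((short) checksum).array();
            }

            // Read the two bytes back and mask off the sign extension to recover 0..65535.
            static int bytesToChecksum(byte[] buf, int offset) {
                return ByteBuffer.wrap(buf, offset, 2).getShort() & 0xFFFF;
            }

            public static void main(String[] args) {
                int checksum = 54321;                          // greater than Short.MAX_VALUE
                byte[] wire = checksumToBytes(checksum);
                System.out.println(bytesToChecksum(wire, 0));  // prints 54321
            }
        }

    The truncating cast and the mask are exact inverses for any value that fits in 16 bits, so nothing is lost as long as the checksum itself is a 16-bit quantity.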

    Read the article

  • perl Getopt::Long madness

    - by ennuikiller
    The following code works in one script, yet in another it only works if I specify the "--" end-of-options flag before specifying an option:

        my $opt;
        GetOptions(
            'help|h'   => sub { usage("you want help?? hahaha, hopefully your not serious!!"); },
            'file|f=s' => \$opt->{FILE},
            'report|r' => \$opt->{REPORT},
        ) or usage("Bad Options");

    In other words, the same code works in good.pl and bad.pl like so:

        good.pl -f
        bad.pl -- -f

    If I try bad.pl -f I get "unknown option: f". Does anyone have any clue as to what can cause this behavior? Thanks in advance!

    Read the article

  • Why unsigned int contained negative number

    - by Daziplqa
    Hi all, I am new to C. What I know about the unsigned numeric types (unsigned short, int and long) is that they hold positive numbers only, yet the following simple program successfully assigns a negative number to an unsigned int:

        /* prog4.c */

        #include <stdio.h>

        int main(void) {
            int v1 = 0, v2 = 0;
            unsigned int sum;

            v1 = 10;
            v2 = 20;

            sum = v1 - v2;

            printf("The subtraction of %i from %i is %i \n", v1, v2, sum);

            return 0;
        }

    The output is: The subtraction of 10 from 20 is -10
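
    The unsigned variable really does hold a large wrapped-around value (4294967286 for a typical 32-bit unsigned int); it is the %i conversion in printf that reinterprets those bits as signed. For illustration only, the same two's-complement reinterpretation can be shown from Java (assuming Java 8+ for Integer.toUnsignedLong):

        public class UnsignedWrap {
            public static void main(String[] args) {
                int sum = 10 - 20;                                // bit pattern 0xfffffff6
                System.out.println(sum);                          // -10         (bits read as signed)
                System.out.println(Integer.toUnsignedLong(sum));  // 4294967286  (same bits read as unsigned)
                System.out.println(Integer.toHexString(sum));     // fffffff6
            }
        }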

    Read the article

  • Using WCF to expose underlying process

    - by Steven
    I think I must be a little dull because I'm having so much difficulty with this. I use WCF for pretty much everything in-house; it's the most appropriate technology. I have a new Silverlight 3 app that connects to the WCF service, and that's working fine. Where the problem begins: because of the expense of creating the objects within this service, and the high degree to which individual objects are shared between clients, I want a console application that gathers/calculates/caches all the data for the service 24/7, with the service connecting to the console app (or whatever it is) and getting the pre-processed data. Think of it in terms of a stock reporting app (which it is): Person A has a portfolio of x, y, z; Person B has a portfolio of x, q, z, r. The service needs to provide updated metrics on how their portfolios are performing. So instead of processing person A and then person B every second, the app independently gathers the stock prices and position information into memory, and the service just queries the in-memory result. Thanks for your help, I really am feeling dumb right now.

    Read the article

  • How Long Can Same-Page Anchor Links (#) Be?

    - by Volomike
    What is the maximum number of characters that a same-page anchor tag link can be on all mainstream platform browsers released from IE6 on up? For instance, a link like: http://example.com/#a789c4d8ecb0ec2201444bfa64b04696aa2bbaa41eb331535d1dd6d219558a02968d5af97ae74359973163337ef9b09c65dd70d40c3c79a4169355ea92db45e21fe30550dce4987987237652a347b97759f2753b412ee50d4121d0f6382580b5a62d1e02921c39c252c5e4731e38fc295ad6abcb22613513c4fd7599ab10d3f9c970b9eb3ddf5b2cf233af25005298590ce798b28092cecdc6756c8205e9a0650826e42a184267d0bfb5e3d7b3d1c25e324fe6329cf7681ffae7c01c86d4a70 Note the # symbol above. Note I'm using XHTML transitional, if that matters any.

    Read the article

  • jquery .load taking too long after successive calls

    - by user560079
    I have the following jQuery script:

        <script>
        $(function(){
            $('#menu-change-div').on('click', 'a.change-content', function(e){
                e.preventDefault();
                $("#content").load($(this).attr("href"));
            });
        });
        </script>

    It dynamically loads content into my content div depending on which link was clicked in the menu. The problem I am having is that when I click multiple links, say 5-10 in a row, the load time goes from instant to 10 seconds or more, or the content does not load at all. Is there something in my function that is causing this? Thanks.

    Read the article

  • SQLite DB open time really long Problem

    - by sxingfeng
    I am using SQLite in C++ on Windows, and I have a DB about 60 MB in size. When I open the SQLite DB it takes about 13 seconds:

        sqlite3* mpDB;
        nRet = sqlite3_open16(szFile, &mpDB);

    If I close my application and reopen it, opening the DB takes less than 1 second. At first I thought it was the disk cache, so I preloaded the 60 MB DB file before the sqlite open, reading it with CFile. However, even after preloading, the first open is still very slow.

        BOOL CQFilePro::PreLoad(const CString& strPath)
        {
            boost::shared_array<BYTE> temp = boost::shared_array<BYTE>(new BYTE[PRE_LOAD_BUFFER_LENGTH]);
            int nReadLength;
            try
            {
                CFile file;
                if (file.Open(strPath, CFile::modeRead) == FALSE)
                {
                    return FALSE;
                }
                do
                {
                    nReadLength = file.Read(temp.get(), PRE_LOAD_BUFFER_LENGTH);
                } while (nReadLength == PRE_LOAD_BUFFER_LENGTH);
                file.Close();
            }
            catch (...)
            {
            }
            return TRUE;
        }

    My question is: what is the difference between the first open and the second open, and how can I speed up the SQLite open process?

    Read the article

  • Convert List to Double Array by LINQ (C#3.0)

    - by Newbie
    I have two lists, x1 and x2, of type double:

        List<double> x1 = new List<double> { 0.0330, -0.6463 };
        List<double> x2 = new List<double> { -0.2718, -0.2240 };

    I am using the code below to convert them to a two-dimensional double array:

        List<List<double>> xData = new List<List<double>> { x1, x2 };
        double[,] xArray = new double[xData.Count, xData[0].Count];
        for (int i = 0; i < xData.Count; i++)
        {
            for (int j = 0; j < xData[0].Count; j++)
            {
                xArray[i, j] = xData[i][j];
            }
        }

    Is it possible to do the same thing (i.e. convert the lists to an array) using LINQ? Using C# 3.0 and .NET Framework 3.5. I want to do this with LINQ because I am learning it, but I don't yet know enough to write it myself. Thanks.

    Read the article

  • SEO - How to Optimise For Long-Tail Queries

    There is a great deal of value in the long-tail of search. The long-tail is basically a query that is over three or four keywords long. Good examples of long-tail queries include "cheap flights to Japan May" or "buy back doors UK." Both of these terms exhibit a great deal of user intent - this means the users behind both terms are very far down the buying cycle and are looking for a website on which they can transact and buy a flight to Japan or purchase a back door.

    Read the article

  • Has form post behavior changed in modern browsers? (or How are double clicks handled by the browser)

    - by Alex Czarto
    Background: We are in the process of writing a registration/payment page, and our philosophy was to code all validation and error checking on the server side first, and then add client-side validation as a second step (unobtrusive jQuery). We wanted to prevent double clicks on the server side, so we wrote some locking, thread-safe code to handle simultaneous posts/race conditions. When we tried to test this, we realized that we could not cause a simultaneous post or race condition to occur.

    I thought that (in older browsers anyway) double clicking a submit button worked as follows:

    1. User double clicks the submit button.
    2. Browser sends a post for the first click.
    3. On the second click, the browser cancels/ignores the initial post and initiates a second post (before the first post has returned a response).
    4. Browser waits for the second post to return, ignoring the initial post's response.

    I thought that from the server side it looked like this: the server gets two simultaneous post requests, and executes and responds to them both (unaware that no one is listening to the first response). From our testing (Firefox 3.0, IE 8.0) this is what actually happens:

    1. User double clicks the submit button.
    2. Browser sends a post for the first click.
    3. Browser queues up the second click, but waits for the response to the first click.
    4. The response to the first click returns (and is ignored?).
    5. Browser sends a post for the second click.

    So from the server side: the server receives a single post, which it executes and responds to; then the server receives a second request, which it executes and responds to. My question is: has this always worked this way (and I'm losing my mind)? Or is this a new feature in modern browsers that prevents simultaneous posts from being sent to the server? It seems that for server-side double-click prevention we don't have to worry about simultaneous posts or race conditions, only about queued-up posts. Thanks in advance for any feedback/comments. Alex
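
    One widely used server-side defence, regardless of whether the duplicate posts arrive queued or truly in parallel, is a one-time synchronizer token. The sketch below is in Java purely for illustration (the post does not state the server stack), and every name in it is hypothetical:

        import javax.servlet.http.HttpServlet;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;
        import javax.servlet.http.HttpSession;
        import java.io.IOException;
        import java.util.UUID;

        // Synchronizer-token sketch: only the request that consumes the token is processed,
        // so a queued or parallel duplicate is rejected either way.
        public class PaymentServlet extends HttpServlet {

            private static final String TOKEN = "form.token";

            @Override
            protected void doGet(HttpServletRequest req, HttpServletResponse resp) throws IOException {
                HttpSession session = req.getSession();
                String token = UUID.randomUUID().toString();
                session.setAttribute(TOKEN, token);
                // Render the form with the token in a hidden field (real rendering omitted).
                resp.getWriter().write("<input type='hidden' name='token' value='" + token + "'/>");
            }

            @Override
            protected void doPost(HttpServletRequest req, HttpServletResponse resp) throws IOException {
                HttpSession session = req.getSession();
                String submitted = req.getParameter("token");
                synchronized (session) {                      // serialize the check within one session
                    String expected = (String) session.getAttribute(TOKEN);
                    if (expected == null || !expected.equals(submitted)) {
                        resp.sendError(HttpServletResponse.SC_CONFLICT, "Duplicate or stale submission");
                        return;
                    }
                    session.removeAttribute(TOKEN);           // consume the token exactly once
                }
                // ... process the registration/payment exactly once ...
                resp.getWriter().write("OK");
            }
        }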

    Read the article

  • Properly force SSL with .htaccess, no double authentication

    - by cwd
    I'm trying to force SSL with .htaccess on a shared host, which means I only have access to .htaccess and not to the vhost config. I know you can put a rule in the VirtualHost config file to force SSL, which will be picked up there (and acted upon first), preventing double authentication, but I can't get to that. Here's the progress I've made:

    Config 1. This works pretty well, but it does force double authentication if you visit http://site.com: once for http and then once for https. Once you are logged in, it automatically redirects http://site.com/page1.html to the https counterpart just fine:

        RewriteEngine On
        RewriteCond %{HTTPS} !=on
        RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]

        RewriteEngine on
        RewriteCond %{HTTP_HOST} !(^www\.site\.com*)$
        RewriteRule (.*) https://www.site.com$1 [R=301,L]

        AuthName "Locked"
        AuthUserFile "/home/.htpasswd"
        AuthType Basic
        require valid-user

    Config 2. If I add this to the top of the file, it works a lot better in that it switches to SSL before prompting for the password:

        SSLOptions +StrictRequire
        SSLRequireSSL
        SSLRequire %{HTTP_HOST} eq "site.com"
        ErrorDocument 403 https://site.com

    It's clever how it uses the SSLRequireSSL option and the ErrorDocument 403 to redirect to the secure version of the site. My only complaint is that if you try to access http://site.com/page1.html it redirects to https://site.com/. So it forces SSL without a double login, but it does not properly forward non-SSL resources to their SSL counterparts.

    Regarding the first config, Insyte mentioned that "using mod_rewrite to perform a simple redirect is a bit of overkill. Use the Redirect directive instead. It's possible this may even fix your problem, as I believe mod_rewrite rules are some of the last directives to be processed, just before the file is actually grabbed from the filesystem." I have had no luck finding a force-SSL configuration that uses the Redirect directive, so I have been unable to test this theory.

    Read the article

  • Double audio cd ripping weirdness

    - by jqno
    Since I installed Ubuntu 12.04, Rhythmbox, Banshee and Sound Juicer have started acting weird around double cd's, and specifically, cd #2 of said double cd. Sometimes, they will show the information of cd #1. Track names, durations, and even count are incorrect. Sometimes, they will first show the tracks for cd #1, then continue onto cd #2 if cd #2 has more tracks than #1. Sound Juicer seems to be unable to find any track durations at all, even for single cd's. Obviously, this is a pain when I'm trying to rip double cd's. And I have a fair number of them, which I want to rip. This happens on both my machines (a slightly aging iMac, and a 1-year-old Sony Vaio). However, on previous versions of Ubuntu, this never happened. All on the same machines. So I suspect 12.04 is using a different lib for extracting audio cd data. Just for kicks, I tried with Linux Mint 13, and there it works correctly, even though it claims to be based on Ubuntu 12.04 and therefore should be using (partially) the same software. So if the Mint guys can fix it, I should be able to do it too, right? So, my question: what changed in 12.04 that could cause this? And more importantly: what can I do to fix it?

    Read the article

  • Are long methods always bad?

    - by wobbily_col
    Looking around earlier, I noticed some comments about long methods being bad practice. I am not sure I always agree that long methods are bad, and would like opinions from others. For example, I have some Django views that do a bit of processing of the objects before sending them to the view, with one long method being 350 lines of code. My code is written so that it deals with the parameters (sorting/filtering the queryset), then bit by bit does some processing on the objects my query has returned. The processing is mainly conditional aggregation with rules complex enough that it can't easily be done in the database, so I have some variables declared outside the main loop which then get altered during the loop:

        variable_1 = 0
        variable_2 = 0
        for object in queryset:
            if object.condition_a and variable_2 > 0:
                variable_1 += 1
            # ... more conditions that alter the variables ...
        return queryset, context

    So according to the theory I should factor all the code out into smaller methods, so that the view method is a maximum of one page long. However, having worked on various code bases in the past, I sometimes find that this makes the code less readable: you need to constantly jump from one method to the next, figuring out all the parts of it, while keeping the outermost method in your head. I find that with a long method that is well formatted, you can see the logic more easily, as it isn't hidden away in inner methods. I could factor the code out into smaller methods, but often there is an inner loop being used for two or three things, so it would result in more complex code, or in methods that don't do one thing but two or three (alternatively I could repeat the inner loops for each task, but then there would be a performance hit). So is there a case where long methods are not always bad? Is there always a case for extracting methods when they will only be used in one place?

    Read the article

  • return an address of a double

    - by bks
    I'm having an issue understanding why the following works:

        void doubleAddr(double* source, double** dest)
        {
            *dest = source;
        }

    I get a pointer to a double and want to change the double that dest points to:

        // usage:
        int main()
        {
            double* num;
            double* dest;

            doubleAddr(num, &dest);

            return 0;
        }

    Thanks in advance.

    Read the article

  • Double Click to open Office docs is slow, File -> Open is fast.

    - by Keith
    I have 2 unique networks. They both share a similar architecture:

    - Windows 2003 SBS SP2
    - Running Symantec Endpoint
    - Running Symantec Information Foundation
    - Shared drives off a data partition
    - Clients running Office 2003 or 2007
    - Clients connect to the file server through mapped drives

    When users try to open a file from their local PC by double clicking, it takes 30-60 seconds to open. When they use File -> Open, the same documents open almost immediately. So far I've tried the following:

    - CCleaner to parse the registry for outdated mapped drives
    - Disabled "using DDE"
    - Disabled A/V
    - Reboot

    Any ideas beyond that? I figured this question belongs here instead of SU since it's the same issue on different networks.

    Read the article

  • java number exceeds long.max_value - how to detect?

    - by jurchiks
    I'm having problems detecting whether a sum/multiplication of two numbers exceeds the maximum value of a long integer. Example code:

        long a = 2 * Long.MAX_VALUE;
        System.out.println("long.max * smth > long.max... or is it? a=" + a);

    This gives me -2, while I would expect it to throw a NumberFormatException... Is there a simple way of making this work? I have some code that does multiplications in nested IF blocks, or additions in a loop, and I would hate to add more IFs to each IF or inside the loop.

    Edit: oh well, it seems that this answer from another question is the most appropriate for what I need: http://stackoverflow.com/a/9057367/540394 I don't want to do boxing/unboxing as it adds unnecessary overhead, and this way is very short, which is a huge plus to me. I'll just write two short functions to do these checks and return the min or max long.

    Edit 2: here's the function for limiting a long to its min/max value according to the answer I linked to above:

        /**
         * @param a : one of the two numbers added/multiplied
         * @param b : the other of the two numbers
         * @param c : the result of the addition/multiplication
         * @return the minimum or maximum value of a long integer if the addition/multiplication
         *         of a and b is less than Long.MIN_VALUE or more than Long.MAX_VALUE
         */
        public static long limitLong(long a, long b, long c) {
            return (((a > 0) && (b > 0) && (c <= 0)) ? Long.MAX_VALUE
                    : (((a < 0) && (b < 0) && (c >= 0)) ? Long.MIN_VALUE : c));
        }

    Tell me if you think this is wrong.
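
    If a Java 8 or later runtime is an option (an assumption; the post does not say which JDK is in use), Math.addExact and Math.multiplyExact throw an ArithmeticException on overflow, which is close to the exception-style behaviour the question hopes for. A minimal sketch:

        public class OverflowCheck {
            public static void main(String[] args) {
                // Wraps silently, exactly as described in the question: prints -2.
                System.out.println(2 * Long.MAX_VALUE);

                try {
                    // Math.multiplyExact / Math.addExact (Java 8+) throw on overflow.
                    long b = Math.multiplyExact(2L, Long.MAX_VALUE);
                    System.out.println(b);
                } catch (ArithmeticException e) {
                    System.out.println("overflow detected: " + e.getMessage());
                }
            }
        }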

    Read the article

  • Problem in List<double[,]>

    - by Newbie
    What is wrong with this (in C# 3.0):

        List<double> x = new List<double> { 0.0330, -0.6463, 0.1226, -0.3304, 0.4764, -0.4159,
                                            0.4209, -0.4070, -0.2090, -0.2718, -0.2240, -0.1275,
                                            -0.0810, 0.0349, -0.5067, 0.0094, -0.4404, -0.1212 };
        List<double> y = new List<double> { 0.4807, -3.7070, -4.5582, -11.2126, -0.7733, 3.7269,
                                            2.7672, 8.3333, 4.7023, 0, 0, 0, 0, 0, 0, 0, 0, 0 };
        List<double[,]> z = new List<double[,]> { x, y }; // this line

    The error produced is:

        Error: Argument '1': cannot convert from 'System.Collections.Generic.List<double>' to 'double[*,*]'

    Help needed.

    Read the article

  • Convert list to double[][] in C#3.0

    - by Newbie
    Given:

        List<double> x1 = new List<double> { -0.2718, -0.2240, -0.1275, -0.0810, 0.0349,
                                             -0.5067, 0.0094, -0.4404, -0.1212 };
        List<double> x2 = new List<double> { 0.0330, -0.6463, 0.1226, -0.3304, 0.4764,
                                             -0.4159, 0.4209, -0.4070, -0.2090 };

    how can I build a double[][] X at runtime (programmatically)? In other words, I want the same result I would get if I wrote:

        double[][] X =
        {
            new double[] { -0.2718, -0.2240, -0.1275, -0.0810, 0.0349, -0.5067, 0.0094, -0.4404, -0.1212 },
            new double[] { 0.0330, -0.6463, 0.1226, -0.3304, 0.4764, -0.4159, 0.4209, -0.4070, -0.2090 }
        };

    Using C# 3.0. Thanks.

    Read the article

  • The perils of double-dash comments [T-SQL]

    - by jamiet
    I was checking my Twitter feed on my way in to work this morning and was alerted to an interesting blog post by Valentino Vranken that highlights a problem regarding the OLE DB Source in SSIS. In short, using double-dash comments in SQL statements within the OLE DB Source can cause unexpected results. It really is quite an important read if you're developing SSIS packages, so head over to SSIS OLE DB Source, Parameters And Comments: A Dangerous Mix! and be educated. Note that the problem is solved in SSIS 2012, and Valentino explains exactly why. If reading Valentino's post has switched your brain into "learn mode", perhaps also check out my post SSIS: SELECT *... or select from a dropdown in an OLE DB Source component? which highlights another issue to be aware of when using the OLE DB Source. As I was reading Valentino's post I was reminded of a slide deck by Chris Adkin entitled T-SQL Coding Guidelines, in which he recommends never using double-dash comments. That's good advice! @Jamiet

    Read the article

  • Double-click instead of single-click in Ubuntu 12.04

    - by evfwcqcg
    When I do a single click, my computer (with Ubuntu 12.04) acts as if it were a double click. I think this happens in about 50% of cases, and with all the programs I use: browsers, the file manager, the terminal and so on. I'm not sure exactly when it started, maybe a week ago, and I don't remember whether it started after a system update or after installing some packages. What I have tried:

    - changing the mouse
    - changing mouse settings such as the Double-Click Timeout

    Neither of those helped. Any ideas? Thanks.

    Read the article
