Search Results

Search found 8268 results on 331 pages for 'difference'.

Page 126 of 331

  • Why did the team at LMAX use Java and design the architecture to avoid GC at all costs?

    - by kadaj
    Why did the team at LMAX design the LMAX Disruptor in Java, when all their design points to minimizing GC use? If one does not want the GC to run, why use a garbage-collected language? Their optimizations, their level of hardware knowledge and the thought they put in are just awesome, but why Java? I'm not against Java or anything, but why a GC language? Why not use something like D, or any other language without GC that allows efficient code? Is it that the team is most familiar with Java, or does Java possess some unique advantage that I am not seeing? Say they developed it in D with manual memory management; what would be the difference? They would have to think low-level (which they already do), but they could squeeze the best performance out of the system, as it's native.


  • Boot time seems unusually long on MSI GX660R (bootchart included)

    - by Sman789
    After upgrading (clean install) to Ubuntu 12.04, the slowness when running programs has lessened on my MSI GX660R laptop. However, the boot time is still much longer (over a minute, even after the BIOS) than on the many less powerful laptops I have encountered running the same OS, and I was wondering if anyone could help me improve it. I use the FGLRX driver, if that makes any difference. I have uploaded a boot chart; it can be found here: http://imageshack.us/photo/my-images/4/bootchartl.png/ As you can see, the boot time is over a minute even after the BIOS. A 'designed for Vista' laptop from ages ago, on which I installed Ubuntu, boots in around thirty seconds, so I think it's a bit strange.

    Output of dmesg: http://paste.ubuntu.com/1081359/
    Output of /var/log/kern.log: http://paste.ubuntu.com/1081363/
    Output of /var/log/syslog: http://paste.ubuntu.com/1081365/


  • How can I estimate the entropy of a password?

    - by Wug
    Having read various resources about password strength, I'm trying to create an algorithm that provides a rough estimate of how much entropy a password has. I'm trying to make the algorithm as comprehensive as possible. At this point I only have pseudocode, but the algorithm covers the following:

      - password length
      - repeated characters
      - patterns (logical)
      - different character spaces (LC, UC, numeric, special, extended)
      - dictionary attacks

    It does NOT yet cover the following, though it SHOULD cover them WELL (if not perfectly):

      - ordering (passwords can be strictly ordered by output of this algorithm)
      - patterns (spatial)

    Can anyone provide some insight on what this algorithm might be weak to? Specifically, can anyone think of situations where feeding a password to the algorithm would OVERESTIMATE its strength? Underestimations are less of an issue.

    The algorithm:

        // the password to test
        password = ?
        length = length(password)

        // unique character counts from password (duplicates discarded)
        uqlca = number of unique lowercase alphabetic characters in password
        uquca = number of unique uppercase alphabetic characters
        uqd   = number of unique digits
        uqsp  = number of unique special characters (anything with a key on the keyboard)
        uqxc  = number of unique special special characters (alt codes, extended-ascii stuff)

        // algorithm parameters, total sizes of alphabet spaces
        Nlca = total possible number of lowercase letters (26)
        Nuca = total uppercase letters (26)
        Nd   = total digits (10)
        Nsp  = total special characters (32 or something)
        Nxc  = total extended ascii characters that don't fit into other categories (idk, 50?)

        // algorithm parameters, pw strength growth rates as percentages (per character)
        flca = entropy growth factor for lowercase letters (.25 is probably a good value)
        fuca = EGF for uppercase letters (.4 is probably good)
        fd   = EGF for digits (.4 is probably good)
        fsp  = EGF for special chars (.5 is probably good)
        fxc  = EGF for extended ascii chars (.75 is probably good)

        // repetition factors. few unique letters == low factor, many unique == high
        rflca = (1 - (1 - flca) ^ uqlca)
        rfuca = (1 - (1 - fuca) ^ uquca)
        rfd   = (1 - (1 - fd  ) ^ uqd  )
        rfsp  = (1 - (1 - fsp ) ^ uqsp )
        rfxc  = (1 - (1 - fxc ) ^ uqxc )

        // digit strengths
        strength = (rflca * Nlca + rfuca * Nuca + rfd * Nd + rfsp * Nsp + rfxc * Nxc) ^ length
        entropybits = log_base_2(strength)

    A few inputs and their desired and actual entropy_bits outputs:

        INPUT            DESIRED        ACTUAL
        aaa              very pathetic  8.1
        aaaaaaaaa        pathetic       24.7
        abcdefghi        weak           31.2
        H0ley$Mol3y_     strong         72.2
        s^fU¬5ü;y34G<    wtf            88.9
        [a^36]*          pathetic       97.2
        [a^20]A[a^15]*   strong         146.8
        xkcd1**          medium         79.3
        xkcd2**          wtf            160.5

        *  these 2 passwords use shortened notation, where [a^N] expands to N a's.
        ** xkcd1 = "Tr0ub4dor&3", xkcd2 = "correct horse battery staple"

    The algorithm does realize (correctly) that increasing the alphabet size (even by a single character) vastly strengthens long passwords, as shown by the difference in entropy_bits for the 6th and 7th passwords, which are both 36 characters long: the first is 36 a's, and the second differs only in that its 21st character is a capital A. However, it does not account for the fact that a password of 36 a's is a bad idea: it's easily broken with a weak password cracker (and anyone who watches you type it will see it), and the algorithm doesn't reflect that. It does, however, reflect the fact that xkcd1 is a weak password compared to xkcd2, despite having greater complexity density (is this even a thing?). How can I improve this algorithm?
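    To make the arithmetic concrete, here is a minimal runnable Python transcription of the pseudocode above. The character-class sets, the extended-character handling, and all the constants are just the rough guesses from the pseudocode (not a vetted parameterization), so outputs will not exactly reproduce every row of the table:

        import math
        import string

        # (characters in the class, alphabet size N, entropy growth factor f),
        # using the rough guesses from the pseudocode
        CLASSES = [
            (set(string.ascii_lowercase),   26, 0.25),
            (set(string.ascii_uppercase),   26, 0.40),
            (set(string.digits),            10, 0.40),
            (set(string.punctuation + " "), 32, 0.50),
        ]

        def entropy_bits(password):
            unique = set(password)
            pool, known = 0.0, set()
            for chars, n, f in CLASSES:
                known |= chars
                uq = len(unique & chars)          # unique chars in this class
                pool += (1 - (1 - f) ** uq) * n   # repetition factor times N
            uqxc = len(unique - known)            # everything else is "extended"
            pool += (1 - (1 - 0.75) ** uqxc) * 50
            # strength = pool ** length; take log2 directly to avoid overflow
            return len(password) * math.log2(pool) if pool > 0 else 0.0

        print(round(entropy_bits("aaa"), 1))  # 8.1, matching the table row for 'aaa'

    With these constants the 'aaa' row is reproduced exactly; the other rows were evidently generated with somewhat different parameters, so treat the numbers as illustrative.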
    Addendum 1

    Dictionary attacks and pattern-based attacks seem to be the big thing, so I'll take a stab at addressing those. I could perform a comprehensive search through the password for words from a word list and replace each word with a token unique to the word it represents. Word-tokens would then be treated as characters and have their own weight system, and would add their own weights to the password. I'd need a few new algorithm parameters (I'll call them lw, Nw ~= 2^11, fw ~= .5, and rfw) and I'd factor the weight into the password as I would any of the other weights.

    This word search could be specially modified to match both lowercase and uppercase letters as well as common character substitutions, like that of E with 3. If I didn't add extra weight to such matched words, the algorithm would underestimate their strength by a bit or two per word, which is OK. Otherwise, a general rule would be: for each non-perfect character match, give the word a bonus bit.

    I could then perform simple pattern checks, such as searches for runs of repeated characters and derivative tests (take the difference between consecutive characters), which would identify patterns such as 'aaaaa' and '12345', and replace each detected pattern with a pattern token, unique to the pattern and length. The algorithmic parameters (specifically, entropy per pattern) could be generated on the fly based on the pattern.

    At this point, I'd take the length of the password. Each word token and pattern token would count as one character; each token would replace the characters it symbolically represents. I made up some sort of pattern notation that includes the pattern length l, the pattern order o, and the base element b. This information could be used to compute some arbitrary weight for each pattern; I'd do something better in actual code.

    Modified example:

        Password:          1234kitty$$$$$herpderp
        Tokenized:         1 2 3 4 k i t t y $ $ $ $ $ h e r p d e r p
        Words Filtered:    1 2 3 4 @W5783 $ $ $ $ $ @W9001 @W9002
        Patterns Filtered: @P[l=4,o=1,b='1'] @W5783 @P[l=5,o=0,b='$'] @W9001 @W9002

        Breakdown: 3 small, unique words and 2 patterns
        Entropy:   about 45 bits, as per modified algorithm

        Password:       correcthorsebatterystaple
        Tokenized:      c o r r e c t h o r s e b a t t e r y s t a p l e
        Words Filtered: @W6783 @W7923 @W1535 @W2285

        Breakdown: 4 small, unique words and no patterns
        Entropy:   43 bits, as per modified algorithm

    The exact semantics of how entropy is calculated from patterns is up for discussion. I was thinking something like:

        entropy(b) * l * (o + 1)  // o will be either zero or one

    The modified algorithm would find flaws with, and reduce the strength of, each password in the original table, with the exception of s^fU¬5ü;y34G<, which contains no words or patterns.
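    As a hypothetical sketch of the repeated-run and derivative tests described above (the @P token format and the minimum pattern length are just the made-up conventions from the example), in Python:

        def tokenize_patterns(password, min_len=3):
            # Replace repeated runs ('$$$$$', order o=0) and constant-step
            # sequences ('1234', order o=1) with @P[l=...,o=...,b=...] tokens.
            out, i = [], 0
            while i < len(password):
                j = i + 1
                step = ord(password[j]) - ord(password[i]) if j < len(password) else None
                if step in (0, 1):
                    # extend the run/sequence as long as the step stays constant
                    while j < len(password) and ord(password[j]) - ord(password[j - 1]) == step:
                        j += 1
                    if j - i >= min_len:
                        out.append("@P[l=%d,o=%d,b=%r]" % (j - i, step, password[i]))
                        i = j
                        continue
                out.append(password[i])
                i += 1
            return " ".join(out)

        print(tokenize_patterns("1234kitty$$$$$herpderp"))
        # @P[l=4,o=1,b='1'] k i t t y @P[l=5,o=0,b='$'] h e r p d e r p

    Word filtering would run as a separate pass in the same spirit, replacing dictionary hits with @W tokens before the remaining characters are weighted individually.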


  • Nice function for "rolling score up"?

    - by bobobobo
    I'm adding to the player's score, and I'm using a per-frame formula like:

        int score, displayedScore; // score is the ACTUAL score the player has;
                                   // displayedScore is what is shown this frame
                                   // (the creeping/"rolling" number)

        float disparity = score - displayedScore;
        int d = disparity * .1f;            // add 1/10 of the difference...
        if( !d ) d = signum( disparity );   // ...and the last 10 go by 1's
        displayedScore += d;                // advance the displayed number

    where:

        inline int signum( float val ) {
          if( val > 0 ) return 1;
          else if( val < 0 ) return -1;
          else return 0;
        }

    So it kind of works: it makes big changes rapidly, then creeps in the last few one at a time. But I'm looking for better (or possibly well-known?) score-creeping functions. Anyone?
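    One well-known alternative, sketched here in Python since the idea is language-agnostic (the function name and the half_life parameter are illustrative, not from any particular engine), is time-based exponential decay, which behaves the same at any frame rate:

        import math

        def rolled_score(displayed, actual, dt, half_life=0.25):
            # Close half of the remaining gap every half_life seconds,
            # independent of how long this frame's dt actually was.
            t = 1.0 - math.exp(-math.log(2.0) * dt / half_life)
            displayed += (actual - displayed) * t
            # Snap when close enough, so the counter actually finishes.
            if abs(actual - displayed) < 1.0:
                displayed = actual
            return displayed

    Big gaps close quickly while the tail still creeps, and the snap at the end keeps the display from hovering forever just below the target.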


  • How do I increase the touchpad sensitivity on Acer Aspire One 532h?

    - by Yossi Farjoun
    I got a cheap netbook and put Ubuntu on it without even booting into the Windows it came with. I am now slightly regretting it, since the trackpad is very annoying: it only registers when I press quite hard on it, and even then the motion is so slow that I must drag my poor finger three times to get the pointer to move up or down that tiny screen. OK, enough ranting; now to business. I went into the settings and increased sensitivity and acceleration to high. No difference; the behaviour did not change at all. So now my questions are: Is this a hardware problem? Is there a program I can run to see what input the trackpad is receiving, so I know whether it can, in theory, read a light touch and not only the heavy, sandpaper-your-fingertips-off touch? Is there some manual setting that the system might not be setting correctly and which I could change from the terminal?


  • What You Said: The First Things to Do After Installing a New OS

    - by Jason Fitzpatrick
    Earlier this week we asked you to share the steps you went through after installing a new operating system. You responded, and we rounded up your responses. Our Ask the Readers series gives you, the awesome How-To Geek reader, a chance to share your tips, tricks, and technological know-how with your fellow readers right on the front page. Every week we ask a question and every week we round up your tips to share. This week we're taking a look at your tips and tricks from What's the First Thing You Do After Installing a New OS.


  • Callbacks: when to return value, and when to modify parameter?

    - by MarkN
    When writing a callback, when is it best to have the callback return a value, and when is it best to have it modify a parameter? Is there a difference? For example, if we wanted to grab a list of dependencies, when would you do this:

        function GetDependencies() {
            return [{"Dep1": 1.1}, {"Dep2": 1.2}, {"Dep3": 1.3}];
        }

    And when would you do this?

        function RegisterDependencies(register) {
            register.add("Dep1", 1.1);
            register.add("Dep2", 1.2);
            register.add("Dep3", 1.3);
        }


  • Visual Studio 2010 - Faster Startup with /nosplash

    - by MikeParks
    I read a blog post the other day about the /nosplash switch in Visual Studio. Apparently it's been around a while and I'm a little late in finding out about it, so I figured I'd share it for those of you who come across this and didn't know about it either. Basically, all it does is turn off the Visual Studio splash screen, which speeds up initial startup time. It's not a big difference, but every little bit helps, and I choose speed over looks. You can set the switch by right-clicking your Visual Studio shortcut, selecting Properties, and adding /nosplash to the Target property, so that it looks like this:

        "C:\Program Files\Microsoft Visual Studio 10.0\Common7\IDE\devenv.exe" /nosplash

    Feel free to try it out and see if you like it; you can always change it back by removing the /nosplash switch when you're done testing. There are plenty more Visual Studio switches out there, but this is the main one that came in handy for me.

    - Mike


  • css - use universal '*' selector vs. html or body selector?

    - by Michael Durrant
    Applying styles to the body tag will apply them to the whole page, so body { font-family: Verdana } affects the whole page. This could also be done with * { font-family: Verdana }, which applies to all elements and so would seem to have the same effect. I understand the principle that in the first instance the style is applied to one tag, body, and inherited by the rest of the page, whereas in the second the font is applied to each individual HTML element. What I am asking is: what is the practical difference between doing one or the other, what are the implications, and what reason, situation, or best practice would lead to using one over the other? One side-effect is certainly speed (+1 Rob). I am most interested in the actual reason to choose one over the other in terms of functionality.


  • unit testing on ARM

    - by NomadAlien
    We are developing application-level code that runs on an ARM processor. The BSP (low-level code) is being delivered by a third party, so our code sits just on top of this abstraction layer (the code is written in C++). To do unit testing, I assume we will have to mock/stub out the BSP library (essentially abstracting out the HW), but what I'm not sure of is this: if I write/run the unit tests on my PC, do I compile them with, for example, GCC? Normally we use the RealView compiler to compile our code for the ARM. Can I assume that if I compile and run the code with an x86 compiler and the unit tests pass, they will also pass when compiled with the RealView compiler? I'm not sure how much difference the compiler makes, and whether you can trust that if the x86-compiled code passes the unit tests, the RealView-compiled code is OK too.


  • Shouldn't all source code be plain text? [on hold]

    - by user61852
    Some development environments/languages save the source code you write in a binary/proprietary format that you cannot see or edit with a generic text editor. I'm not talking about compiled code, but the source code. Examples include PowerBuilder and Oracle Forms. It's OK to use proprietary technology if you want, but not being able to open the source code you wrote in a simple editor, if only to read it, seems like a very strict form of vendor lock-in. It also prevents you from using text-based version control that can show you the difference between two versions on a line-by-line basis. If the code is plain text, you don't need a license just to open it, see it, and learn from it. Should it be a golden rule that, to avoid vendor lock-in, you should avoid technologies that save your source code as anything but plain text files?


  • How can I print-screen just one window and not my entire desktop?

    - by Michael Durrant
    I could swear I've always been able to press Alt+PrintScreen to capture 'just that window', but right now I am getting my entire desktop. Any idea why this would be, or what I can do to get back my ability to take single-window screenshots? I've tried a lot of combinations of the Ctrl, Alt, Shift and PrintScreen keys, but no luck; nothing happens in response. One option, Shift+Ctrl+PrintScreen, lets me make a selection using a crosshair to size out the screen capture, but I don't know where the result gets saved; I'm not given a choice, and it's not in Desktop or Pictures. I use an external keyboard, but I've tried using the laptop's own keyboard and there's no difference. I am running Ubuntu 12.04 and the laptop is a Samsung Ultrabook 900. Update: I rebooted and it "fixed" it, for now. However, this is not the first time I've seen this, so I'm still curious as to why it happens, what I can do to fix it without rebooting, and whether others share the same problem.


  • exchange live feed with pre-recorded video for wireless internet camera to router

    - by nate
    I wasn't sure if this should be asked in Web Applications, or Network Engineering, or what... Long story short: I have a video camera with a mic that is wirelessly connected to a router (NETGEAR R6200), and the camera can then be viewed through an online service. I would like to be able to somehow replace the live feed with a pre-recorded video or image, preferably with pre-recorded sound (the sound of silence would be easiest). Can I place this in between the camera and the router, or do I need to redirect the camera feed to my laptop first and then push the fake video/audio out to the router, without the service knowing the difference? Thanks much, and I hope this is well understood!


  • infer half vector length in BRDF

    - by cician
    It's my first question on Stack. Is it possible to infer the length of the half-angle vector for specular lighting from N·L and N·V, without the whole view and light vectors? I may be completely off-track, but I have this gut feeling it's possible... Why do I ask? I'm working on a skin shader, and I'm already doing one texture lookup with N·L+N·E and one texture lookup for specular with N·H+N·V. The latter could be transformed into an N·L+N·E lookup if only I had the half-vector length. Doing so could simplify the shader a bit and move some operations into the pre-computed lookup texture. It would make a huge difference, since I'm trying to squeeze as much functionality as possible into a single-pass mobile version, so instruction count matters. Thanks.
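    For reference, a quick sanity check on the geometry (assuming L and V are unit vectors, as is usual): the unnormalized half vector is L+V, and

        |L+V|^2 = (L+V)·(L+V) = L·L + 2(L·V) + V·V = 2 + 2(L·V)

    so the half-vector length is sqrt(2 + 2(L·V)). That depends on L·V, which N·L and N·V alone do not pin down (L can rotate around N without changing N·L), so some extra per-pixel information would be needed.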


  • Big lies about programmers [closed]

    - by gcc
    About computer engineering / computer science: can you give me a big lie? For example:

      - There is no need to attend school (study computer engineering), because anyone can code (write programs).
      - Programmers cannot do web design; they can only write code.
      - There is no difference between a software engineer and a coder.

    EDIT: A lie is a type of deception in the form of an untruthful statement, especially with the intention to deceive others. Why would someone try to deceive other people, especially customers? I think they try to prove they are real computer engineers without having a diploma in computer science. If you look at my question in that manner, you can easily understand what I want.


  • Newbie in ASP.Net

    - by dnvThai
    I am learning ASP.NET and I am confused by the differences between ASP.NET Web Pages, ASP.NET Web Forms and ASP.NET MVC. I have read a lot of articles and know the simple differences in what they do, but I don't know the differences in their code. E.g., when I look at int* p = new int(); I know that it's C++ style, and Dim A as String has to be Visual Basic. [?1] I'm not able to tell the ASP.NET flavours apart like that. How do they differ in code? I use Visual Studio 2010 Express Edition. I like to use C# (I also learned Visual Basic in school, but I don't like it). When I create a new project there are too many types of project, and I don't know which I should choose (I just want to make a simple site). [?2] What is each of them used for? Thanks


  • What is the aim of software testing?

    - by user970696
    Having read many books, there is a basic contradiction: some say "the goal of testing is to find bugs", while others say "the goal of testing is to assess the quality of the product", meaning that bugs are its by-products. I would also ask: if testing were aimed primarily at a bug hunt, who would do the actual verification and actually provide the information that the software is ready? Even Kaner, for example, changed his original definition of the goal of testing from bug hunting to providing a quality assessment, but I still cannot see the clear difference. I perceive both as equally important. I can verify software against its specification to make sure it works, and in that case the bugs found are just by-products. But I also perform tests just to break things. So which definition is more accurate?


  • How to fix slow wireless with Intel 4965 AGN? [closed]

    - by mikewhatever
    Possible Duplicate: Slow wireless with an Intel 4965

    We run Ubuntu 12.04, 32-bit, with the current kernel 3.2.27-generic on an MSI EX700. I've already added the 11n_disable=1 tweak, without which wireless has been unusable. Now it works OK, but speedtest shows:

        Windows XP   - down 11.68mbps, up 2.07mbps
        Ubuntu 12.04 - down  2.06mbps, up 2.0mbps

    We've disabled IPv6, tried static and dynamic IPs, and tried both the swcrypto=0 and swcrypto=1 options, none of which made any difference. The problem may be a symptom of high packet loss. For example, here's the output of iwconfig after booting and testing the speeds:

        wlan0  IEEE 802.11abg  ESSID:"amu"
               Mode:Managed  Frequency:2.462 GHz  Access Point: 00:78:9E:FA:32:C8
               Bit Rate=54 Mb/s  Tx-Power=15 dBm
               Retry long limit:7  RTS thr:off  Fragment thr:off
               Encryption key:off
               Power Management:off
               Link Quality=58/70  Signal level=-52 dBm
               Rx invalid nwid:0  Rx invalid crypt:0  Rx invalid frag:0
               Tx excessive retries:11  Invalid misc:3627  Missed beacon:0

    I've posted a help request before with lots of technical info and outputs.


  • Good Compression for Slow-mo Video

    - by marienbad
    What's the best way to deliver super-slow-motion video to the browser? This seems to me to be a special case, because with super-slow-mo video (such as 10,000 frames per second) the visual difference from frame to frame is minimal, and as such it's easy to compress highly. Please suggest codecs, as well as encoding software, backend software, software configuration tips, and services like YouTube. My goal is to get about 100 frames of QVGA video to the browser in 500KB. By the way, remember that Radiohead In Rainbows site?


  • Why is dd not a reliable command for writing bootable .iso files to a USB thumb drive?

    - by Samik
    As the answers here indicate, Ubuntu .isos are not expected to boot if copied with dd to a USB thumb drive. Now my question is: why is it that some Linux distributions have the option to directly write their bootable .iso file to a thumb drive with dd, while some (read: Ubuntu) do not (for Ubuntu, I think it has to be converted to .img first)? Is it due to some architectural difference in the .isos? Or is it due to a limitation of dd itself? I don't know if this is off-topic here; I can move it to a more proper place if the community thinks so or suggests one. Some explanation would be appreciated.


  • Skype installed or not installed, that is the question

    - by Merle
    After upgrading 11.10 to 12.04, I noticed that Skype 2.2.0.35 was no longer on the sidebar of icons. I found it in the Dash and it runs, but with no sound. I figured I'd check the Ubuntu Software Center and reinstall, but it indicates a different version, 2.2.0.35-0precise3, and says that Skype is not installed. Attempting to go ahead and install anyway errors out, saying that it can't install when Skype is already installed. Running sudo apt-get remove skype says "Package skype is not installed". I tried updating with apt-get, but that didn't make any difference. It seems like it would be best to straighten this all out so it's right; presumably, the newer version is the better one to have installed. Can anyone step me through how to do so?


  • Loading Texture2D is extremely slow on XBOX360

    - by AvrDragon
    I have ~100 sprites for each level in my XNA game. On Windows it takes ~2 seconds to load them all; unfortunately, on the Xbox 360 it takes ~30-60 seconds. Am I doing something wrong? Essentially, the loading code is just like this:

        Texture2D sprite1 = levelContent.Load<Texture2D>("images/level_1/my_sprite_1");
        ...
        Texture2D sprite100 = levelContent.Load<Texture2D>("images/level_1/my_sprite_100");

    (I use my own content manager for each level, to release all level-specific textures at once.) Of course I can reduce the number of sprites by using a sprite sheet, but that's extremely painful for me right now. Do I have a better option? And, just curious: why is there such a huge difference in image loading time?


  • movement of sprites with kinect and xna

    - by pablopp83
    I'm working on a project with the Kinect SDK and XNA 4.0. I need to take the position of the hands and draw a sprite over it. I'm doing it directly and, because of that, I get a "trembling hands" effect. So I was thinking of making the sprite move from the previous position to the new one, given in every frame by the new hand position. This way, the sprite does not jump from one position to another. This is working just fine, but I'm using a constant value for the velocity, and I would really like to use a variable velocity given by the difference between the previous and the new position. That is, if the hand moves more quickly in reality, the velocity will be higher. I really don't have a clue how to make this work. Can somebody point me in the right direction? Thanks.
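    One simple approach along those lines (a hypothetical sketch in Python, since the idea carries over to C#/XNA directly; the smoothing constant is illustrative) is to move the sprite a fixed fraction of the remaining distance each frame, so the step size is automatically proportional to how far the hand has moved:

        def follow(sprite_pos, hand_pos, smoothing=0.3):
            # Step toward the hand by a fraction of the remaining distance:
            # fast hand motion gives big steps, while jitter is heavily damped.
            x, y = sprite_pos
            hx, hy = hand_pos
            return (x + (hx - x) * smoothing,
                    y + (hy - y) * smoothing)

    Tuning smoothing trades responsiveness (closer to 1) against stability (closer to 0).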


  • Flash site loads slowly

    - by bogdanvursu
    I have a simple HTML page that embeds an SWF, which in turn downloads other XML, SWF and image files; the total count of requests reaches about 90. I am aware that it should take a while until the content is available, and I am OK with that. All the needed files are hosted by two different providers in the US: flashxml.net/monochrome-demo.html and u1.flashcomponents.net/samples/8751/index.html From two different countries in Europe, the content shows up a lot later (almost twice as late) from flashxml than from flashcomponents. I've done mtr tests; the ping difference is about 40ms and the flashxml server load is below 1. Do you have any other suggestions as to what I should look at?


  • Xubuntu LightDM shows blank screen half the time

    - by Sman789
    System info (will be amended if any more info is asked for): My laptop runs Xubuntu 12.10. As it has a solid-state drive, /tmp, /var/tmp, /var/log and /var/log/apt are set to tmpfs in the /etc/fstab file, in case this makes any difference. Problem: My problem is quite simple. Approximately 50% of boot attempts end with the mouse cursor on a black screen (presumably LightDM failing to load), forcing me to restart and try again. I can access the Ctrl+Alt+F1 terminal to reboot the machine, but it's very annoying having to boot and reboot two or three times before one works. Oh, and the problem is the same whether I use the Xubuntu or the Unity greeter. Thanks for any help you can give.

