Search Results

Search found 22794 results on 912 pages for 'bit cost'.

Page 20/912 | < Previous Page | 16 17 18 19 20 21 22 23 24 25 26 27  | Next Page >

  • Fastest way to calculate an X-bit bitmask?

    - by Virtlink
    I have been trying to solve this problem for a while, but couldn't with just integer arithmetic and bitwise operators. However, I think it's possible and it should be fairly easy. What am I missing? The problem: get an integer value of arbitrary length (this is not relevant to the problem) with its X least significant bits set to 1 and the rest set to 0. For example, given the number 31, I need an integer value which equals 0x7FFFFFFF (the 31 least significant bits are 1 and the rest are zeros). Of course, using a loop to OR a shifted 1 into an integer X times will do the job, but that's not the solution I'm looking for. It should be more in the direction of (X << Y - 1), thus using no loops.
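
    A minimal sketch of a loop-free approach (my own illustration, assuming a 64-bit unsigned type): the usual trick is (1 << X) - 1, with the caveat that shifting a value by its full width is undefined behaviour in C and C++, so X equal to 64 needs its own branch.

      #include <cstdint>

      // Returns a value whose x least significant bits are 1 and the rest 0.
      inline std::uint64_t low_bits_mask(unsigned x) {
          // Shifting a 64-bit value by 64 is undefined behaviour, so handle that case separately.
          return (x >= 64) ? ~std::uint64_t{0} : ((std::uint64_t{1} << x) - 1);
      }
      // Example: low_bits_mask(31) == 0x7FFFFFFF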

    Read the article

  • Characters problem in Bit.ly

    - by Fevos
    Hello. When I try to shorten a link containing a "#" or "&" character I get an exception. Is there a way to handle them? This is a sample of code that works: String shortUrl = bitly.getShortUrl("http://z"); // works. But if I add, for example, '&' or '%25' to the string it will produce an exception: String shortUrl = bitly.getShortUrl("http://z%26"); // exception - String shortUrl = bitly.getShortUrl("http://z&"); // exception. The getShortUrl function is from this Java class: http://github.com/finnjohnsen/BitlyAndroid/raw/master/src/com/finnjohnsen/bitlyandroid/test/BitlyAndroid.java Thanks

    Read the article

  • Bit manipulation, permutate bits

    - by triktae
    Hi. I am trying to write a loop that runs through all the different integers in which exactly 10 of the last 40 bits are set high and the rest are set low. The reason is that I have a map with 40 different values, and I want to sum all the different ways ten of these values can be multiplied. (This is just out of curiosity, so it's really the bit-manipulation loop that is of interest, not the sum as such.) If I were to do this with, e.g., 2 out of 4 bits, it would be easy to set them all manually: 0011 = 3, 0101 = 5, 1001 = 9, 0110 = 6, 1010 = 10, 1100 = 12. But with 10 out of 40 I can't seem to find a method to generate these efficiently. I tried, starting with 1023 (= 1111111111 in binary), to find a good way to manipulate this, but with no success. I've been trying to do this in C++, but it's really the general method (if any) that is of interest. I did some googling, but with little success; if anyone has a good link, that would of course be appreciated too. :)
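
    A common loop-free way to step from one pattern to the next value with the same number of set bits is the "lexicographically next bit permutation" trick (Gosper's hack); a sketch for the 10-out-of-40 case, assuming 64-bit unsigned arithmetic and a GCC/Clang-style count-trailing-zeros builtin:

      #include <cstdint>
      #include <cstdio>

      int main() {
          std::uint64_t v = 0x3FF;                                // lowest pattern: ten 1s in the bottom 10 bits
          const std::uint64_t last = std::uint64_t{0x3FF} << 30;  // highest pattern: ten 1s at the top of 40 bits
          std::uint64_t count = 0;
          for (;;) {
              ++count;            // 'v' is one of the C(40,10) masks here; use its set bits to pick map entries
              if (v == last) break;
              std::uint64_t t = v | (v - 1);                      // set all bits below the lowest set bit of v
              v = (t + 1) | (((~t & -~t) - 1) >> (__builtin_ctzll(v) + 1));  // Gosper's hack (use _BitScanForward64 on MSVC)
          }
          std::printf("%llu patterns\n", (unsigned long long)count);         // expect C(40,10) = 847660528
      }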

    Read the article

  • Are bit operations quick?

    - by flashnik
    I'm dealing with a problem which needs to work with a lot of data. Currently its values are represented as unsigned int. I know that the real values do not exceed some limit, say 1000. That means I can use unsigned short to store them. One benefit is that it will use less space. Do I have to pay for that with a loss in performance? Another assumption: I decide to store the data as short, but all the calling functions use int, so I need to convert between these datatypes when storing/extracting values. Will the performance loss be dramatic? A third assumption: because I am keen to economize on memory, I decide to use not short but just 10 bits packed into an array of unsigned int. What will happen in this case compared with the previous ones?
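
    For the third scenario, a rough sketch (my own illustration, not from the post) of what packing 10-bit values into an array of unsigned 32-bit words involves, assuming values never exceed 1023; each access costs a few shifts and masks, which is the price paid for storing 10 bits per value instead of 32:

      #include <cstdint>
      #include <vector>

      // n values of at most 10 bits each, packed back to back into 32-bit words.
      class Packed10 {
      public:
          explicit Packed10(std::size_t n) : words_((n * 10 + 31) / 32) {}

          void set(std::size_t i, std::uint32_t v) {                     // v must be < 1024
              const std::size_t bit = i * 10, w = bit / 32, off = bit % 32;
              std::uint64_t pair = load_pair(w);
              pair = (pair & ~(std::uint64_t{0x3FF} << off)) | (std::uint64_t(v & 0x3FF) << off);
              words_[w] = std::uint32_t(pair);
              if (w + 1 < words_.size()) words_[w + 1] = std::uint32_t(pair >> 32);
          }

          std::uint32_t get(std::size_t i) const {
              const std::size_t bit = i * 10, w = bit / 32, off = bit % 32;
              return std::uint32_t((load_pair(w) >> off) & 0x3FF);
          }

      private:
          // Loads two adjacent words so a value straddling a word boundary is handled uniformly.
          std::uint64_t load_pair(std::size_t w) const {
              const std::uint64_t hi = (w + 1 < words_.size()) ? words_[w + 1] : 0;
              return std::uint64_t(words_[w]) | (hi << 32);
          }
          std::vector<std::uint32_t> words_;
      };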

    Read the article

  • c - difficulties with bit operations

    - by hatorade
    I'm debugging a program with GDB. unsigned int example = ~0; gives me: (gdb) x/4bt example 0xffd99788: 10101000 10010111 11011001 11111111 Why is this not all 1s? I defined it as ~0. Then the next line of code is: example>>=(31); and GDB gives me this when I try to examine the memory as bits: (gdb) x/4bt example 0xffffffff: Cannot access memory at address 0xffffffff What is going on?
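
    Independent of what the debugger prints, a quick in-program check (a sketch) confirms that ~0 assigned to an unsigned int really is all ones, and that shifting it right by 31 leaves a single 1, since right shift of an unsigned value is a logical shift:

      #include <cstdio>

      int main() {
          unsigned int example = ~0u;
          for (int i = 31; i >= 0; --i)                // print the 32 bits, most significant first
              std::putchar(((example >> i) & 1u) ? '1' : '0');
          std::putchar('\n');                          // prints 32 ones
          example >>= 31;
          std::printf("%u\n", example);                // prints 1
      }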

    Read the article

  • Do bit operations cause programs to run slower?

    - by flashnik
    I'm dealing with a problem which needs to work with a lot of data. Currently its values are represented as an unsigned int. I know that the real values do not exceed a limit of 1000. Questions: I can use unsigned short to store them; an upside to this is that it will use less storage space. Will performance suffer? If I decide to store the data as short but all the calling functions use int, I recognize that I need to convert between these datatypes when storing or extracting values. Will performance suffer? Will the loss in performance be dramatic? If I decide to use not short but just 10 bits packed into an array of unsigned int, what will happen in this case compared with the previous ones?
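
    For the first two questions, a sketch of the usual pattern (my own illustration): store the narrow type internally and keep int at the interfaces. The short-to-int widening is a single zero-extension and the int-to-short store is a truncation, both normally negligible next to the memory-traffic saving, assuming the values really stay below 1024:

      #include <cstdint>
      #include <vector>

      class Samples {
      public:
          void push(int v) { data_.push_back(static_cast<std::uint16_t>(v)); }  // truncating store
          int  at(std::size_t i) const { return data_[i]; }                     // widening load (zero-extension)
      private:
          std::vector<std::uint16_t> data_;                                     // 2 bytes per value instead of 4
      };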

    Read the article

  • bit ordering and endianness

    - by Neeraj
    I am reading a file byte by byte. Say, for example, I have this byte: 0x41 (0100 0001 in binary), written here in hex. Now, I want the first three bits of this byte, i.e. (010). I can use bitwise logic to extract the first three bits, but my question is: will the first three bits be independent of the endianness of the machine (i.e. they can't be 001)? Thanks,
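
    A sketch of the point at issue: once the byte has been read into a variable, shifts and masks operate on its value, so the result does not depend on the machine's byte order (endianness affects how multi-byte objects are laid out in memory, not the bits within a single byte):

      #include <cstdio>

      int main() {
          unsigned char byte = 0x41;                 // 0100 0001
          unsigned top3 = (byte >> 5) & 0x7u;        // the three most significant bits
          std::printf("%u\n", top3);                 // prints 2 (binary 010) regardless of endianness
      }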

    Read the article

  • C Population Count of unsigned 64-bit integer with a maximum value of 15

    - by BitTwiddler1011
    I use a population count (Hamming weight) function intensively in a Windows C application and have to optimize it as much as possible in order to boost performance. In more than half the cases where I use the function I only need to know the value up to a maximum of 15. The software will run on a wide range of processors, both old and new. I already make use of the POPCNT instruction when Intel's SSE4.2 or AMD's SSE4a is present, but would like to optimize the software implementation (used as a fallback if no SSE4 is present) as much as possible. Currently I have the following software implementation of the function: inline int population_count64(unsigned __int64 w) { w -= (w >> 1) & 0x5555555555555555ULL; w = (w & 0x3333333333333333ULL) + ((w >> 2) & 0x3333333333333333ULL); w = (w + (w >> 4)) & 0x0f0f0f0f0f0f0f0fULL; return int((w * 0x0101010101010101ULL) >> 56); } So to summarize: (1) I would like to know if it is possible to optimize this for the case when I only want to know the value up to a maximum of 15. (2) Is there a faster software implementation (for both Intel and AMD CPUs) than the function above?
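
    One common way to exploit the "value capped at 15" case (a sketch, not from the original post): clear the lowest set bit repeatedly and stop as soon as the cap is reached, so sparse words finish early and dense words are simply reported as 15. Whether this beats the branch-free version above depends on the typical bit density, so it is worth measuring on the target CPUs:

      #include <cstdint>

      // Returns min(popcount(w), cap). Kernighan's trick: w &= w - 1 clears the lowest set bit.
      inline int population_count_capped(std::uint64_t w, int cap) {
          int n = 0;
          while (w != 0 && n < cap) {
              w &= w - 1;
              ++n;
          }
          return n;
      }
      // population_count_capped(w, 15) is exact whenever the true count is at most 15.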

    Read the article

  • OWB 11gR2 - Windows and Linux 64-bit clients available

    - by David Allan
    In addition to the integrated release of OWB in the 11.2.0.3 Oracle database distribution, the following 64-bit standalone clients are now available for download from Oracle Support. OWB 11.2.0.3 Standalone client for Windows 64-bit - 13365470 OWB 11.2.0.3 Standalone client for Linux X86 64-bit - 13366327 This is in addition to the previously released 32-bit client on Windows. OWB 11.2.0.3 Standalone client for Windows 32-bit - 13365457 The support document Major OWB 11.2.0.3 New Features Summary has details for OWB 11.2.0.3, which include the following. Exadata v2 and Oracle Database 11gR2 support capabilities: support for Oracle Database 11gR2 and Exadata compression types; even more partitioning: Range-Range, Composite Hash/List, System, Reference; Transparent Data Encryption support; Data Guard support/certification; compiled PL/SQL code generation. Capabilities to support data warehouse ETL best practices: read and write Oracle Data Pump files with external tables; external table preprocessor; partition-specific DML; bulk data movement code templates: Oracle, IBM DB2, Microsoft SQL Server to Oracle. Integration with Fusion Middleware capabilities: support for OWB's Control Center Agent on WLS. Lots of interesting capabilities in 11.2.0.3, and the availability of the 64-bit client is, I'm sure, welcome news for many!

    Read the article

  • SQL SERVER – Fix: Error: 8117: Operand data type bit is invalid for sum operator

    - by pinaldave
    Here is a very interesting error I received from a reader. He had a very interesting question. He attempted to use a BIT field in the SUM aggregate function and got the following error. He went ahead with various other datatypes (i.e. INT, TINYINT, etc.) and was able to do the SUM, but with BIT he faced the problem. Error received: Msg 8117, Level 16, State 1, Line 1 Operand data type bit is invalid for sum operator. Reproduction of the error: Set up the environment USE tempdb GO -- Preparing Sample Data CREATE TABLE TestTable (ID INT, Flag BIT) GO INSERT INTO TestTable (ID, Flag) SELECT 1, 0 UNION ALL SELECT 2, 1 UNION ALL SELECT 3, 0 UNION ALL SELECT 4, 1 GO SELECT * FROM TestTable GO The following script will work fine: -- This will work fine SELECT SUM(ID) FROM TestTable GO However, the following generates an error: -- This will generate error SELECT SUM(Flag) FROM TestTable GO The workaround is to convert or cast the BIT to INT: -- Workaround of error SELECT SUM(CONVERT(INT, Flag)) FROM TestTable GO Clean up the setup -- Clean up DROP TABLE TestTable GO Workaround: As shown in the script above, the workaround is to convert the BIT datatype to another SUM-friendly datatype such as INT or TINYINT. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • how can I get a 32-bit program to run on 64-bit Ubuntu?

    - by Carol
    Sorry to be asking this, but I have read quite a few posts and articles in a lot of places regarding the issue I am having, to no avail. I am trying to get a Second Life viewer (Firestorm) to run, and I just keep getting the '64-bit error message' it throws. I have installed every 32-bit lib I can find, and it still doesn't work. I think I must be missing some setting somewhere, or running Firestorm from the wrong place, or something, but I have no idea what. FWIW, Firestorm loads but doesn't behave right in the 32-bit version either. I have actually tried several Linux distros, 32- and 64-bit. Mint 32-bit runs it straight off, and Mint 64-bit throws the '64-bit error'. openSUSE, any version, won't run it at all. Oh, and all the other SL viewers I have tried behave the same way. I am beginning to wonder if my set-up just doesn't like Linux. Here is my system info: CPU: Intel(R) Core(TM) i5 CPU 750 @ 2.67GHz (2661 MHz) Memory: 4026 MB OS Version: Linux 3.2.0-29-generic-pae #46-Ubuntu SMP Fri Jul 27 17:25:43 UTC 2012 i686 Graphics Card Vendor: ATI Technologies Inc. Graphics Card: ATI Radeon HD 5700 Series I appreciate any help anyone can give me! Thanks so much! Carol :)

    Read the article

  • How to do a cost-benefit analysis for platform-level features?

    - by Callister Park
    I work on a development team that works closely with Product Managers. There is mutual agreement between the developers and the Product Managers that there should be a business case behind every feature the development team builds. My question is: what is an effective way to make a business case for platform-level features that have a higher up-front cost but will provide long-term benefits? For example, the development team would like to implement a plug-in framework. There is a higher up-front cost to implement a plug-in framework, but delivering the subsequent features as plug-ins will be cheaper in the long run. This is obvious to everyone, including the Product Managers. Is there a standard or simple way to express the cost-benefit trade-off? Is there a simple way to visualize it with a graph?
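
    One simple way to express it (an illustrative sketch, not an established standard): treat the framework as an up-front investment and state the break-even point as a number of features,

      break-even N = C_framework / (c_feature_without_framework - c_feature_as_plugin)

    so a framework that costs 30 developer-days and makes each subsequent feature 3 days cheaper pays for itself after 10 features. Plotting cumulative cost against the number of features delivered, with and without the framework, shows the crossing point on a single graph.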

    Read the article

  • Running 32-bit assembly code on a 64-bit Linux & 64-bit processor: explain the anomaly

    - by claws
    Hello, I have run into an interesting problem. I forgot I'm using a 64-bit machine & OS and wrote 32-bit assembly code. I don't know how to write 64-bit code. This is the x86 32-bit assembly code for the GNU Assembler (AT&T syntax) on Linux. #include <asm/unistd.h> #include <syscall.h> #define STDOUT 1 .data hellostr: .ascii "hello wolrd\n"; helloend: .text .globl _start _start: movl $(SYS_write) , %eax //ssize_t write(int fd, const void *buf, size_t count); movl $(STDOUT) , %ebx movl $hellostr , %ecx movl $(helloend-hellostr) , %edx int $0x80 movl $(SYS_exit), %eax //void _exit(int status); xorl %ebx, %ebx int $0x80 ret Now, this code should run fine on a 32-bit processor & 32-bit OS, right? As we know, 64-bit processors are backward compatible with 32-bit processors, so that also shouldn't be a problem. The problem arises because of differences in the system call numbers & calling mechanism between a 64-bit OS & a 32-bit OS. I don't know why, but they changed the system call numbers between 32-bit Linux & 64-bit Linux. asm/unistd_32.h defines: #define __NR_write 4 #define __NR_exit 1 asm/unistd_64.h defines: #define __NR_write 1 #define __NR_exit 60 Anyway, using macros instead of hard-coded numbers paid off; it ensures the correct system call numbers. When I assemble, link & run the program, it doesn't print hello world. In gdb it shows: Program exited with code 01. I don't know how to debug in gdb; using a tutorial I tried to step through it instruction by instruction, checking the registers at each step, but it always shows "Program exited with code 01". It would be great if someone could show me how to debug this. (gdb) break _start Note: breakpoint -10 also set at pc 0x4000b0. Breakpoint 8 at 0x4000b0 (gdb) start Function "main" not defined. Make breakpoint pending on future shared library load? (y or [n]) y Temporary breakpoint 9 (main) pending. Starting program: /home/claws/helloworld Program exited with code 01. (gdb) info breakpoints Num Type Disp Enb Address What 8 breakpoint keep y 0x00000000004000b0 <_start> 9 breakpoint del y <PENDING> main I tried running strace. This is its output: execve("./helloworld", ["./helloworld"], [/* 39 vars */]) = 0 write(0, NULL, 12 <unfinished ... exit status 1> Can you explain the parameters of the write(0, NULL, 12) system call in the strace output? What exactly is happening? Why exactly is it exiting with exit status 1? Can someone please show me how to debug this program using gdb? Why did they change the system call numbers? How should this program be changed so that it runs correctly on this machine?
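
    Not the assembly-level fix, but a short C sketch (an illustration, assuming glibc's syscall(2) wrapper) of why using the SYS_* macros matters: the same source picks up write = 4 / exit = 1 when built for i386 and write = 1 / exit = 60 when built for x86-64, because the macros expand differently per ABI.

      #include <sys/syscall.h>
      #include <unistd.h>

      int main() {
          const char msg[] = "hello world\n";
          // SYS_write expands to the correct number for whichever ABI this file is compiled for.
          syscall(SYS_write, 1, msg, sizeof msg - 1);
          syscall(SYS_exit, 0);
      }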

    Read the article

  • Best criteria for choosing a DNS provider: Redundancy, Locations, Cost, IPv6, Reliability

    - by antoinet
    What criteria should I use to choose a good DNS provider? Redundancy - Your DNS service should use at least 4 nameservers. You should also check for the use of anycast servers, such as the Amazon Route 53 and dyn.com services. Worldwide server location - Servers should be located worldwide, not just in one country! IPv6 support - It should be possible to declare an AAAA record for your server if it supports IPv6. Cost is of course an issue; some services are free, and Amazon Route 53 seems quite cheap. Reliability - An SLA is also important; it demonstrates that reliability is measured, and your DNS provider should then offer a refund if a failure is encountered. Anything else? For reference, a couple of links for more information: http://serverfault.com/questions/216330/why-should-i-use-amazon-route-53-over-my-registrars-dns-servers http://aws.amazon.com/route53/ http://dyn.com/dns/

    Read the article

  • New Year's Resolution: Highest Availability at the Lowest Cost

    - by margaret.hamburger(at)oracle.com
    Don't miss this Webcast: Achieve 24/7 Cloud Availability Without Expensive Redundancy Event Date: 01/11/2011 10:00 AM Pacific Standard Time You'll learn how Oracle's Maximum Availability Architecture and Oracle Database 11g help you: Achieve the highest availability at the lowest cost Protect your systems from unplanned downtime Eliminate idle redundancy Register Now!

    Read the article

  • Free or Low Cost Web Hosting for Small Website [duplicate]

    - by etangins
    This question already has an answer here: How to find web hosting that meets my requirements? 5 answers I have a small website (between 2,000 and 10,000 page views a day). I'm looking for a free or low-cost web host. I tried 50webs.com, but their server keeps breaking down. So as not to cause debate, I am also just looking for links to good information sources about web hosting, if simply finding a good web host is too general a question. I currently only use HTML, CSS, and JavaScript, though I'm considering learning PHP and other more advanced languages to step up my game.

    Read the article

  • The cost of longer delay between development and QA

    - by Neil N
    At my current position, QA has become a bottleneck. We have had the unfortunate occurrence of features being held out of the current build so that QA could finish testing. This means features that are done being developed may not get tested for 2-3 weeks after the developer has already moved on. With dev moving faster than QA, this time gap is only going to get bigger. I keep flipping through my copy of Code Complete, looking for a "Hard Data" snippet that shows the cost of fixing defects grows exponentially the longer they exist. Can someone point me to some studies that back up this concept? I am trying to convince the powers that be that the QA bottleneck is a lot more costly than they think.

    Read the article

  • The practical cost of swapping effects

    - by sebf
    I use XNA for my projects, and on those forums I sometimes see references to the fact that swapping an effect for a mesh has a relatively high cost, which surprises me, as I thought swapping an effect was simply a case of copying the replacement shader program to the GPU along with the appropriate parameters. I wondered if someone could explain exactly what is costly about this process? And put, if possible, 'relatively' into context? For example, say I wanted to use a short shader to help with picking; I would: change the effect on every object, calculating a unique color to identify it and providing it to the shader; draw all the objects to a render target in memory; get the color from the target and use it to look up the selected object. What portion of the total time taken to complete that process would be spent swapping the shaders? My instinct says that rendering the scene again, no matter how simple the shader, would be an order of magnitude slower than any other part of the process, so why all the concern over effects?
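
    The usual mitigation, independent of the API (a generic sketch, not XNA-specific code): group or sort draw calls by effect, so the number of shader/state switches per frame is bounded by the number of distinct effects rather than the number of objects:

      #include <algorithm>
      #include <vector>

      struct DrawCall { int effectId; int meshId; };   // hypothetical handles

      void drawAll(std::vector<DrawCall>& calls) {
          // Sorting by effect turns up to N effect switches into at most one per distinct effect.
          std::sort(calls.begin(), calls.end(),
                    [](const DrawCall& a, const DrawCall& b) { return a.effectId < b.effectId; });
          int current = -1;
          for (const DrawCall& c : calls) {
              if (c.effectId != current) {
                  current = c.effectId;   // bindEffect(current);  <- the expensive state change happens here
              }
              // drawMesh(c.meshId);       <- issue the draw with the currently bound effect
          }
      }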

    Read the article

  • Any significant performance cost to using BlendState.Premultiplied?

    - by Donutz
    Normally I guess you'd use BlendState.AlphaBlend, because when you load your textures through the content pipeline they're already premultiplied. However, if you're loading textures at runtime from PNGs or some such, you have to loop through the pixels and premultiply them, which can take a long time if you've got a lot of textures to load. So it looks (I haven't tried it) like using BlendState.Premultiplied instead of BlendState.AlphaBlend should handle non-premultiplied textures and produce the same visual result, without all the startup cost. I have to wonder if there's a non-obvious cost to doing this, like a huge drop in performance or something. Anyone know?
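
    For reference, premultiplying at load time is just scaling each colour channel by alpha (which is what the content pipeline normally does offline); a sketch of the per-pixel operation, assuming 8-bit straight-alpha RGBA:

      #include <cstdint>

      // Converts one straight-alpha RGBA pixel to premultiplied alpha in place.
      inline void premultiply(std::uint8_t rgba[4]) {
          const unsigned a = rgba[3];
          for (int c = 0; c < 3; ++c)
              rgba[c] = static_cast<std::uint8_t>((rgba[c] * a + 127) / 255);  // rounded scale by alpha
      }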

    Read the article

  • Cost of maintenance depending on paradigms

    - by Anto
    Is there any data on which paradigms allow for code which is easier or cheaper to maintain? Certainly, independently of the chosen paradigm, a good design is cheaper to maintain than a bad one, but there should probably be major differences coming only from the paradigm choice. Unstructured programming, for instance, generates very messy code (spaghetti code), which is expensive to maintain. In object-oriented programming, implementation details are hidden, and thus it should be pretty cheap to change them. In functional programming, there are no side effects, so there is less risk of introducing bugs during maintenance, which should make it cheaper. Is there any data on which paradigms are the most cost-efficient when it comes to maintenance? If no such data exists, what is your take on the question?

    Read the article

  • Cloud computing cost savings for large enterprise

    - by user13817
    I'm trying to understand whether cloud computing is meant only for small to medium-sized companies or also for large companies. Imagine a website with a very large user base. The storage and bandwidth demands, as well as the number of database transactions, are incredibly high. The website might be hosting videos, music, images, etc. that keep the demands high. Does it make sense to be in the cloud when you know you need huge volumes of storage, bandwidth, and GET, PUT, etc. requests? (Each of these variables costs money in the cloud.) Or does it make sense to build your own infrastructure? I can see the cost savings of cloud computing if you are a small business, but if you were aiming at the next big thing on the Internet, I can't quite see the benefits.

    Read the article
