Search Results

Search found 740 results on 30 pages for 'processors'.


  • Best SQL Server Configuration with this hardware.

    - by DavidStein
    I just received my new SQL Server from Dell. The server will serve approximately 15 OLTP databases which average 10GB in size. Here are the basic specs:
      Dell PowerEdge R510 with up to 12 hot-swap HDDs, LED diagnostics
      Intel Xeon E5649 2.53GHz, 12M cache, 5.86 GT/s QPI, 6 cores (quantity of 2)
      48GB memory (6x8GB), 1333MHz dual-ranked RDIMMs for 2 processors, optimized
      PERC H700 integrated RAID controller, 1GB NV cache
      300GB 15K RPM SAS 6Gbps 3.5in hotplug hard drive (quantity of 4)
      600GB 15K RPM SAS 6Gbps 3.5in hotplug hard drive (quantity of 6)
    My first thought was to use three arrays:
      OS: RAID 1, (2) 300GB
      T-Log: RAID 1, (2) 300GB
      DB: RAID 5, (5) 600GB
      Backup: (1) 600GB, non-RAIDed
    However, I could also do the following after purchasing one more drive for backup:
      OS and T-Log: RAID 10, (4) 300GB
      DB: RAID 10, (6) 600GB
    Hard drive space is not an issue, as the databases are not that large. I'm just trying to optimize the speed of the applications using these databases. So, what would you guys recommend?

    Read the article

  • SQL Server 2008 Optimization

    - by hgulyan
    I learned today that if you append OPTION (MAXDOP 0) to your query, it can run on multiple processors, and a large query may perform faster. I know the general guidelines on query optimization (using indexes, selecting only the needed fields, etc.); my question is about SQL Server optimization itself, perhaps changing some configuration options or anything else. What guidelines are there for SQL Server optimization? Thank you. P.S. I suppose this is not the right place to ask server-related questions. Should I delete it, or can it be migrated to Server Fault?

    Read the article

  • Video issues with old server running Boxee

    - by Skaughty
    I have an old Dell PowerEdge 1650 1U rackmount server running Windows XP, with dual 1.3 GHz processors and 1 GB of RAM. I would like to run Boxee on it, but the 8 MB integrated video card will not support it; the font within Boxee just comes up as blocks of color. I have a PCI Radeon R9000 dual-output video card, but when I plug it in I cannot get a video signal on either of the VGA outputs. Is it possible to upgrade the video card on a server this old, or is it possible to make Boxee work with the older integrated video?

    Read the article

  • Web App Server hardware question. Which configuration?

    - by JBeckton
    I am pricing some new servers and I am not sure which configuration to get. The server will be running several web applications for our company; some of them are ASP.NET sites and some are ColdFusion. The OS will be Windows Server 2008 Web or Standard Edition. Do I need two processors, or will a single quad-core handle it? A multi-core Xeon with or without Hyper-Threading? I am going 64-bit so I can use more than 4 GB of RAM. I am shopping at Dell and there are so many options; I want to get the most bang for my buck without going over budget, and I also don't want the machine to be mostly underutilized.

    Read the article

  • ubuntu server 10 - slow and can't remove desktop environment

    - by Alex
    I'm running Ubuntu Server 10.10 with the desktop environment installed. Simple page requests take over 5 seconds, even when connecting to the server through our local network. I believe this is partially related to having the desktop environment installed (the server worked faster without it, though still not as fast as it should, considering it's on the local network), but tasksel fails every time I try to remove it (aptitude failed 100). My knowledge of networking and Linux in general is limited, so I would really appreciate ideas on how to troubleshoot this problem. Also, in the system monitor, one of the processors is almost always around 100%; I doubt this is normal either.

    Read the article

  • How to get Windows Server 2008 VM to use multiple cores

    - by David Fraser
    I have a Windows Server 2008 machine running in VirtualBox. On initial installation, only one processor was made available, but now I want to run it as a multiprocessor machine. I have made all four cores available in the VirtualBox settings (as well as enabling VT-x/AMD-V and Nested Paging), but Task Manager still only shows one CPU. However, the four CPU cores are visible in Device Manager under Processors. In the event log on startup, I can see the following relevant events:
      EventLog.6009: Microsoft (R) Windows (R) 6.00.6002 Service Pack 2 Multiprocessor Free
      Kernel-Processor-Power.4: Processor 0 exposes the following: 1 idle state(s), 0 performance state(s), 0 throttle state(s)
      Kernel-Processor-Power.4: Processor 255 exposes the following: 0 idle state(s), 0 performance state(s), 0 throttle state(s) (logged three times)
    How can I make this system actually boot up as a multiprocessor machine?

    Read the article

  • Gateway MX6440 CPU Upgrade

    - by BPugh
    I have received a Gateway MX6440 laptop as a freebie, but I'm interested in upgrading its AMD Turion 64 ML-32 (socket 754) to something faster (and with more cache). I know the range of processors that could work based on the family list on Wikipedia. However, this computer has the stock BIOS, and the updates from Gateway that I haven't applied don't specify processor support. I'm looking to go to at least a 2.2 GHz (ML-40). Has anybody upgraded the processor in this model (or others in the series), with success or failure, and do you happen to have any guides handy for working with the heat sink? Any Googling I have done keeps hitting RAM marketers. Update: the computer died before I had a chance to try this out.

    Read the article

  • Will Software RAID And iSCSI Work For A SAN

    - by Justin
    I am looking for a SAN solution, but can't afford even entry-level solutions. Basically, the SAN is for development and a proof-of-concept product. The performance doesn't have to be amazing, but it needs to be functional. My buddy says we should just set up software RAID and software iSCSI in Linux. Essentially I have a spare server with dual Xeon processors, 4GB of memory, and (2) 500GB 7200RPM drives. It's a bit old, but working. I am sure there is a reason people don't do software RAID and iSCSI, but will the performance be usable? I'm thinking of configuring the drives in RAID 0 (for performance).

    Read the article

  • How can I be sure that the motherboard is dead and it's not another issue?

    - by Peter
    So I have an old computer with an Intel D101GGC motherboard, a Pentium 4 CPU and an Award BIOS; it emits a long repeating beep and does not pass POST. I have tried: checking the RAM (it is good and works in other computers); and replacing the PSU with a tested one (still the same issue). I assumed the CPU was dead and tested two others, both Celeron D processors; I also tested this computer's CPU in another machine and it works fine. I also tried booting without RAM and still get the same long beep. I did all this after disconnecting all other hardware such as the HDD and DVD drives. The only time I got no beeps was after removing the CPU, and I don't know if a processor is needed for the beeps to work. So my questions are: is it normal to get no beeps without a CPU installed? Am I missing something, or is the motherboard dead? Thanks in advance.

    Read the article

  • cpufreq not available 11.10

    - by code shogan
    On 11.04 I had cpufreq working on my "AMD Turion(tm) 64 X2 Mobile Technology TL-50 stepping 02" processor; however, now on Oneiric cpufreq won't load. The core temperature of my CPU is normally 40°C, but lately it's cooking away at 75-80+°C and the fan is always extremely loud, even with CPU usage at 0.4%. After running dmesg | grep -i cpu I got:
      Brought up 2 CPUs
      Switch to broadcast mode on CPU1
      Switch to broadcast mode on CPU0
      Switched to NOHz mode on CPU #1
      Switched to NOHz mode on CPU #0
      ACPI: acpi_idle registered with cpuidle
      cpufreq-nforce2: No nForce2 chipset.
      cpuidle: using governor ladder
      cpuidle: using governor menu
      powernow-k8: Found 1 AMD Turion(tm) 64 X2 Mobile Technology TL-50 (2 cpu cores) (version 2.20.00)
    I see something about governors and ladder there; does this mean the OS is able to scale my CPUs or not? If so, is there a way I can confirm it's working? I saw that for other users the wrong module had been loaded, and by disabling it they were able to get cpufreq loaded. How can I tell which scaling module is loaded? Stats: Ubuntu Oneiric 32-bit, Dell Inspiron 1501.

    Read the article

  • How does the build quality of laptops compare?

    - by pgwillia
    I'm looking to replace my 5-year-old laptop, and I want my next laptop to last at least as long. I typically have Thunderbird, Firefox, the Eclipse Java IDE, Skype, an ssh session, and Apache Tomcat running. I'm currently running Karmic Ubuntu, but am agnostic about the operating system and would move to Win 7 or OS X. I frequently travel with this computer. I also value battery longevity and power conservation (if possible). Above all, I'm looking to minimize cost. I think the hardware that best meets my needs is an Intel i7 processor, 8 GB RAM, a 100GB 7200rpm or SSD hard drive, and about a 15-inch screen. These specs are met by most brands. Does anyone know specific pros/cons and the build quality for the MacBook Pro, Lenovo ThinkPad (W510 or T510), Sony's VPC-F1190, and the ASUS G Series G73JH-X1 notebook? Are all i7 processors created equal? Do you have other suggestions that meet my needs?

    Read the article

  • Is It Possible to Change Default Windows Idle Time for Task Scheduler?

    - by alharaka
    From the official Microsoft docs: "Detecting the Idle State: The Task Scheduler service will verify that the computer is in an idle state every 15 minutes. The computer is considered idle if all the processors and all the disks were idle for more than 90% of the past 15 minutes and if there is no keyboard or mouse input during this period of time. Besides, any presentation type application that sets the ES_DISPLAY_REQUIRED flag will make Task Scheduler to not consider the system as being idle. In Windows 7, Task Scheduler considers a processor as idle even when low priority threads (thread priority < normal) execute." Is there any way to change the interval to less than 15 minutes? Am I right to assume this is hard-coded and impossible? My Google-fu has failed me so far and I found nothing, but I wanted to check here before giving up.

    Read the article

  • ODI 11g – Faster Files

    - by David Allan
    Deep in the trenches of ODI development I raised my head above the parapet to read a few odds and ends and then thought: why don't they know this? Such as this article here – in the past customers (see forum) were told to use a staging route, which has a big overhead for large files. This KM is an example of the great extensibility capabilities of ODI. It's quite simple, just a new KM that improves the out-of-the-box experience (just build the mapping and the appropriate KM is used) and improves out-of-the-box performance for file-to-file data movement. This improvement for out-of-the-box handling of File to File data integration cases (from the 11.1.1.5.2 companion CD onwards) dramatically speeds up file integration. In the past I had seen some consultants write perl versions of the file-to-file integration case; now Oracle ships this KM to fill the gap. You can find the documentation for the IKM here. The KM uses pure java to perform the integration, using java.io classes to read and write the file in a pipe – it uses java threading in order to super-charge the file processing, and can process several source files at once when the datastore's resource name contains a wildcard. This is a big step for regular file processing on the way to super-charging big data files using Hadoop – the KM works with the lightweight agent and regular filesystems. So in my design below, transforming a bunch of files, the IKM File to File (Java) knowledge module was assigned by default. I pointed the KM at my JDK (since the KM generates and compiles java), and I also increased the thread count to 2 to take advantage of my 2 processors. For my illustration I transformed (you can also filter if desired) and moved about 1.3GB with 2 threads in 140 seconds (with a single thread it took 220 seconds) - by no means was this on any supercomputer, by the way. The great thing here is that it worked well out of the box from design to execution without any funky configuration. Plus - and it's a big plus - it was much faster than before. So if you are doing any file-to-file transformations, check it out!
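
    As a rough illustration of the approach described above (this is not the KM's actual code, which ships with ODI, but a minimal sketch of the technique it describes): plain Java that copies every file matching a suffix through buffered java.io streams, using a small thread pool. The two-thread pool mirrors the thread-count option tuned in the post; the class and argument names are invented for the example.

        import java.io.*;
        import java.util.concurrent.*;

        public class ParallelFileCopy {
            // Copies one file through a buffered byte pipe, the style of I/O the KM description mentions.
            static void copy(File src, File dstDir) throws IOException {
                try (InputStream in = new BufferedInputStream(new FileInputStream(src));
                     OutputStream out = new BufferedOutputStream(
                             new FileOutputStream(new File(dstDir, src.getName())))) {
                    byte[] buf = new byte[64 * 1024];
                    for (int n; (n = in.read(buf)) != -1; ) {
                        out.write(buf, 0, n);
                    }
                }
            }

            // Usage: java ParallelFileCopy <srcDir> <suffix> <dstDir>
            public static void main(String[] args) throws Exception {
                File srcDir = new File(args[0]);
                String suffix = args[1];                 // stands in for the datastore's wildcard
                File dstDir = new File(args[2]);
                File[] files = srcDir.listFiles((d, name) -> name.endsWith(suffix));
                if (files == null) return;
                ExecutorService pool = Executors.newFixedThreadPool(2); // 2 threads, as in the post
                for (File f : files) {
                    pool.submit(() -> { copy(f, dstDir); return null; });
                }
                pool.shutdown();
                pool.awaitTermination(1, TimeUnit.HOURS);
            }
        }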

    Read the article

  • Does F# kill C++?

    - by MarkPearl
    Okay, so the title may be a little misleading… but I am currently travelling and so have had very little time and access to resources to do much fsharping – which means that right now I am missing my favourite new language. I was interested to see this post on Stack Overflow this evening concerning the performance of the F# language. The person posing the question asked 8 key points about the F# language, namely:
      How well does it do floating-point?
      Does it allow vector instructions?
      How friendly is it towards optimizing compilers?
      How big a memory footprint does it have?
      Does it allow fine-grained control over memory locality?
      Does it have capacity for distributed-memory processors, for example Cray?
      What features does it have that may be of interest to computational science where heavy number processing is involved?
      Are there actual scientific computing implementations that use it?
    Now, I don't have much time to look into a decent response and to be honest I don't know half of the answers to what he is asking, but it was interesting to see what has been put up as an answer so far, and it would be interesting to get other people's feedback on these questions if they know of anything beyond what has already been covered in the answer section.

    Read the article

  • Lesser known Ubuntu desktop applications

    - by becomingGuru
    So, the Ubuntu Software Center comes with hundreds of applications of all types. In this version they have disabled ratings, making it hard to tell how good an application is. I found gnome-shell today, which seemed awesome. There are other, less well-known ones too; for example, AbiWord is far better than the OpenOffice.org word processor in many ways (although I don't like word processors themselves). What are the other less well-known applications that you use and like? One application per answer.

    Read the article

  • Setting up Cluster Configuration using an existing web server as a Primary Node?

    - by RapidWebs
    Thanks in advance for any help! I am having a slight issue and need help with the decision-making process when it comes to setting up my cluster configuration, consisting of a line of Ubuntu Servers (12.04). We currently have a primary node, which resides in the US within a datacenter; we are going to be using it for all serious bandwidth- and resource-intensive websites, and through a configuration of Virtualmin + Webmin it will be set up as a sort of pseudo-cluster, using Virtualmin's cluster modules. Anyway, on to the issue: we also have a business line set up locally, with three servers. Here are their specs:
      Intel P4 2.4 GHz, 1GB RAM, 110GB SATA, Ubuntu 12.04
      AMD 1.3 GHz, 512MB RAM, 20GB IDE
      P3 Xeon 800MHz (dual physical processors), 1GB RAM, 3 x 25GB RAID configuration (one in use for the host operating system)
    The first machine is currently IN USE and is serving virtual hosts off a subdomain. My question is this: how can I integrate the secondary node (which will be the primary node, per se, in this smaller configuration), which is currently in use, into the cluster configuration with the other two servers for:
      Sharing resources
      Redundancy (HA?)
      NFS with the two RAID disks
    without having to FORMAT the secondary node, start fresh by moving all my services into a DRBD network drive or something similar, and then restore all active Virtualmin virtual hosts? The idea is that I want minimal downtime for people currently being served from server2.mywebsite.com, and from what I understand, all services need to be on an NFS so that they can be mounted on demand and accessed from the other machine taking over (i.e. a Heartbeat + DRBD configuration). But my issue is that I already have all these services installed in their default directory structure: how can I most easily set up this NFS and HA system, move all my desired services to this new drive, and do it with minimal downtime, without breaking Virtualmin and everything else on my server? Even just some pointers, a thread I could read, or a step-by-step checklist or rundown of commands I could issue to get started would be great! Thanks!

    Read the article

  • Rules to choose hardware for OLTP systems (sql server)

    - by Roman Pokrovskij
    OK. We know the database size, the number of concurrent users, and the number of transactions per minute; we should choose the number of processors, the RAID layout, RAM, mirroring and clustering. There are no exact rules... but maybe there are no rules at all? In my practice, in every case I have a "legacy" system, and after some inspection and interviews I can form an opinion on how the hardware and design can be improved. But every time I meet an "absolutely" new system (I guess there are no truly new systems, but sometimes such tasks come up) I can't say anything trustworthy. So I'm interested in how people deal with such tasks. Do they map the task onto their experience, or do they have some base formulas?

    Read the article

  • OpenGL sprites and point size limitation

    - by Srdan
    I'm developing a simple particle system that should perform well on mobile devices (iOS, Android). My plan was to use the GL_POINT_SPRITE/GL_PROGRAM_POINT_SIZE method because of its efficiency (GL_POINTS are enough), but after some experimenting I found myself in trouble: sprite size is limited (usually to 64 pixels). I'm calculating size using the formula gl_PointSize = in_point_size * some_factor / distance_to_camera to make particle sizes proportional to the distance to the camera. But at some point, when the camera is close enough, the size limitation kicks in and the whole system starts looking unrealistic. Is there a way to avoid this problem? If not, what's the alternative? I was thinking of manually generating a billboard quad for each particle. Now, I have some questions about that approach. I guess the minimum geometry data would be four vertices per particle and an index array to make quads from these vertices (with GL_TRIANGLE_STRIP). Additionally, for each vertex I need a color and a texture coordinate; I would put all of that in an interleaved vertex array. But as you can see, there is a lot of redundancy: all vertices of the same particle share the same color value, and the four texture coordinates are the same for all particles. Because of how glDrawArrays/Elements works, I see no way to optimise this. Do you know of a better way to organise per-particle data (see the sketch below)? Should I use buffers or vertex arrays, or is there no difference, since each frame I have to update all the particle data anyway? About the particle simulation... where should it run: on the CPU or on the vertex processors? Something tells me a mobile CPU would do it faster than its vertex unit (at least today, in 2012 :). So, any advice on how to make a simple and efficient particle system without the particle size limitation, for mobile devices, would be appreciated. (Animation of the camera passing through particles should look realistic.)
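
    To make the layout question concrete, here is a minimal CPU-side sketch in Java of the interleaved billboard data the post describes: four vertices per particle (position + color + texcoord), indexed as two triangles per quad. It uses indexed GL_TRIANGLES rather than a strip, to avoid degenerate triangles; all names are illustrative assumptions, and real code would upload these buffers with glBufferData through a GL binding. The duplicated color and constant texcoords show exactly the redundancy being asked about.

        import java.nio.ByteBuffer;
        import java.nio.ByteOrder;
        import java.nio.FloatBuffer;
        import java.nio.ShortBuffer;

        public class ParticleQuads {
            // Interleaved layout per vertex: x,y,z, r,g,b,a, u,v -> 9 floats.
            static final int FLOATS_PER_VERTEX = 9;
            static final float[][] CORNERS   = {{-1,-1},{1,-1},{-1,1},{1,1}}; // unit quad corners
            static final float[][] TEXCOORDS = {{0,0},{1,0},{0,1},{1,1}};     // identical for every particle

            // Fills CPU-side buffers for n particles; centers = xyz per particle, colors = rgba per particle.
            static void fill(int n, float[] centers, float[] colors, float size,
                             FloatBuffer vertices, ShortBuffer indices) {
                for (int i = 0; i < n; i++) {
                    for (int c = 0; c < 4; c++) {
                        vertices.put(centers[3*i]   + CORNERS[c][0] * size); // x (camera-facing offset omitted)
                        vertices.put(centers[3*i+1] + CORNERS[c][1] * size); // y
                        vertices.put(centers[3*i+2]);                        // z
                        vertices.put(colors, 4*i, 4); // the same RGBA written four times: the redundancy in question
                        vertices.put(TEXCOORDS[c]);
                    }
                    short base = (short) (4 * i); // two triangles per quad: 0-1-2 and 2-1-3
                    indices.put(base).put((short)(base+1)).put((short)(base+2));
                    indices.put((short)(base+2)).put((short)(base+1)).put((short)(base+3));
                }
                vertices.flip();
                indices.flip();
            }

            public static void main(String[] args) {
                int n = 2; // two particles as a smoke test
                FloatBuffer v  = ByteBuffer.allocateDirect(n * 4 * FLOATS_PER_VERTEX * 4)
                                           .order(ByteOrder.nativeOrder()).asFloatBuffer();
                ShortBuffer ix = ByteBuffer.allocateDirect(n * 6 * 2)
                                           .order(ByteOrder.nativeOrder()).asShortBuffer();
                fill(n, new float[]{0,0,0, 2,1,-5}, new float[]{1,0,0,1, 0,1,0,1}, 0.5f, v, ix);
                System.out.println("vertex floats: " + v.limit() + ", indices: " + ix.limit());
            }
        }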

    Read the article

  • Does immutability entirely eliminate the need for locks in multi-processor programming?

    - by GlenPeterson
    Part 1: Clearly immutability minimizes the need for locks in multi-processor programming, but does it eliminate that need, or are there instances where immutability alone is not enough? It seems to me that you can only defer processing and encapsulate state for so long before most programs have to actually DO something. If a program performs actions on multiple processors, something needs to collect and aggregate the results. All this involves multi-process communication before, after, and possibly during some transformations. The start and end states of the machines are different. Can this always be done with no locks, just by throwing out each object and creating a new one instead of changing the original (a crude view of immutability)? What cases still require locking? I'm interested in both the theoretical/academic answer and the practical/real-world answer. I know a lot of functional programmers like to talk about "no side effects", but in the "real world" everything has a side effect: every processor cycle takes time, electricity, and machine resources away from other processes. So I understand that there may be more than one perspective from which to answer this question. If immutability is safe, given certain bounds or assumptions, I want to know exactly where the borders of the "safety zone" are. Some examples of possible boundaries:
      I/O
      Exceptions/errors
      Interfaces with programs written in other languages
      Interfaces with other machines (physical, virtual, or theoretical)
    Special thanks to @JimmaHoffa for his comment which started this question! Part 2: Multi-processor programming is often used as an optimization technique, to make some code run faster. When is it faster to use locks vs. immutable objects? Given the limits set out in Amdahl's Law, when can you achieve better overall performance (with or without the garbage collector taken into account) with immutable objects vs. locking mutable ones? Summary: I'm combining these two questions into one to try to get at where the bounding box is for immutability as a solution to threading problems.
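
    One concrete shape of the aggregation problem the question raises, as a minimal Java sketch (the class names and the split of work across four threads are illustrative assumptions): each worker builds new immutable snapshots, and the shared reference is advanced with a compare-and-set loop rather than a lock. Immutability makes the snapshots safe to share, but publishing the combined result still needs some atomic coordination.

        import java.util.ArrayList;
        import java.util.List;
        import java.util.concurrent.*;
        import java.util.concurrent.atomic.AtomicReference;

        public class ImmutableAggregation {
            // An immutable result: "changing" it means building a new object.
            static final class Tally {
                final long count, sum;
                Tally(long count, long sum) { this.count = count; this.sum = sum; }
                Tally add(long v) { return new Tally(count + 1, sum + v); }
            }

            public static void main(String[] args) throws Exception {
                // The aggregation point: immutability alone doesn't publish results across
                // threads; a CAS loop (not a lock) swaps in each new immutable snapshot.
                AtomicReference<Tally> total = new AtomicReference<>(new Tally(0, 0));
                ExecutorService pool = Executors.newFixedThreadPool(4);
                List<Future<?>> fs = new ArrayList<>();
                for (int t = 0; t < 4; t++) {
                    final int offset = t; // each thread takes every 4th value of 0..999
                    fs.add(pool.submit(() -> {
                        for (long v = offset; v < 1_000; v += 4) {
                            Tally old, next;
                            do {                  // retry until our snapshot wins the race
                                old = total.get();
                                next = old.add(v);
                            } while (!total.compareAndSet(old, next));
                        }
                    }));
                }
                for (Future<?> f : fs) f.get();
                pool.shutdown();
                System.out.println(total.get().count + " values, sum " + total.get().sum);
            }
        }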

    Read the article

  • Oracle R Distribution 2-13.2 Update Available

    - by Sherry LaMonica
    Oracle has released an update to the Oracle R Distribution, an Oracle-supported distribution of open source R. Oracle R Distribution 2-13.2 now contains the ability to dynamically link the following libraries on both Windows and Linux: the Intel Math Kernel Library (MKL) on Intel chips, and the AMD Core Math Library (ACML) on AMD chips. To take advantage of the performance enhancements provided by Intel MKL or AMD ACML in Oracle R Distribution, simply add the MKL or ACML shared library directory to the LD_LIBRARY_PATH system environment variable. This automatically enables MKL or ACML to make use of all available processors, vastly speeding up linear algebra computations and eliminating the need to recompile R. Even on a single core, the optimized algorithms in the Intel MKL libraries are faster than R's standard BLAS library. Open-source R is linked to NetLib's BLAS libraries, but they are not multi-threaded and only use one core. While R's internal BLAS is efficient for most computations, it's possible to recompile R to link to a different, multi-threaded BLAS library to improve performance on eligible calculations. Compiling and linking R yourself can be involved, but for many, the significantly improved calculation speed justifies the effort. Oracle R Distribution notably simplifies the process of using external math libraries by enabling R to auto-load MKL or ACML. For R commands that don't link to BLAS code, taking advantage of database parallelism using embedded R execution in Oracle R Enterprise is the route to improved performance. For more information about rebuilding R with different BLAS libraries, see the linear algebra section in the R Installation and Administration manual. As always, the Oracle R Distribution is available as a free download to anyone. Questions and comments are welcome on the Oracle R Forum.

    Read the article

  • Public Solaris/SPARC roadmap until 2015

    - by Karim Berrah
    It is now public, and gives you a nice overview of what's going on and where Oracle is going with Solaris and SPARC processors. It's available from here. What can we learn from this roadmap? Well, if you look carefully: Oracle is announcing Solaris 11 this year (the release date should be... check OOW11). Solaris 10 updates should still be released in 2012 (remember, it was released in 2005); check the Solaris lifecycle to understand how long Solaris 10 is to stay side by side with Solaris 11. In 2011, a great 3x single-strand improvement for the T-Series: something great is under preparation, probably to be revealed at Oracle OpenWorld 2011. Good news for ISVs! In 2012, a great 6x throughput improvement for the M-Series! How can this be done? ... Nearly everything at the SPARC/Solaris level is said through the public roadmap, but as you know the devil is in the details ;)

    Read the article

  • Japanese Multiplication simulation - is a program actually capable of improving calculation speed?

    - by jt0dd
    On SuperUser, I asked a (possibly silly) question about processors using mathematical shortcuts, and I would like to look at the possibility of applying that concept in software. I'd like to write a simulation of Japanese multiplication to get benchmarks on large calculations utilizing the shortcut vs. traditional CPU multiplication, and I'm curious as to whether it makes sense to try this. My question: I'd like to know whether or not a software math shortcut, as described above, is actually a shortcut at all. This is a question of programming concept: by utilizing a simulation of Japanese multiplication, is a program actually capable of improving calculation speed? Or am I doomed from the start? The answer to this question isn't required to determine whether or not the experiment will succeed, but rather whether or not it's logically possible for such a thing to occur in any program, using this concept as an example. My theory is that since addition is computed faster than multiplication, a simulation of Japanese multiplication may actually allow a program to multiply (large) numbers faster than the CPU's arithmetic unit can. I think this would be a very interesting finding, if it proves to be true. If, in the multiplication of numbers of any immense size, the shortcut were to calculate the result in fewer instructions (or faster) than traditional ALU multiplication, I would consider the experiment a success.
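
    For what it's worth, here is a minimal Java sketch of the proposed simulation (names are illustrative): the line-counting method boils down to summing digit products along the diagonals and then resolving carries, which is the same work a long-multiplication routine does. Note that each digit product below still uses the CPU's multiply instruction; a 10x10 lookup table could replace it to keep the simulation addition-only.

        import java.util.Arrays;

        public class JapaneseMultiplication {
            // Multiplies two non-negative numbers given as digit arrays (most significant digit
            // first), the way the line-counting method does: digit products summed along
            // diagonals, followed by a right-to-left carry pass.
            static int[] multiply(int[] a, int[] b) {
                int[] acc = new int[a.length + b.length];
                for (int i = 0; i < a.length; i++)
                    for (int j = 0; j < b.length; j++)
                        acc[i + j + 1] += a[i] * b[j];     // each "intersection count"
                for (int k = acc.length - 1; k > 0; k--) { // carry pass
                    acc[k - 1] += acc[k] / 10;
                    acc[k] %= 10;
                }
                return acc;
            }

            public static void main(String[] args) {
                // 123 * 321 = 39483 -> prints [0, 3, 9, 4, 8, 3]
                System.out.println(Arrays.toString(multiply(new int[]{1,2,3}, new int[]{3,2,1})));
            }
        }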

    Read the article
