Search Results

Search found 1774 results on 71 pages for 'parallel'.


  • RiverTrail - JavaScript GPGPU Data Parallelism

    - by JoshReuben
    Where is WebCL? The Khronos WebCL working group is working on a JavaScript binding to the OpenCL standard so that HTML5-compliant browsers can host GPGPU web apps – e.g. for image processing or physics for WebGL games – http://www.khronos.org/webcl/ . While Nokia & Samsung have some prototype WebCL APIs, Intel has one-upped them with a higher level of abstraction: RiverTrail.

    Intro to RiverTrail
    Intel Labs' JavaScript RiverTrail provides GPU-accelerated SIMD data-parallelism in web applications via a familiar JavaScript programming paradigm. It extends JavaScript with simple deterministic data-parallel constructs that are translated at runtime into a low-level hardware abstraction layer. With its high-level JS API, programmers do not have to learn a new language or explicitly manage threads, orchestrate shared-data synchronization, or handle scheduling. It has been proposed as a draft specification to ECMA (known as an ECMA strawman). RiverTrail runs in all popular browsers (except I.E., of course). To get started, download a prebuilt version https://github.com/downloads/RiverTrail/RiverTrail/rivertrail-0.17.xpi , install Intel's OpenCL SDK http://www.intel.com/go/opencl , and try out the interactive River Trail shell http://rivertrail.github.com/interactive . For a video overview, see http://www.youtube.com/watch?v=jueg6zB5XaM .

    ParallelArray
    The ParallelArray type is the central component of this API. It is a JS object that contains ordered collections of scalars – i.e. multidimensional uniform arrays. A shape property describes the dimensionality and size – e.g. a 2D RGBA image will have shape [height, width, 4]. ParallelArrays are immutable & fluent – they are manipulated by invoking methods on them which produce new ParallelArray objects. ParallelArray supports several constructors over arrays, functions & even the canvas.

        // Create an empty ParallelArray
        var pa = new ParallelArray(); // pa0 = <>

        // Create a ParallelArray out of a nested JS array.
        // Note that the inner arrays are also ParallelArrays.
        var pa = new ParallelArray([ [0,1], [2,3], [4,5] ]); // pa1 = <<0,1>, <2,3>, <4,5>>

        // Create a two-dimensional ParallelArray with shape [3, 2]
        // using the comprehension constructor.
        var pa = new ParallelArray([3, 2], function(iv){ return iv[0] * iv[1]; }); // pa7 = <<0,0>, <0,1>, <0,2>>

        // Create a ParallelArray from a canvas. This creates a PA with shape [w, h, 4].
        var pa = new ParallelArray(canvas); // pa8 = CanvasPixelArray

    ParallelArray exposes fluent API functions that take an elemental JS function for data manipulation: map, combine, scan, filter, and scatter return a new ParallelArray. Other functions are scalar – reduce returns a scalar value & get returns the value located at a given index. The onus is on the developer to ensure that the elemental function does not defeat data-parallelization optimizations (avoid global variable manipulation and recursion). For reduce & scan, order is not guaranteed – the onus is on the dev to provide an elemental function that is commutative and associative so that the result will be deterministic – e.g. sum is associative, but average is not.

    map
    Applies a provided elemental function to each element of the source array and stores the result in the corresponding position in the result array. The map method is shape-preserving & index-free – it cannot inspect neighboring values.

        // Adding one to each element.
        var source = new ParallelArray([1,2,3,4,5]);
        var plusOne = source.map(function inc(v) { return v+1; }); // <2,3,4,5,6>

    combine
    Similar to map, except an index is provided. This allows elemental functions to access elements from the source array relative to the one at the current index position. While the map method operates on the outermost dimension only, combine can choose how deep to traverse – it provides a depth argument to specify the number of dimensions it iterates over. The elemental function of combine accesses the source array & the current index within it – an element is computed by calling the get method of the source ParallelArray object with index i as the argument. It requires more code but is more expressive.

        var source = new ParallelArray([1,2,3,4,5]);
        var plusOne = source.combine(function inc(i) { return this.get(i)+1; });

    reduce
    Reduces the elements of an array to a single scalar result – e.g. sum.

        // Calculate the sum of the elements
        var source = new ParallelArray([1,2,3,4,5]);
        var sum = source.reduce(function plus(a,b) { return a+b; });

    scan
    Like reduce, but stores the intermediate results – it returns a ParallelArray whose ith element is the result of using the elemental function to reduce the elements between 0 and i in the original ParallelArray.

        // do a partial sum
        var source = new ParallelArray([1,2,3,4,5]);
        var psum = source.scan(function plus(a,b) { return a+b; }); // <1, 3, 6, 10, 15>

    scatter
    A reordering function – for a given source index, specify where the element should be stored in the result array. An optional conflict function can prevent an exception if two source values are assigned the same position in the result:

        var source = new ParallelArray([1,2,3,4,5]);
        var reorder = source.scatter([4,0,3,1,2]); // <2, 4, 5, 3, 1>

        // if there is a conflict, use the max; use 33 as a default value.
        var reorder = source.scatter([4,0,3,4,2], 33, function max(a, b) { return a>b?a:b; }); // <2, 33, 5, 3, 4>

    filter

        // filter out values that are not even
        var source = new ParallelArray([1,2,3,4,5]);
        var even = source.filter(function even(iv) { return (this.get(iv) % 2) == 0; }); // <2,4>

    Flatten
    Used to collapse the outer dimensions of an array into a single dimension.

        pa = new ParallelArray([ [1,2], [3,4] ]); // <<1,2>,<3,4>>
        pa.flatten(); // <1,2,3,4>

    Partition
    Used to restore the original shape of the array.

        var pa = new ParallelArray([1,2,3,4]); // <1,2,3,4>
        pa.partition(2); // <<1,2>,<3,4>>

    Get
    Returns the value found at the indices, or undefined if no such value exists.

        var pa = new ParallelArray([0,1,2,3,4], [10,11,12,13,14], [20,21,22,23,24]);
        pa.get([1,1]); // 11
        pa.get([1]); // <10,11,12,13,14>
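    The constructors and methods above compose naturally. As an illustrative sketch (not from the original post – the canvas element id, the depth argument to combine, and the per-channel get calls are assumptions based on the descriptions above), a grayscale pass over an image could look like this:

        // Wrap the canvas pixels in a ParallelArray of shape [w, h, 4], then
        // collapse each pixel's R, G, B channels to a single luminance value.
        var canvas = document.getElementById("src"); // hypothetical canvas id
        var pixels = new ParallelArray(canvas);      // shape [w, h, 4]
        var gray = pixels.combine(2, function (iv) { // iterate the first two dimensions
            var r = this.get([iv[0], iv[1], 0]);
            var g = this.get([iv[0], iv[1], 1]);
            var b = this.get([iv[0], iv[1], 2]);
            return (r + g + b) / 3;                  // one scalar per pixel
        });                                          // gray has shape [w, h]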

    Read the article

  • parallel_for_each from amp.h – part 1

    - by Daniel Moth
    This post assumes that you've read my other C++ AMP posts on index<N> and extent<N>, as well as about the restrict modifier. It also assumes you are familiar with C++ lambdas (if not, follow my links to C++ documentation).

    Basic structure and parameters
    Now we are ready for part 1 of the description of the new overload of the concurrency::parallel_for_each function. The basic new parallel_for_each method signature returns void and accepts two parameters:

    - a grid<N> (think of it as an alias to extent)
    - a restrict(direct3d) lambda whose signature is such that it returns void and accepts an index of the same rank as the grid

    So it looks something like this (with generous returns for more palatable formatting), assuming we are dealing with a 2-dimensional space:

        // some_code_A
        parallel_for_each(
            g, // g is of type grid<2>
            [ ](index<2> idx) restrict(direct3d)
            {
                // kernel code
            }
        );
        // some_code_B

    The parallel_for_each will execute the body of the lambda (which must have the restrict modifier) on the GPU. We also call the lambda body the "kernel". The kernel will be executed multiple times, once per scheduled GPU thread. The only difference in each execution is the value of the index object (a.k.a. the GPU thread ID in this context) that gets passed to your kernel code. The number of GPU threads (and the values of each index) is determined by the grid object you pass, as described next.

    You know that grid is simply a wrapper on extent. In this context, one way to think about it is that the extent generates a number of index objects. So for the example above, if your grid was set up by some_code_A as follows:

        extent<2> e(2,3);
        grid<2> g(e);

    ...then given that e.size()==6, e[0]==2, and e[1]==3, the six index<2> objects it generates (and hence the values that your lambda would receive) are:

        (0,0) (1,0) (0,1) (1,1) (0,2) (1,2)

    What the above means is that the lambda body with the algorithm that you wrote will get executed 6 times, and the index<2> object you receive each time will have one of the values just listed (of course, each one will only appear once, the order is indeterminate, and they are likely to call your code at the exact same time). Obviously, in real GPU programming you'd typically be scheduling thousands if not millions of threads, not just 6. If you've been following along you should be thinking: "that is all fine and makes sense, but what can I do in the kernel since I passed nothing else meaningful to it, and it is not returning any values out to me?"

    Passing data in and out
    It is a good question, and in data-parallel algorithms you do indeed typically want to pass some data in, perform some operation, and then return some results out. The way you pass data into the kernel is by capturing variables in the lambda (again, if you are not familiar with them, follow the links about C++ lambdas), and the way you use data after the kernel is done executing is simply by using those same variables. In the example above, the lambda was written in a fairly useless way with an empty capture list: [ ](index<2> idx) restrict(direct3d), where the empty square brackets mean that no variables were captured. If instead I write it like this: [&](index<2> idx) restrict(direct3d), then all variables in the some_code_A region are made available to the lambda by reference, but as soon as I try to use any of those variables in the lambda, I will receive a compiler error. This has to do with one of the direct3d restrictions, where only one type can be captured by reference: objects of the new concurrency::array class that I'll introduce in the next post (for now, suffice it to think of it as a container of data). If I write the lambda line like this: [=](index<2> idx) restrict(direct3d), all variables in the some_code_A region are made available to the lambda by value. This works for some types (e.g. an integer), but not for all, as per the restrictions for direct3d. In particular, no useful data classes work except for one new type we introduce with C++ AMP: objects of the new concurrency::array_view class, which I'll introduce in the post after next. Also note that if you capture some variable by value, you could use it as input to your algorithm, but you wouldn't be able to observe changes to it after the parallel_for_each call (e.g. in the some_code_B region, since it was passed by value) – the exception to this rule is the array_view, since (as we'll see in a future post) it is a wrapper for data, not a container. Finally, for completeness, you can write your lambda, e.g. like this: [av, &ar](index<2> idx) restrict(direct3d), where av is a variable of type array_view and ar is a variable of type array – the point being that you can be very specific about which variables you capture and how.

    So, from a large-data perspective, you can only capture array and array_view objects in the lambda (that is how you pass data to your kernel) and then use the many threads that call your code (each with a unique index) to perform some operation. You can also capture some limited types by value, as input only. When the last thread completes execution of your lambda, the data in the array_view or array is ready to be used in the some_code_B region. We'll talk more about all this in future posts…

    (a)synchronous
    Please note that the parallel_for_each executes as if synchronous to the calling code, but in reality it is asynchronous. I.e. once the parallel_for_each call is made and the kernel has been passed to the runtime, the some_code_B region continues to execute immediately on the CPU thread, while in parallel the kernel is executed by the GPU threads. However, if you try to access the (array or array_view) data that you captured in the lambda from the some_code_B region, your code will block until the results become available. Hence the correct statement: the parallel_for_each is as-if synchronous in terms of visible side-effects, but asynchronous in reality.

    That's all for now; we'll revisit the parallel_for_each description once we properly introduce array and array_view – coming next. Comments about this post by Daniel Moth are welcome at the original blog.
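    Putting the pieces together, here is a minimal end-to-end sketch of the pattern described above. It is illustrative only: it follows the developer-preview syntax used in this post (grid<N>, restrict(direct3d)), and the array_view constructor and indexing shown here are assumptions, since that type is only properly introduced in a later post.

        #include <amp.h>    // C++ AMP developer-preview header, as used in this post
        #include <vector>
        using namespace concurrency;

        void double_all(std::vector<int>& data) // assumes data.size() == 6
        {
            // some_code_A: describe the compute domain and wrap the CPU-side data
            extent<2> e(2, 3);
            grid<2> g(e);
            array_view<int, 2> av(2, 3, data); // assumed array_view constructor

            parallel_for_each(g, [=](index<2> idx) restrict(direct3d)
            {
                av[idx] *= 2; // kernel: each GPU thread handles one index value
            });

            // some_code_B: continues immediately on the CPU thread; the read
            // below blocks until all GPU threads have completed.
            int first = av[index<2>(0, 0)];
        }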

    Read the article

  • BI Applications overview

    - by sv744
    Welcome to the Oracle BI Applications blog! This blog will cover various features, the general roadmap, descriptions of functionality, and implementation steps related to Oracle BI Applications. In this first post we start with an overview of the BI Apps and will delve deeper into some of the topics below in the upcoming weeks and months. If there are other topics you would like us to talk about, please feel free to provide feedback.

    The Oracle BI Applications are a set of pre-built applications that enable pervasive BI by providing role-based insight for each functional area, including sales, service, marketing, contact center, finance, supplier/supply chain, HR/workforce, and executive management. For example, Sales Analytics includes role-based applications for sales executives, sales management, and front-line sales reps, each of whom has different needs. The applications integrate and transform data from a range of enterprise sources, including Siebel, Oracle, PeopleSoft, SAP, and others, into actionable intelligence for each business function and user role. This post starts with the key benefits and characteristics of Oracle BI Applications; each of these points will be explained in detail in subsequent posts.

    Why BI apps?
    - Demonstrate the value of BI to a business user: show reports, dashboards, and models that can answer their business questions as part of the sales cycle.
    - Demonstrate the technical feasibility of a BI project, significantly lower risk, and improve the odds of success.
    - Build-vs-buy benefit: you don't have to start with a blank sheet of paper.
    - Help consolidate disparate systems, and support data integration in M&A situations.
    - Insulate BI consumers from changes in the OLTP systems.
    - Present OLTP data and highlight issues of poor or missing data, improving data quality and accuracy.

    Prebuilt Integrations
    - BI Apps support prebuilt integrations against leading ERP sources: Fusion Applications, E-Business Suite, PeopleSoft, JD Edwards, Siebel, and SAP.
    - Co-developed with input from functional experts on the BI and Applications teams.
    - Out-of-the-box dimensional-model-to-source-model mappings.
    - Multi-source and multi-instance support.

    Rich Data Model
    BI Apps have a very rich dimensional data model, built over 10 years, that incorporates best practices from a BI modeling perspective as well as reflecting the source systems' complexities.
    - A conformed dimensional model across all business subject areas allows cross-functional reporting, e.g. customer/supplier 360.
    - Over 360 fact tables across 7 product areas: CRM - 145, SCM - 47, Financials - 28, Procurement - 20, HCM - 27, Projects - 18, Campus Solutions - 21, PLM - 56.
    - Supported by 300 physical dimensions.
    - Support for extensive calendars: Gregorian, enterprise, and ledger-based.
    - Conformed data model and metrics for real-time vs. warehouse-based reporting.
    - Multi-tenant enabled.

    Extensive BI related transformations
    BI Apps ETL and data integration support the various transformations required for dimensional models and reporting requirements. All of these have been distilled into common patterns and abstracted logic that can be readily reused across different modules:
    - Slowly Changing Dimension support
    - Hierarchy flattening support
    - Row/column hybrid hierarchy flattening
    - As-Is vs. As-Was hierarchy support
    - Currency conversion: support for 3 corporate, CRM, ledger, and transaction currencies
    - UOM conversion
    - Internationalization/localization
    - Dynamic data translations
    - Code standardization (domains)
    - Historical snapshots
    - Cycle and process lifecycle computations
    - Balance facts
    - Equalization of GL accounting chartfields/segments
    - Standardized values for categorizing GL accounts
    - Reconciliation between GL and subledgers to track accounted/transferred/posted transactions to GL
    - Materialization of data only available through costly and complex APIs, e.g. Fusion Payroll, EBS/Fusion accruals
    - Complex event interpretation of source data, e.g.:
      - what constitutes a transfer
      - deriving supervisors via the position hierarchy
      - deriving the primary assignment in PSFT
      - categorizing and transposing payroll balances into specific metrics to support side-by-side comparison of measures such as fixed salary, variable salary, tax, bonus, and overtime payments
      - counting of events, e.g. converting events to fact counters so that, for example, the number of hires can easily be added up and compared alongside total transfers and terminations
    - Multi-pass processing of multiple sources, e.g. headcount, salary, promotion, and performance, to allow side-by-side comparison
    - Adding value to data to aid analysis through banding and additional domain classifications and groupings, enabling higher-level analytical reporting and data discovery
    - Calculation of complex measures, for example:
      - COGS, DSO, DPO, inventory turns, etc.
      - transfers within a hierarchy, or out of/into a hierarchy, relative to a viewpoint in the hierarchy

    Configurability and Extensibility support
    BI Apps offer extensibility for various entities, either automated or as part of the extension methodology:
    - Key flexfield and descriptive flexfield support
    - Extensible attribute support (JDE)
    - Conformed domains

    ETL Architecture
    - A modular adapter architecture allows support of multiple product lines in a single conformed model; multi-source and multi-technology.
    - Orchestration creates a load plan that takes into account task dependencies and the customer's deployment, generating a plan from the customer's set of multiple complex ETL tasks.
    - Plan optimization allows parallel ETL tasks.
    - Oracle: bitmap indexes and partition management.
    - High-availability support, with follow-the-sun support.

    TCO
    BI Apps include several utilities and capabilities that help with the overall total cost of ownership and ensure a rapid implementation:
    - Improved cost of ownership: lower cost to deploy
    - Ongoing support for new versions of the source application
    - Task-based setup flows
    - Data lineage
    - Functional setup performed in a web UI by a functional person
    - Configuration test-to-production support

    Security
    BI Apps support both data and object security, enabling implementations to quickly configure the application to the reporting security needs:
    - Fine-grained object security at the report/dashboard and presentation catalog level
    - Data security integration with source systems
    - Extensible to support external data security rules

    Extensive Set of KPIs
    - Over 7,000 base and derived metrics across all modules
    - Time-series calculations (YoY, % growth, etc.)
    - Common currency and UOM reporting
    - Cross-subject-area KPIs (analyzing HR vs. GL data, drilling from GL to AP/AR, etc.)

    Prebuilt reports and dashboards
    - 3,000+ prebuilt reports supporting a large number of industries
    - Hundreds of role-based dashboards
    - Dynamic currency conversion at the dashboard level

    Highly tuned Performance
    The BI Apps have been tuned over the years for both very performant ETL and dashboard performance, using best practices and advanced database features:
    - Optimized data model for BI and analytic queries
    - Prebuilt aggregates, plus the ability for customers to easily create their own aggregates on warehouse facts, allowing scalable end-user performance
    - Incremental extracts and loads
    - Incremental aggregate builds
    - Automatic table index and statistics management
    - Parallel ETL loads
    - Source-system delete handling
    - Low-latency extract with GoldenGate
    - Micro-ETL support
    - Bitmap indexes
    - Partitioning support
    - Modularized deployment: start small and add other subject areas seamlessly

    Source Specific Staging and Real Time Schema
    - Support for a source-specific operational reporting schema for EBS, PSFT, Siebel, and JDE

    Application Integrations
    BI Apps also allow integration with source systems, as well as with other applications that add value through BI and enable BI consumption during operational decision making:
    - Embedded dashboards for Fusion, EBS, and Siebel applications
    - Action Link support
    - Marketing segmentation
    - Sales Predictor dashboard
    - Territory management

    External Integrations
    BI Apps data integration choices include support for loading external data:
    - External data enrichment choices: UNSPSC, item class, etc.
    - Extensible spend classification

    Broad Deployment Choices
    - Exalytics support
    - Databases: Oracle, Exadata, Teradata, DB2, MSSQL
    - ETL tool of choice: ODI (coming), Informatica

    Extensible and Customizable
    - Extensible architecture and methodology to add custom and external content
    - Upgradable across releases

    Thanks for reading a long post, and be on the lookout for future posts. We look forward to your valuable feedback on these topics, as well as suggestions on what other topics you would like us to cover.

    Read the article

  • Installing OpenSSL that supports SNI alongside a previous version of OpenSSL

    - by gh0sT
    So I learned that to host multiple HTTPS websites on the same IP address you need an OpenSSL version that supports SNI (0.9.8f and higher). My RHEL5 box currently has 0.9.8e and Apache version httpd-2.2.26-2.el5. According to a similar question here, it's not a good idea to replace the original version of OpenSSL; a parallel installation is preferred instead. It doesn't, however, explicitly mention how to achieve this. So my questions are: How do I set up an alternate installation of OpenSSL without breaking the system? How do I make Apache use this version of OpenSSL and not the original one? A detailed guide would be extremely helpful.

    Read the article

  • What is the difference between Windows RT and Windows Phone 8?

    - by Rakib Ansary
    From what I have read, it seems there are more or less three versions(?) of Windows 8: Windows 8, Windows RT, and Windows Phone 8. While the difference between Windows 8 and Windows RT is clear, I don't understand the difference between Windows RT and Windows Phone 8. By way of an Android parallel: Jelly Bean runs on tablets and on phones without any differences. Are there any differences between Windows RT and Windows Phone 8 apart from the fact that one is for tablets (Windows RT) and the other is for phones (Windows Phone 8)?

    Read the article

  • WinXP How to Tunnel LPT over USB

    - by Michael Pruitt
    I have a Windows program that accesses a device connected to an LPT (1-3) 25-pin port. The communication is bidirectional, and I suspect the control lines are also accessed directly. I would like to migrate the device to a machine that does not have an LPT port. I saw the dos2usb software, but that takes the output (from a DOS program) and 'prints' it formatted for a specific printer. I need a raw LPT connection and a cable that provides access to all the control signals. I do have a USB to 36-pin Centronics cable that may have the extra signals; I use it with a vinyl cutter that doesn't like most of the USB dongles, and it comes up as USB001. Would adding and sharing a generic printer, then mapping LPT1 to the share, get me closer? Would that work for a parallel-port scanner? My preferred solution is a USB cable with a driver that will map it to LPT1, LPT2, or LPT3.

    Read the article

  • Sharing two internet connections on my laptop running Windows XP

    - by ashwnacharya
    I have two internet connections: one is internet via our organization's corporate LAN, and the other is mobile broadband via a USB modem. Is there any way I can share these internet connections and use them simultaneously? I want to use the corporate LAN for normal browsing and connecting my email client, and the USB modem for establishing a VPN connection. Will I be able to maintain both connections simultaneously? Can I have parallel downloads, one using the corporate network and the other using mobile broadband? Will I be able to switch my browser between these two connections? My laptop runs Windows XP Service Pack 2.

    Read the article

  • Linux software to maintain old/backup versions of directory tree

    - by Bittrance
    I am replacing an old Linux file server serving NFS and CIFS. For the new server (still serving CIFS and NFS), I would like software that automatically and efficiently maintains old revisions of files in parallel trees, so that they can be accessed by users without special tools. I am looking for software akin to Time Machine or Flyback, but which works well on a server. The dataset is some 10,000 files weighing maybe 60 GB. Changes are relatively few, usually fewer than 100 file changes daily. Using LVM snapshots will not cut it, as the old revisions must reside on a separate set of disks from the live data. Edit: To clarify, keeping old revisions is a non-vital addition to the solution, so any suggestion will have to stay in the range of a few hundred euros.

    Read the article

  • What effect does RAID stripe size have on read-ahead settings?

    - by stbrody
    I'm trying to figure out the correct read-ahead values to set on a RAID10 array, and I'm wondering if the RAID stripe size should factor into my considerations. I've heard conflicting information about this in the past. I once heard that you should always set your read-ahead value to a multiple of the RAID stripe size, and never below the stripe size, because that is the minimum amount of data the RAID controller will ever try to read at once. Someone else told me, however, that setting read-ahead below the stripe size is fine, and can, in fact, increase the amount of parallel reads you can do across devices in the array, increasing performance and decreasing load on the array. So which is it? Do read-ahead settings that aren't multiples of the stripe size make sense or not?

    Read the article

  • Slower than expected 802.11n wireless network speeds

    - by Ian
    I have two ASUS laptops running Windows 7, connected wirelessly via 802.11n at 150 Mbit, as reported by Task Manager. The router is a Netgear WNDR3700. When testing the wireless connection speed using iperf, I'm not getting anywhere near 150 Mbit:

        C:\>iperf -c 10.0.0.123 -t 30
        ------------------------------------------------------------
        Client connecting to 10.0.0.123, TCP port 5001
        TCP window size: 8.00 KByte (default)
        ------------------------------------------------------------
        [148] local 10.0.0.116 port 53819 connected with 10.0.0.123 port 5001
        [ ID] Interval       Transfer     Bandwidth
        [148]  0.0-30.0 sec  41.2 MBytes  11.5 Mbits/sec

    That's a typical result. Running parallel client threads does not increase the overall total speed. Why would I only be getting 11.5 Mbit on a 150 Mbit connection?

    Read the article

  • Is it possible to run two separate instances (copies, versions) of Audacity?

    - by cipricus
    What I want is two separate instances of Audacity running in parallel, as two separate programs with different settings. I want one to use WASAPI as the audio host and the speakers as the input device (the 2.0.5 portable build has the WASAPI option), and the other to use MME as the audio host and the microphone as the input device. Opening a new (second) window of the same installation/copy of Audacity by executing the .exe file twice is the same as giving the 'New' command under 'File': what happens is not a second instance, but just a separate project/file and window. When settings are changed in one, they change exactly in the other. Trying to start a second copy of Audacity, or even a second, different version, while one is already running will likewise just open a second project/file of the application that is already running, as before.

    Read the article

  • Drive configuration for 5 large databases

    - by Mr. Flibble
    I've got 5 databases, each 300 GB, currently on a RAID 5 array consisting of 5 drives. All the databases are used heavily, and at the same time, so drive speed is an issue. Would I see better performance if I got rid of the RAID 5 configuration and just put each database on a separate drive? The redundancy provided by RAID 5 is not necessary due to mirroring elsewhere. Would the server then be able to perform reads/writes to the different database drives in parallel, more so at least than when it's in RAID? This is all on Windows 2003 / SQL 2008.

    Read the article

  • All network interfaces hang for seconds while one interface goes up/down

    - by user3698377
    I am building a client/server application that uses several network interfaces in parallel for redundancy, and I have noticed that while one network interface goes down or comes up, communication on the other interfaces hangs for several seconds. I could reproduce this behavior without my application in a simple way:

    - There are 2 interfaces available on computer 1 (Ethernet and WiFi).
    - From computer 2, ping the IP address of computer 1's Ethernet connection.
    - Disconnect the WiFi of computer 1.
    - The ping hangs for seconds, and then the packets are traveling again between the 2 computers.

    The hang happens as well when I turn the WiFi connection on computer 1 back on, and likewise if I ping the WiFi IP and turn the Ethernet connection off/on (or unplug/plug the cable). I am using Linux Ubuntu 12.04 on both computers. Any ideas why this is happening, and if / how it can be avoided?

    Read the article

  • SharePoint 2010 Search - not searching additional content sources

    - by Chris W
    I've got SP 2010 crawling a secondary intranet system that we'll run in parallel as part of a long-running migration to SharePoint when it releases. While it's crawling the pages without problem, I can't see how to get the results to appear in the quick search results when the user searches from the little search dialog box on the home page. Searches completed within My Sites pages list results from both the SharePoint installation and the external content source. Searches from the main search dialog only list results for SharePoint items. I tried adding the drop-down option to select the site to search, but this list only includes the name of the current site and doesn't offer an 'All Sites' scope option, which I think would include the content. What am I doing wrong?

    Read the article

  • Hyperthreading vs. SQL Server & PostgreSQL

    - by IanC
    I have read that hyperthreading is a "performance killer" when it comes to DBs. However, what I read didn't state which CPUs. Further, it mostly indicated that I/O was "cut to < 10% performance". That logically doesn't make sense, since I/O is primarily a function of controllers and disks, not CPUs. But then, no one ever said bugs made sense. What I read also stated that SQL Server could put two parallel query ops onto 1 logical core (2 threads), thereby degrading performance. I have a hard time believing SQL Server's architects would have made such an obvious miscalculation. Does anyone have any data on how hyperthreading on current-generation CPUs affects either of the RDBMSs I mentioned?

    Read the article

  • Print server does not show up on router's attached devices

    - by AshTee
    Recently I bought a new, more powerful wireless N router, a D-Link DIR-628. So I removed all connections from the previous router (a Netgear WGT624) and connected them as they should be to the D-Link router. Everything works fine except for the print server. I have a Hawking print server connected to an HP LaserJet 6P parallel-port printer. It works well with the Netgear router, but when I connect it to the D-Link router, it does not even show up in the LAN computers list. I am not sure what is going on. There is a utility called PSAdmin that can talk to the Hawking print server if I switch to the Netgear router; with that utility, I can get the IP address assigned to the print server. But when switching to the D-Link router, even PSAdmin fails to find the print server. I have been trying various things for the last couple of days in vain. Please help.

    Read the article

  • When my printer fails to print with only "printer error" as the message, how can I find details?

    - by lavinio
    Setup:

    - Dell Latitude D620 in the docking station
    - HP Color LaserJet 2550L
    - Parallel cable

    Sometimes when I undock (and I do suspend before undocking and docking), the printer will not print when I reattach. It simply says "error", with no description. Killing and restarting the spooler will not help, but rebooting will. There is nothing in the system event logs, nor does the print spooler window provide any details other than "error". My question is: when there is an error, is there any way to find out what is causing it to get stuck, so that I can unstick it instead of rebooting?

    Read the article

  • How to rebuild a Li Ion laptop battery?

    - by spoulson
    I have an aging Gateway NX560XL laptop. The battery is toast, and a new one, even aftermarket, starts at $130. So, to experiment, I began tearing apart the old battery to see what can be done. I found it uses 8 standard-size 18650 Li-ion cells, arranged two cells in parallel, then in series (like: ====). Some online shopping revealed ~$7-13/ea replacements, depending on mAh output. My plan is to load-test to determine the bad cells and replace only those, as I read that typically only 1 or 2 may be bad. I'm proficient with soldering; however, these cells are attached with welded tabs. Some of them broke during disassembly, and I'm not sure how to reattach them. What I found online are cells like these that have solder tabs pre-welded to the ends so I can solder wires onto them. Is there any guide available that provides the instructions and parts to do this kind of rebuild?

    Read the article

  • Computer Components

    - by Martin
    What is the role of a motherboard in terms of how components communicate with the processor (via the system buses)? For each component below, which communication bus is involved, and what is the route that data takes from the component to the processor, including:

    - the bus name
    - whether the bus is serial or parallel
    - the name of any bridges involved

    Also, what is the role of a bridge? The components are: Internal components: floppy drive, hard drive, CD/DVD drive, memory, processor, power supply, graphics card & sound card. External devices: monitor, keyboard, mouse (PS2 & USB), printer, pen drive. I have absolutely no idea of what the routes are and how motherboards communicate; could someone give me a start here? Thanks.

    Read the article

  • Nokia backup on a Mac

    - by Shyam
    Hi there! As I have to bring my phone to a shop for repair, I want to back up my contacts, calendars, and text messages. My Nokia connects perfectly through Bluetooth with iSync. One downside, however, is that text messages are nowhere to be found, or, for that matter, impossible to back up from the phone using iSync. Is there a graceful, free application for the Mac that would be able to back up (and later restore) the messages on my phone? The worst possible scenario would involve me writing a script that uses the Hayes command set and kermit-izes all SMSs (hundreds at least), so a nice click-and-play solution would be nice to know about. I don't consider applications like Parallels/CrossOver a solution, as PC Suite is quite buggy with those (and it does have the functionality to back up SMS and e-mail). Many thanks!

    Read the article

  • Why does Windows 7 need hardware virtualization to run XP mode?

    - by Ken Pespisa
    I have a MacBook Pro, and I've run VMware Fusion's Unity mode and Parallels' Coherence mode alongside Mac OS X; both work pretty seamlessly. I figured XP Mode in Windows 7 would be something similar, but I then learned my machine requires hardware virtualization support, which it does not have. My machine is an HP dc7800: a dual-core 2.2 GHz machine with 4 GB of RAM. Certainly it has the horsepower to run a virtual environment alongside the primary OS. I'm wondering: 1) why did Microsoft decide to make hardware virtualization a requirement, and 2) what am I missing? Is the experience similar to Parallels' Coherence mode / Fusion's Unity mode? Thanks!

    Read the article

  • Seeking free project planner with auto resource levelling

    - by mawg
    As the title says, I am seeking a project planner with automatic resource levelling. It must be free for commercial use; a bonus, but not a requirement, would be MS Project import/export. I like the look of Task Juggler, but it is at a stage of development hovering between v2 and v3. Anything else? Basically, I want to play "what if" games: I will determine the tasks, their effort, and the dependencies between them, and then try to figure out how many staff I need. Since many of the tasks can be done in parallel, it is difficult to guess how many are needed. A PM tool with automatic resource levelling seems like a good way to find out.

    Read the article

  • Multiple servers acting like a single one with all the hardware?

    - by marc.riera
    Hello. I currently have 10 servers for HPC, power-computing oriented. My users need to launch several processes using qmake. The users are used to working with Ubuntu 9.10, and the software from the repositories is suitable for them. I've deployed Ubuntu 9.10 to all 10 servers (PXE rocks). So far we work with parallel-ssh and cluster-ssh, which allow us to launch the same process on all servers. With these tools the servers remain independent, but with the same software and the same launched command. Now we would like to go to the next step and see all the servers as a single one, with all the resources of the other 9 as if they were its own. The difference would be substantial in time to process, and also in time to design the command to launch. Any advice on which software to use would be very useful. Thanks.

    Read the article

  • Why are USB sticks so much slower than Solid State Drives?

    - by Jonas
    From what I understand, USB flash memory and Solid State Drives are based on similar technologies: NAND flash memory. But USB sticks are usually quite slow, with read and write speeds of 5-10 MB per second, while Solid State Drives are usually very fast, typically 100-570 MB per second. Why are Solid State Drives so much faster than USB sticks? And why aren't USB sticks faster than 5-10 MB per second? Is it simply that SSDs use parallel access to the NAND flash memory, or are there other reasons?

    Read the article
