Search Results

Search found 11737 results on 470 pages for 'intel core i7'.


  • Will Apple abandon OpenCL?

    - by John
    I am developing OpenCL applications, among others for Mac OS X. The new 13-inch MacBook Pro comes with Intel HD Graphics 3000, so it seems reasonable to assume that Apple's other mainstream computers, like the MacBook and Mac mini, will soon ship with this Intel graphics chip as well. OpenCL is not available for Intel graphics. Given Intel's poor reputation for graphics drivers, and the fact that Apple knows this, I wonder whether Apple is already abandoning OpenCL again, especially since OpenCL is supposed to run anywhere, not only on high-end systems. Developing applications only for high-end Macs with dedicated graphics hardware, or for previous-generation hardware with the GeForce 320M, would not be a feasible option for me. Does anybody have any thoughts on this?

    Read the article

  • Intel Atom D425 Problems

    - by Nico
    I recently bought an Intel Atom D425 mini-ITX board with a 30 GB OCZ SSD and 1 GB of DDR3 RAM. I installed Ubuntu 10.04 and CrunchBang Statler without trouble; both work fine. I then tried to install Xubuntu 12.04, Ubuntu 12.04, and Mint 13 MATE. All three boot, but as soon as the installer starts, everything goes wrong and the monitor reports "input not supported." Is this an Intel Atom D425 problem, an Ubuntu problem, or something else? All help would be much appreciated. Regards, Nico

    Read the article

  • Trouble with 12.10 lag

    - by Brennan
    Basically, lately I have been having lag problems with 12.10. I will post my specs, but before the update to 12.10, it said that I had Intel graphics; now it says I have Gallium. My specs: *Memory: 3.9 GiB *Processor: Pentium(R) Dual-Core CPU E5500 @ 2.80GHz × 2 *Graphics: Gallium 0.4 on llvmpipe (LLVM 3.2, 128 bits) (used to say Intel graphics) *OS Type: 32-bit *Disk: 486.1 GB The output of the command sudo apt-cache check is this: E: Invalid operation check The output of the command sudo lspci -nnk | grep -A5 VGA is this: 00:02.0 VGA compatible controller [0300]: Intel Corporation 4 Series Chipset Integrated Graphics Controller [8086:2e32] (rev 03) Subsystem: ASUSTeK Computer Inc. Device [1043:836d] Kernel driver in use: i915 00:1b.0 Audio device [0403]: Intel Corporation NM10/ICH7 Family High Definition Audio Controller [8086:27d8] (rev 01) Subsystem: ASUSTeK Computer Inc. Device [1043:8445] Kernel driver in use: snd_hda_intel

    Read the article

  • Switching from Debug into Release Mode with VS2010 as IDE and Intel C++ Compiler 13

    - by Drazick
    I have the code for a plug-in from an SDK, and it is built in Debug mode. I use the Intel compiler, which only applies optimizations in Release mode. Under the project's Configuration Manager, only a "Debug" configuration is defined. How can I switch to "Release" mode and enable all of the Intel compiler's optimizations? If I enable them in Debug mode, nothing is applied (the optimization report is empty). I couldn't find the trick to do so. Thank you.

    Read the article

  • Audio output and input stopped working after the last update

    - by renatov
    I'm using Ubuntu 14.04 and everything was perfect until today's update. Now my audio output (speakers) and input (microphone) have stopped working. I guess it's a driver issue, but I need help debugging and solving the problem. I have a Dell Inspiron 5421 notebook with an integrated Intel sound card: $ lspci | grep Audio 00:1b.0 Audio device: Intel Corporation 7 Series/C210 Series Chipset Family High Definition Audio Controller (rev 04) If I go to Ubuntu Settings > Sound > Output, my Intel card no longer appears there; the same goes for the Input tab. Could you please help me?

    Read the article

  • Why doesn't Unity 3D work on my HD3000 integrated graphics?

    - by Zatsugami
    I recently got my new laptop, an HP Envy 15 with switchable graphics. I'm using both Windows and Ubuntu, but on Ubuntu I want to use just the Intel HD 3000 for better battery life. I've installed fglrx-updates and fglrx-amdcccle-updates, and the drivers seem to work for my ATI card. The problem is that Unity 3D does not work on the Intel HD 3000. Why? My older Intel GMA 4500 handled it. lspci -nn | grep VGA 00:02.0 VGA compatible controller [0300]: Intel Corporation 2nd Generation Core Processor Family Integrated Graphics Controller [8086:0126] (rev 09) 01:00.0 VGA compatible controller [0300]: Advanced Micro Devices [AMD] nee ATI Whistler [AMD Radeon HD 6600M Series] [1002:6741] (rev ff)

    Read the article

  • AMD challenges Intel in the server market with 12-core chips; Intel plays it down

    AMD challenges Intel in the server market and releases chips with 12 cores; Intel plays it down. AMD has just launched a new range of server chips called Magny-Cours. These chips carry between 8 and 12 cores and are clearly aimed at taking a dominant position in this professional market. The range consists of the following six parts: Opteron 6128: 8 cores, 1.5 GHz; Opteron 6134: 8 cores, 1.7 GHz; Opteron 6136: 8 cores, 2.4 GHz; Opteron 6168: 12 cores, 1.9 GHz; Opteron 6172: 12 cores, 2.1 GHz; Opteron 6174: 12 cores, 2.2 GHz. For its part...

    Read the article

  • IIS Express only utilizes 13% of i7 Quad Core

    - by John Nevermore
    Since one of my scripts has become incredibly complex, I was benchmarking the performance of moving some JavaScript processing logic to the server side in my ASP.NET MVC 4 application. According to taskmgr.exe, IIS Express only utilizes 13% of my i7. I decided to throw in three parallel tasks calculating the Fibonacci sequence up to 50, and IIS Express still wouldn't use more than 13% of my CPU. Is there anything I can do so that the application utilizes the full CPU, as it would on a real server?
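
    For what it's worth, 13% of a quad-core i7 with Hyper-Threading is roughly one of its eight logical cores (100% / 8 ≈ 12.5%), which is what a single busy thread looks like in Task Manager. The sketch below is a minimal Python illustration of that arithmetic and of the general point that it takes one CPU-bound task per logical core to approach full utilization; it is not the asker's C#/ASP.NET code, and the helper names are made up for the example.

    ```python
    # Minimal Python sketch (not the asker's C#/ASP.NET code): one CPU-bound worker
    # uses roughly 1/8 of an i7 with 8 logical cores; one worker per logical core
    # is what it takes to approach 100% in Task Manager.
    import multiprocessing as mp
    import time

    def fib(n):
        # Deliberately naive, CPU-bound Fibonacci.
        return n if n < 2 else fib(n - 1) + fib(n - 2)

    def burn(n=30):
        start = time.time()
        fib(n)
        return time.time() - start

    if __name__ == "__main__":
        workers = mp.cpu_count()  # 8 logical cores on a quad-core i7 with Hyper-Threading
        print(f"one busy worker ~= {100.0 / workers:.1f}% of total CPU")
        with mp.Pool(workers) as pool:
            # One CPU-bound task per logical core keeps the whole package busy.
            elapsed = pool.map(burn, [30] * workers)
        print(f"ran {workers} workers, slowest took {max(elapsed):.2f}s")
    ```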

    Read the article

  • Possible to use Python with Intel's Atom Developer SDK (C/C++)?

    - by Jordan Magnuson
    So I've made a game in Python and PyGame, and now I'm interested in submitting it to Intel's March Developer Challenge. However, the developer challenge requires use of Intel's Atom Developer SDK (http://appdeveloper.intel.com/en-us/sdk), which only has APIs for C and C++. I'm new to Python and PyGame, and have no experience in C or C++. My question is: would it be possible to somehow implement Intel's Atom SDK through/with/from a Python application (as the link above suggests)? I've read up a little on embedding/extending Python into/with C, but I'm not entirely sure what to embed or where. I mean, I know I can do things like this in C: #include <Python.h> int main(int argc, char *argv[]) { Py_Initialize(); PyRun_SimpleString("from time import time,ctime\n" "print 'Today is',ctime(time())\n"); Py_Finalize(); return 0; } But what do I do about all my dependencies on Python and PyGame, for people who don't have those installed on their machines? Normally py2exe takes care of packaging the required dependencies (I've managed to package my game into an exe/zip), but how do I take care of that in the context of embedding within C? Can I somehow work with py2exe on this, or do I need to do something entirely different for embedding within C? It seems like it would be a lot easier to extend Python with the C validation code rather than to embed my whole game within C, but I think that's not an option, "because the library provided is currently only available as a Visual Studio 2008 '.lib'", meaning the application has to be compiled with Visual Studio...? Any help, thoughts, or ideas are much appreciated! You can find the complete SDK Developer's Guide on the Intel site above, but here is their "Hello World" using the C Language API: #include <stdio.h> #include <stdlib.h> #include "adpcore.h" int main( int argc, char* argv[] ) { ADP_RET_CODE ret_code; const ADP_APPLICATIONID myApplicationId = {{ 0x12345678, 0x11112222, 0x33331234, 0x567890ab }}; if ((ret_code = ADP_Initialize()) != ADP_SUCCESS ){ printf( "ERROR: exiting" ); exit( -1 ); } if (( ret_code = ADP_IsAuthorized( myApplicationId )) == ADP_AUTHORIZED ) printf( "Hello World" ); else printf( "Not authorized to run" ); return 0; } The 35-page SDK Developer Guide: http://appdeveloper.intel.com/sites/files/pages/SDK%20Developer%20Guide.pdf
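
    One way to stay mostly in Python, sketched below, is the "extend" route: build a tiny C wrapper DLL in Visual Studio 2008 that links the SDK's .lib, bakes in your application ID, and exports a single plain C function (hypothetically int adp_check(void)) that calls the ADP_Initialize and ADP_IsAuthorized functions shown in the guide excerpt above and returns 1 when the application is authorized. The PyGame game then just loads that DLL with ctypes and never has to be compiled with Visual Studio itself, and py2exe can bundle the DLL alongside the exe. The wrapper name, its export, and its return convention are assumptions for illustration only, not part of Intel's SDK.

    ```python
    # Minimal sketch of calling a hypothetical C wrapper around the Atom SDK from
    # Python via ctypes. "adpwrapper.dll" and its adp_check() export are assumed
    # to be built separately in Visual Studio 2008 against the SDK's .lib; they
    # are not part of the SDK itself.
    import ctypes
    import sys

    def atom_sdk_authorized(dll_path="adpwrapper.dll"):
        """Return True if the wrapper DLL reports that this application may run."""
        try:
            adp = ctypes.CDLL(dll_path)  # load the wrapper, which links the SDK .lib
        except OSError:
            return False                 # wrapper (or SDK runtime) not found
        adp.adp_check.restype = ctypes.c_int
        # The wrapper is assumed to return 1 after ADP_Initialize and
        # ADP_IsAuthorized both succeed, and 0 otherwise.
        return adp.adp_check() == 1

    if __name__ == "__main__":
        if not atom_sdk_authorized():
            sys.exit("Not authorized to run")
        # ...start the PyGame main loop as usual...
    ```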

    Read the article

  • Xorg server won't start after fresh install of Debian 5.04, screen goes blank (Intel Atom D510(Pinet

    - by Kamil Zadora
    Hello, I have installed Debian 5.04 Lenny on my new Intel D510MO motherboard. I fixed some issues with incorrect drive mapping (for some reason during installation my HDD was on sdb; after a restart it is under sda - fixed in GRUB), and now I am struggling to get the graphical environment up and running; I installed the graphical environment using the Debian installer. I am not a Linux expert by any means; I assume that I need to edit the xorg.conf file. Any hints appreciated! UPDATE1: No change after dpkg-reconfigure xserver-xorg. Here is my current xorg.conf: # xorg.conf (X.Org X Window System server configuration file) # # This file was generated by dexconf, the Debian X Configuration tool, using # values from the debconf database. # # Edit this file with caution, and see the xorg.conf manual page. # (Type "man xorg.conf" at the shell prompt.) # # This file is automatically updated on xserver-xorg package upgrades *only* # if it has not been modified since the last upgrade of the xserver-xorg # package. # # If you have edited this file but would like it to be automatically updated # again, run the following command: # sudo dpkg-reconfigure -phigh xserver-xorg Section "InputDevice" Identifier "Generic Keyboard" Driver "kbd" Option "XkbRules" "xorg" Option "XkbModel" "pc105" Option "XkbLayout" "pl" EndSection Section "InputDevice" Identifier "Configured Mouse" Driver "mouse" EndSection Section "Device" Identifier "Configured Video Device" EndSection Section "Monitor" Identifier "Configured Monitor" EndSection Section "Screen" Identifier "Default Screen" Monitor "Configured Monitor" EndSection UPDATE2: I have installed the vnc4server package. I can connect over VNC from my Windows 7 laptop and I see an empty desktop with a terminal window open. It seems that the X server and GDM are running but they can't talk to my GPU. I am not sure if I can use any GUI tool to configure it over VNC, as all I see is the terminal window, no taskbars etc. UPDATE3: My current Xorg.0.log: http://pastebin.pl/18918 The graphics chipset integrated into the D510 processor is the Intel 945GC.

    Read the article

  • (Mac Intel) HP PS driver prints in B&W from Adobe Reader after installing Canon PS driver

    - by John B
    I have a unique problem that leaves me at a loss as to where to start troubleshooting. We have three Macs we use for graphics, two of which are PowerPC and one of which is Intel. They are set up to print to an HP 5500dn, but occasionally this printer gets tied up with a massive print job, so I installed the PS driver (iR-PSv1.81MacOSX) for the Canon C3200 printer/copier on each of the machines. Both of the PowerPC Macs installed without issue, but the Intel Mac exhibits strange behavior: I've confirmed that while the Canon driver is installed (whether or not the Canon is set up for printing in print settings), the HP 5500dn will print in color from Safari, but only prints in black and white from Adobe Reader. The Canon printer itself has not exhibited any strange behavior. As soon as the Canon driver is uninstalled, the HP 5500dn prints in color from Adobe Reader again. We run a network of Windows PCs, and the 'Mac room' mostly takes care of itself, so we don't have any experienced Mac administrators onsite. The Canon is capable of AppleTalk, but the PS driver seemed easier to work with (and AppleTalk is currently disabled on the Canon). I'm not against using the AppleTalk-compatible drivers, but I would rather use the PS driver if at all possible; I don't want to open up the proverbial can of worms. If someone has any clues or suggestions that would help troubleshoot this problem, I would be grateful. I've already done some googling, but due to the obscure nature of this problem, I haven't been very successful.

    Read the article

  • Lubuntu wireless issue with Broadcom chipset

    - by Variant Web Solutions
    I'm a web dev who just started a new venture buying wiped laptops in bulk and selling them at low cost to lower-income families that want a laptop that will simply perform and not become virus-ridden or require constant maintenance, so naturally I opted for a Linux distro, and after some research Lubuntu was my top pick. I'm not a stranger to Linux, as all of the servers for my web dev business run Linux, but I am new to L/Ubuntu, and I'm having issues with Wi-Fi (Broadcom chipsets) on the 20 or so Dells that I have right now: D800s, D810s, and E5400s. Not sure if you can point me in a solid direction; I've scoured (and implemented) the suggestions on Ask Ubuntu and am still coming up short. On one of the E5400s (though they all seem to suffer the same errors) I got the following: [code]dell-latitude-e5400@dell-Latitude-E5400:~$ iwconfig lo no wireless extensions. eth0 no wireless extensions. dell-latitude-e5400@dell-Latitude-E5400:~$ lspci 00:00.0 Host bridge: Intel Corporation Mobile 4 Series Chipset Memory Controller Hub (rev 07) 00:02.0 VGA compatible controller: Intel Corporation Mobile 4 Series Chipset Integrated Graphics Controller (rev 07) 00:02.1 Display controller: Intel Corporation Mobile 4 Series Chipset Integrated Graphics Controller (rev 07) 00:1a.0 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #4 (rev 02) 00:1a.1 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #5 (rev 02) 00:1a.2 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #6 (rev 02) 00:1a.7 USB controller: Intel Corporation 82801I (ICH9 Family) USB2 EHCI Controller #2 (rev 02) 00:1b.0 Audio device: Intel Corporation 82801I (ICH9 Family) HD Audio Controller (rev 02) 00:1c.0 PCI bridge: Intel Corporation 82801I (ICH9 Family) PCI Express Port 1 (rev 02) 00:1c.1 PCI bridge: Intel Corporation 82801I (ICH9 Family) PCI Express Port 2 (rev 02) 00:1c.4 PCI bridge: Intel Corporation 82801I (ICH9 Family) PCI Express Port 5 (rev 02) 00:1d.0 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #1 (rev 02) 00:1d.1 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #2 (rev 02) 00:1d.2 USB controller: Intel Corporation 82801I (ICH9 Family) USB UHCI Controller #3 (rev 02) 00:1d.7 USB controller: Intel Corporation 82801I (ICH9 Family) USB2 EHCI Controller #1 (rev 02) 00:1e.0 PCI bridge: Intel Corporation 82801 Mobile PCI Bridge (rev 92) 00:1f.0 ISA bridge: Intel Corporation ICH9M LPC Interface Controller (rev 02) 00:1f.2 IDE interface: Intel Corporation 82801IBM/IEM (ICH9M/ICH9M-E) 2 port SATA Controller [IDE mode] (rev 02) 00:1f.3 SMBus: Intel Corporation 82801I (ICH9 Family) SMBus Controller (rev 02) 00:1f.5 IDE interface: Intel Corporation 82801IBM/IEM (ICH9M/ICH9M-E) 2 port SATA Controller [IDE mode] (rev 02) 02:01.0 CardBus bridge: Ricoh Co Ltd RL5c476 II (rev ba) 02:01.1 FireWire (IEEE 1394): Ricoh Co Ltd R5C832 IEEE 1394 Controller (rev 04) 02:01.2 SD Host controller: Ricoh Co Ltd R5C822 SD/SDIO/MMC/MS/MSPro Host Adapter (rev 21) 09:00.0 Ethernet controller: Broadcom Corporation NetXtreme BCM5761e Gigabit Ethernet PCIe (rev 10) dell-latitude-e5400@dell-Latitude-E5400:~$ rfkill Usage: rfkill [options] command Options: --version show version (0.5-1ubuntu1 (Ubuntu)) Commands: help event list [IDENTIFIER] block IDENTIFIER unblock IDENTIFIER where IDENTIFIER is the index no.
of an rfkill switch or one of: all wifi wlan bluetooth uwb ultrawideband wimax wwan gps fm nfc dell-latitude-e5400@dell-Latitude-E5400:~$ [/code]

    Read the article

  • IBM "per core" comparisons for SPECjEnterprise2010

    - by jhenning
    I recently stumbled upon a blog entry from Roman Kharkovski (an IBM employee) comparing some SPECjEnterprise2010 results for IBM vs. Oracle. Mr. Kharkovski's blog claims that SPARC delivers half the transactions per core vs. POWER7. Prior to any argument, I should say that my predisposition is to like Mr. Kharkovski, because he says that his blog is intended to be factual; that the intent is to try to avoid marketing hype and FUD tactics; and mostly because he features a picture of himself wearing a bike helmet (me too). Therefore, in a spirit of technical argument rather than a FUD fight, there are a few areas in his comparison that should be discussed. Scaling is not free: for any benchmark, if a small system scores 13k using quantity R1 of some resource, and a big system scores 57k using quantity R2 of that resource, then, sure, it's tempting to divide: is 13k/R1 > 57k/R2? It is tempting, but not necessarily educational. The problem is that scaling is not free. Building big systems is harder than building small systems. Scoring 13k/R1 on a little system provides no guarantee whatsoever that one can sustain that ratio when attempting to handle more than 4 times as many users. Choosing the denominator radically changes the picture: when ratios are used, one can vastly manipulate appearances by the choice of denominator. In this case, lots of choices are available for the resource to be compared (R1 and R2 above). IBM chooses to put cores in the denominator, and Mr. Kharkovski provides some reasons for that choice in his blog entry. And yet, it should be noted that the very concept of a core is: arbitrary (not necessarily comparable across vendors); fluid (modern chips shift chip resources in response to load); and invisible (unless you have a microscope, you can't see it). By contrast, one can actually see processor chips with the naked eye, and they are a bit easier to count. If we put chips in the denominator instead of cores, we get 13161.07 EjOPS / 4 chips = 3290 EjOPS per chip for IBM vs. 57422.17 EjOPS / 16 chips = 3588 EjOPS per chip for Oracle. The choice of denominator makes all the difference in the appearance. Speaking for myself, dividing by chips just seems to make more sense, because: I can see chips and count them; I can accurately compare the number of chips in my system to the count in some other vendor's system; and the probability of being able to continue to accurately count them over the next 10 years of microprocessor development seems higher than the probability of being able to accurately and comparably count "cores". SPEC fair use requirements: speaking as an individual, not speaking for SPEC and not speaking for my employer, I wonder whether Mr. Kharkovski's blog article, taken as a whole, meets the requirements of the SPEC Fair Use rule (www.spec.org/fairuse.html, section I.D.2). For example, Mr. Kharkovski's footnote (1) begins: "Results from http://www.spec.org as of 04/04/2013 Oracle SUN SPARC T5-8 449 EjOPS/core SPECjEnterprise2010 (Oracle's WLS best SPECjEnterprise2010 EjOPS/core result on SPARC). IBM Power730 823 EjOPS/core (World Record SPECjEnterprise2010 EjOPS/core result)". The questionable tactic, from a fair use point of view, is that there is no such metric at the designated location. At www.spec.org, you can find the SPEC metric 57422.17 SPECjEnterprise2010 EjOPS for Oracle, and you can also find the SPEC metric 13161.07 SPECjEnterprise2010 EjOPS for IBM. Despite the implication of the footnote, you will not find any mention of 449 nor anything that says 823. SPEC says that you can, under its fair use rule, derive your own values; but it emphasizes: "The context must not give the appearance that SPEC has created or endorsed the derived value." Substantiation and transparency: although SPEC disclaims responsibility for non-SPEC information (section I.E), it says that non-SPEC data and methods should be accurate, should be explained, and should be substantiated. Unfortunately, it is difficult or impossible for the reader to independently verify the pricing: were like units compared to like (e.g. list price to list price)? Were all components (hw, sw, support) included? Were all fees included? Note that when tpc.org shows IBM pricing, there are often items such as "PROCESSOR ACTIVATION" and "MEMORY ACTIVATION". Without the transparency of a detailed breakdown, the pricing claims are questionable. T5 claim for "fastest processor": Mr. Kharkovski several times questions Oracle's claim for fastest processor, writing "You see, when you publish industry benchmarks, people may actually compare your results to other vendor's results." Well, as we performance people always say, "it depends". If you believe in performance-per-core as the primary way of looking at the world, then yes, the POWER7+ is impressive, spending its chip resources to support up to 32 threads (8 cores x 4 threads). Or, it just might be useful to consider performance-per-chip. Each SPARC T5 chip allows 128 hardware threads to be simultaneously executing (16 cores x 8 threads). The industry-standard benchmark that focuses specifically on processor chip performance is SPEC CPU2006. For this very well known and popular benchmark, SPARC T5 provides better performance than both POWER7 and POWER7+, for 1 chip vs. 1 chip, for 8 chips vs. 8 chips, for integer (SPECint_rate2006) and floating point (SPECfp_rate2006), for Peak tuning and for Base tuning. For example, at the 8-chip level, integer throughput (SPECint_rate2006) is 3750 for SPARC vs. 2170 for POWER7+. You can find the details at the March 2013 BestPerf CPU2006 page. SPEC is a trademark of the Standard Performance Evaluation Corporation, www.spec.org. The two specific results quoted for SPECjEnterprise2010 are posted at the URLs linked from the discussion. Results for SPEC CPU2006 were verified at spec.org on 1 July 2013, and can be rechecked here.
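
    The denominator arithmetic above is easy to check directly. The short sketch below (Python, written for this discussion rather than taken from SPEC or either vendor) recomputes both ratios from the figures quoted in this post: the published EjOPS totals, the chip counts used above, and the per-core numbers from the footnote.

    ```python
    # Recompute the per-chip and per-core ratios from the numbers quoted above.
    # Inputs: the published EjOPS totals, the chip counts used in this post, and
    # the EjOPS/core figures from Mr. Kharkovski's footnote.
    results = {
        "IBM POWER7+":     {"ejops": 13161.07, "chips": 4,  "per_core": 823},
        "Oracle SPARC T5": {"ejops": 57422.17, "chips": 16, "per_core": 449},
    }

    for system, r in results.items():
        cores = round(r["ejops"] / r["per_core"])  # back out the core count
        print(f"{system:16s} {r['ejops'] / r['chips']:8,.0f} EjOPS/chip "
              f"{r['ejops'] / cores:6,.0f} EjOPS/core ({r['chips']} chips, {cores} cores)")

    # Per core, IBM leads; per chip, Oracle leads. Same data, opposite ranking:
    # the choice of denominator does all the work.
    ```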

    Read the article

  • Drawing custom graphics on the iPhone: CALayer vs. CGContext

    - by Henry Cooke
    Hi all, I have an application in which I'm doing some custom drawing: a bunch of lines on a gradient background (in the original screenshot the text is just UILabels, so ignore it). At the moment, that's all done by starting a new CGContext, drawing stuff into it with CGContextDrawLinearGradient and CGContextStrokePath, then finally saving the resulting image with UIGraphicsGetImageFromCurrentImageContext. The positioning info is calculated while I'm laying out those labels, so it'd be a PITA (and a duplication of effort) to calculate it all over again when the containing UIView is drawn with drawRect, so I'm drawing it ahead of time into a UIImage. All works fine, so far so good. However, I have a sneaking suspicion that it may be more efficient to use CALayers to do this drawing. My (cursory) understanding of the difference between the two approaches is that a CALayer is more like a bunch of instructions to draw stuff, and so takes up less memory until it's actually drawn onscreen, whereas drawing everything into a UIImage ahead of time means that you've got a sodding great bitmap kicking around in memory all the time, whether it's drawn or not. Is that a correct understanding? What is generally considered to be the best way of drawing custom images on the iPhone?

    Read the article

  • CATiledLayer: Determining levelsOfDetail when in drawLayer

    - by Russell Quinn
    I have a CATiledLayer inside a UIScrollView and all is working fine. Now I want to add support for showing different tiles for three levels of zooming. I have set levelsOfDetail to 3 and my tile size is 300 x 300. This means I need to provide three sets of tiles (I'm supplying PNGs) to cover: 300 x 300, 600 x 600 and 1200 x 1200. My problem is that inside "(void)drawLayer:(CALayer *)layer inContext:(CGContextRef)ctx" I cannot work out which levelOfDetail is currently being drawn. I can retrieve the bounds currently required by using CGContextGetClipBoundingBox and usually this requests a rect for one of the above sizes, but at layer edges the tiles are usually smaller and therefore this isn't a good method. Basically, if I have set levelsOfDetail to 3, how do I find out if drawLayer is requesting level 1, 2 or 3 when it's called? Thanks, Russell.

    Read the article
