Search Results

Search found 7447 results on 298 pages for 'estrategia hardware'.

Page 14/298

  • "Dans 5 ans, softwares et hardware seront gratuits", prédit la Fondation Linux

    "Dans 5 ans, softwares et hardware seront gratuits", prédit la Fondation Linux De nouvelles informations sur l'évolution économique de Linux ont été annoncées. Les dernières tendances macro-économiques auraient joué en sa faveur, en allant dans le sens de l'open source. Linux est en effet gratuit, et, de plus, moins coûteux à faire tourner par rapport aux plateformes propriétaires ; que ce soit pour les particuliers ou pour les fabriquants d'ordinateurs. Et même, Google serait-elle la compagnie qu'elle est aujourd'hui si elle n'avait utilisé que des technologies propriétaires comme Microsoft.NET ? Chez Linux, on répond que c'est peu probable. Les fournisseurs de services, ou de cloud, doivent...

    Read the article

  • New Oracle VM Hardware Certifications

    - by Chris Kawalek
    We've received inquiries from the community on certification of Oracle VM 3.0 on HP ProLiant systems. We're pleased to update that we've recently completed certification of the HP ProLiant systems for Oracle VM 3.0. The newly certified systems are the ProLiant DL980 G7, BL680c G7, BL465c G7, BL460c G7, and DL380 G7 (all Hewlett Packard, Oracle VM 3.0, x86_64). See this Oracle VM Certified Hardware page for more details. For more information, please go to the Oracle Virtualization web page, or follow us on Twitter, Facebook, YouTube, or the Newsletter.

    Read the article

  • Ubuntu 12.10 install freezes at configuring hardware

    - by Max Keener
    I'm installing Ubuntu 12.10 (64-bit) from a bootable USB stick. At first I had trouble with a black screen after selecting 'install ubuntu'; I added nomodeset and xforcevesa to the boot options to fix that. Now the installer hangs at 'Configuring Hardware', specifically at: ubuntu ubiquity: update-initramfs: Generating /boot/initrd.img-3.5.0-17-generic. Specs: Asus UX32A DB51, Intel Core i5 3317U 1.7 GHz, 4 GB DDR3 RAM, Intel HD 4000 graphics, 500 GB hard drive with a 25 GB SanDisk SSD. I'm trying to install Ubuntu by itself on the SSD. I made custom partitions (100 MB EFI boot partition, 4 GB swap space, 20 GB ext4 mounted on '/'). I've tried re-downloading the Ubuntu ISO and creating a new boot image on my flash drive, and it results in the same problem. Thanks in advance for the help!

    Read the article

  • 11.04 Wireless is disabled by hardware switch on HP Compaq nc6220

    - by user75711
    Here is some backstory if it is helpful: I had Windows Vista on my computer and wanted to use Wubi to install Ubuntu. Everything was fine until I booted into the OS, which said my wireless was disabled by a hardware switch. I booted back into Vista and it was disabled there too. I have since gotten rid of Vista and installed 11.04 instead. It still has this error; as far as I know there is no switch anywhere, and the Fn keys do nothing. My lshw output is: *-network DISABLED description: Wireless Interface product: PRO/Wireless 2915ABG [Calexico2] Network Connection vendor: Intel Corporation physical id: 4 bus info: pci@0000:02:04.0 logical name: eth1 version: 05 serial: 00:15:00:0c:5d:62 width: 32 bits clock: 33MHz capabilities: pm bus_master cap_list ethernet physical wireless configuration: broadcast=yes driver=ipw2200 driverversion=1.2.2kmprq firmware=ABG:9.0.5.27 (Dec 12 2007) latency=64 link=no maxlatency=24 mingnt=3 multicast=yes wireless=IEEE 802.11abg resources: irq:21 memory:d0400000-d0400fff Thanks

    Read the article

  • New Marketing Kits for Hardware

    - by A&C Redaktion
    The Oracle marketing kits are a popular sales-enablement tool. Continuously expanded, each kit contains draft copy for an email campaign, a landing page, and a telemarketing script. Brand-new kits are now available, including in German, for the following hardware solutions:
    Server & Storage: Improve Database Capacity Management with Oracle Storage and Hybrid Columnar Compression
    Server & Storage: Accelerating Database Test & Development with Sun ZFS Storage Appliance
    Server & Storage: Upgrade SAN Storage to Oracle Pillar Axiom
    Server & Storage: SPARC Refresh with Oracle Solaris Operating System
    Server & Storage: SPARC Server Refresh: The Next Level of Datacenter Performance with Oracle's New SPARC Servers
    Server & Storage: Oracle Server Virtualization
    Server & Storage: Oracle Desktop Virtualization

    Read the article

  • How to (hardware) RAID 10 on Ubuntu 10.04 LTS with 4 drives and motherboard with RAID controller

    - by lollercoaster
    I have 4 500GB hard drives. I set up a RAID 10 in the BIOS, much like shown here: http://www.supermicro.com/manuals/other/RAID_SATA_ESB2.pdf Then I followed these instructions: http://www.unrest.ca/Knowledge-Base/configuring-mdadm-raid10-for-ubuntu-910 Basically I cannot get it to work. I follow the instructions until I get to the "partition" section of the install, creating 4 RAID 1 arrays (2 partitions on each drive, one primary and one for swap space), then combining them to make a RAID 10. Unfortunately it still shows 2 partitions, one of 500 GB and another of 36 GB, for some reason. Any ideas? Good step-by-step instructions would be best; I've been googling for hours and haven't found anything...
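    (Sketch for anyone hitting the same wall: instead of nesting four RAID 1 arrays, mdadm can also build a native RAID 10 in one step. The commands below are a sketch only; the partition names /dev/sd[a-d]2 are placeholders for your own data partitions, and the BIOS "RAID 10" should normally be disabled first, since that fakeraid layer and mdadm fighting over the same disks is a common cause of odd partition layouts.)

        mdadm --create /dev/md0 --level=10 --raid-devices=4 \
            /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2    # one native 4-device RAID 10
        cat /proc/mdstat                                # watch the initial resync
        mkfs.ext4 /dev/md0                              # then format and mount as usual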

    Read the article

  • Hardware recommendation for Solaris 10 + ZFS data warehouse server.

    - by Justin
    The server would run 2 drives as a mirrored root pool (for the OS and the master database segment), and individual zpools on each remaining drive (loss of that data is acceptable). Initial requirements: 2x Xeon 7540 (6-core), 32 GB of memory, 12 drives. A 4U/2U server (6/8-core, 2/4-socket CPU support) with internal disks or an external JBOD. Capacity to house a disk per CPU core is important.

    Read the article

  • Hardware choice: ASUS Eee Pad Slider or ASUS Eee Pad Transformer for web development?

    - by JamesM
    I was just wondering which of the following tablets seems better to get. I am a web developer, always using Unix/Linux/BSD, and I want a tablet that has a keyboard. http://gdgt.com/asus/eee/pad/slider/ http://gdgt.com/asus/eee/pad/transformer/ http://www.tweaktown.com/news/18311/asus_eee_pad_slider_transformer_tablets_with_physical_keyboard/index.html I know both are similar, but I'm not sure which one I should get. The Slider seems very nice, but its keyboard is fixed to the tablet, unlike the Transformer's. P.S.: I'm going to use one of the above to showcase my programming work at school, as well as a cheaper notebook than the $300 locked-down Windows 7 notebooks. By locked down, I mean we pay $300 for them and after 3 years we can do whatever with them; they are Lenovo ThinkPad Mini-10s, what they have installed is all you get, and they don't let us install any other OS on them. Taking into account the question on both of those links, I think the Transformer would be better, but that is only considering that it is both a tablet and a notebook. What I really care about is power; which one is more powerful? It will be running kFreeBSD-Debian-Squeeze with a Linux Mint theme and several other packages. Though I'm not going to run Windows (which I feel is bloated), I still want power. To help keep my computer from slowing down, I will have a cron.d/hourly script cleaning out the cache memory.

    Read the article

  • What hardware makes a good MongoDB server? Where to get it?

    - by João Pinto Jerónimo
    Suppose you're on dell.com right now and you're buying a server to run your MongoDB database for your small startup. You will have to handle literally tens of thousands of writes and reads per minute (but small objects). Would you go for 2 processors? Invest more in RAM? I've heard (correct me if I'm wrong) that MongoDB keeps as much as it can in RAM and then flushes everything to disk; in that case I should invest in a CPU with a large L2 cache, probably 40 GB of RAM, and a solid-state drive... right? Would I be better off with one high-end server (~$11,309, 2 expensive processors, 96 GB of RAM) or 2x (~$6,419, 2 expensive processors, 12 GB of RAM) servers? Is Dell OK, or do you have better suggestions? (I'm outside the US, in Portugal.)

    Read the article

  • How do I know if DirectX is using hardware acceleration or software rendering?

    - by JohnIdol
    Is there any DirectX diagnostics tool that will let me check whether graphics acceleration from my GPU is actually working, or whether software rendering is kicking in instead? I ask because if I go to Properties (right-click on the desktop), then Settings, I get an error saying my drivers are not working for my Intel embedded GPU (Intel Embedded Graphics Driver, IEGD) and the system is defaulting to standard VGA drivers. I am on Windows XP Professional.

    Read the article

  • Squeezing hardware

    - by [email protected]
    It's very common for high availability to mean duplicate hardware, so costs grow. Nowadays, CIOs and DBAs face the challenge of reducing the money spent while increasing performance and availability. Since Grid Infrastructure 11gR2, there is a new feature that helps them meet this challenge: Server Pools. In Grid Infrastructure 11gR2 you can define server pools across the cluster, setting the minimum number of servers, the maximum, and how important the pool is.
    For example, consider that "Velasco, Boixeda & co" has 3 apps on a 6-server cluster. The first one is the main core-business app, the second is mid-range, and the third is a database that is not very important. We define the following resource requirements for the expected workload:
    1. The main app requires 2 servers
    2. The mid-range app requires 1 server
    3. The third app is not required in case of disaster
    Then we define 3 server pools across the cluster:
    1. Main pool: min two servers, max three servers, importance four
    2. Mid pool: min one server, max two servers, importance two
    3. Test pool: min zero servers, max one server, importance one
    So the initial configuration is: the main pool has three servers, the mid pool has two servers, and the test pool has one server. If any server fails, the following algorithm is applied:
    1. A server is taken from the pool of least importance
    2. If server pools are of the same importance, then the server pool that has more than its defined minimum of servers is chosen
    Hope it helps
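    (For readers wanting to try this: the pools above can be created with srvctl in 11gR2. The commands below are a sketch from memory; the pool names and values simply mirror the example, so verify the exact syntax with "srvctl add srvpool -h" on your own cluster.)

        srvctl add srvpool -g mainpool -l 2 -u 3 -i 4   # core app: min 2, max 3, importance 4
        srvctl add srvpool -g midpool  -l 1 -u 2 -i 2   # mid-range app
        srvctl add srvpool -g testpool -l 0 -u 1 -i 1   # low-priority database
        srvctl config srvpool                           # review the resulting pool layout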

    Read the article

  • OpenGL: Attempt to allocate a texture to big for the current hardware

    - by AnonymousMan
    I'm getting the following error:
        java.io.IOException: Attempt to allocate a texture to big for the current hardware
            at org.newdawn.slick.opengl.InternalTextureLoader.getTexture(InternalTextureLoader.java:320)
            at org.newdawn.slick.opengl.InternalTextureLoader.getTexture(InternalTextureLoader.java:254)
            at org.newdawn.slick.opengl.InternalTextureLoader.getTexture(InternalTextureLoader.java:200)
            at org.newdawn.slick.opengl.TextureLoader.getTexture(TextureLoader.java:64)
            at org.newdawn.slick.opengl.TextureLoader.getTexture(TextureLoader.java:24)
    The image I'm trying to use is 128x128. From
        System.out.println(GL11.glGetInteger(GL11.GL_MAX_TEXTURE_SIZE));
    I get: 32. 32?! My graphics card is an AMD Radeon HD 7970M with 2048 MB of GDDR5 RAM; I can run all the latest games at 1080p and 60 fps with no problem, and those textures sure as hell don't look like they are 32x32 pixels to me! How can I fix this?
    Edit: Here's the chaotic code I use to init OpenGL:
        Display.setDisplayMode(new DisplayMode(500,500));
        Display.create();
        if (!GLContext.getCapabilities().OpenGL11) {
            throw new Exception("OpenGL 1.1 not supported.");
        }
        Display.setTitle("Game");
        glMatrixMode(GL_PROJECTION);
        glLoadIdentity();
        GLU.gluPerspective(45, 1, 0.1f, 5000);
        Mouse.setGrabbed(true);
        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();
        glEnable(GL_TEXTURE_2D);
        glClearColor(0, 0, 0, 0);
        glEnable(GL_DEPTH_TEST);
        glDepthFunc(GL_LEQUAL);
        glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_NICEST);
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
        glEnable(GL_BLEND);
        glEnable(GL_POINT_SMOOTH);
        glEnable(GL_LINE_SMOOTH);
        glEnable(GL_POLYGON_SMOOTH);
        glEnable(GL_POLYGON_OFFSET_FILL);
        glShadeModel(GL_SMOOTH);
    Display is an LWJGL thing; it creates the OpenGL context and the window. Anyway, I don't think there's anything in the init code that can help me, but you never know...

    Read the article

  • Touchpad hardware button disables keyboard too

    - by jjg
    I have an old but nice Samsung X50 running MM which has a key between the touchpad buttons that disables the touchpad. Very nice; no one likes to brush against the touchpad while typing. It seems to be a hardware feature: a BIOS-style window appears at the top left of the screen when you press it saying "touchpad off", and when you press it again it says "touchpad on", and so it is, but now the keyboard has no effect in X. I can type nothing, except to Meta-Ctrl-F1 to the console. After a reboot the problem persists, and the only way I have found to fix it is to blow away .gconf and replace it with a copy I made in happier times. Deleting/modifying .gconf/desktop/gnome/peripherals/touchpad/%gconf.xml does not fix the problem. There is no way to turn off the switch in the BIOS without losing the touchpad. I would prise the thing out with a screwdriver if I could, but it's a work machine. This button is the bane of my life, hanging over me like a sword of Damocles.

    Read the article

  • JOGL hardware based shadow mapping - computing the texture matrix

    - by axel22
    I am implementing hardware shadow mapping as described here. I've rendered the scene successfully from the light's POV and loaded the depth buffer of the scene into a texture. This texture has been loaded correctly; I check this by rendering a small thumbnail, as you can see in the screenshot below, upper left corner. The depth of the scene appears to be correct: objects further away are darker, and those closer to the light are lighter. However, I run into trouble while rendering the scene from the camera's point of view using the depth texture; the texture on the polygons in the scene is rendered in a weird, nondeterministic fashion, as shown in the screenshot. I believe I am making an error while computing the texture transformation matrix, but I am unsure where exactly. Since I have no matrix utilities in JOGL other than the gl[Load|Mult]Matrix procedures, I multiply the matrices using them, like this:
        void calcTextureMatrix() {
            glPushMatrix();
            glLoadIdentity();
            glLoadMatrixf(biasmatrix, 0);
            glMultMatrixf(lightprojmatrix, 0);
            glMultMatrixf(lightviewmatrix, 0);
            glGetFloatv(GL_MODELVIEW_MATRIX, shadowtexmatrix, 0);
            glPopMatrix();
        }
    I obtained these matrices by using the glOrtho and gluLookAt procedures:
        glLoadIdentity()
        val wdt = width / 45
        val hgt = height / 45
        glOrtho(wdt, -wdt, -hgt, hgt, -45.0, 45.0)
        glGetFloatv(GL_MODELVIEW_MATRIX, lightprojmatrix, 0)
        glLoadIdentity()
        glu.gluLookAt(
            xlook + lightpos._1, ylook + lightpos._2, lightpos._3,
            xlook, ylook, 0.0f,
            0.f, 0.f, 1.0f)
        glGetFloatv(GL_MODELVIEW_MATRIX, lightviewmatrix, 0)
    My bias matrix is:
        float[] biasmatrix = new float[16] {
            0.5f, 0.f,  0.f,  0.f,
            0.f,  0.5f, 0.f,  0.f,
            0.f,  0.f,  0.5f, 0.f,
            0.5f, 0.5f, 0.5f, 1.f
        }
    After applying the camera projection and view matrices, I do:
        glTexGeni(GL_S, GL_TEXTURE_GEN_MODE, GL_EYE_LINEAR)
        glTexGenfv(GL_S, GL_EYE_PLANE, shadowtexmatrix, 0)
        glEnable(GL_TEXTURE_GEN_S)
    for each component. Does anybody know why the texture is not being rendered correctly? Thank you.

    Read the article

  • How to handle wildly varying rendering hardware / getting baseline

    - by edA-qa mort-ora-y
    I've recently started with mobile programming (cross-platform, also targeting desktop) and am encountering wildly differing hardware performance, in particular with OpenGL and the GPU. I know I'll basically have to adjust my rendering code, but I'm uncertain how to detect performance and what reasonable default settings are. I notice that certain shader functions are basically free in a desktop implementation but can be unusable on a mobile device. The problem is I have no way of knowing which features will cause which performance issues on all the devices. So my first issue is that even if I allow configuring options, I'm uncertain which options I have to make configurable. I'm also wondering whether one just writes one very configurable pipeline, or whether I should have 2 distinct options (high/low). I'm also unsure of where to set the default. If I set it to the poorest performer, the graphics will be so minimal that any user with a modern device would dismiss the game. If I set it even at some moderate point, the low-end devices will basically become a slide show. I was thinking perhaps I'd just run some benchmarks when the user first installs and guess what works, but I've not seen a game do this before.
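    (One common compromise, sketched below: probe a few cheap GL limits after context creation, map them to a default preset, and let the user override it. This is illustrative LWJGL-style code in the spirit of other snippets on this page; the class name, thresholds, and tier names are assumptions rather than benchmarks, and a GL context must be current before any glGet* call.)

        import org.lwjgl.opengl.GL11;

        public final class QualityProbe {
            public enum Tier { LOW, MEDIUM, HIGH }

            // Guess a default quality tier from coarse GL limits (sketch only).
            public static Tier guessTier() {
                String renderer = GL11.glGetString(GL11.GL_RENDERER);          // e.g. "GeForce GTX 660" or "Adreno 320"
                int maxTexSize  = GL11.glGetInteger(GL11.GL_MAX_TEXTURE_SIZE); // crude proxy for GPU class

                if (renderer == null || renderer.toLowerCase().contains("software") || maxTexSize < 2048) {
                    return Tier.LOW;   // software rasterizer or very small limits
                }
                if (maxTexSize >= 8192) {
                    return Tier.HIGH;  // typically a desktop-class GPU
                }
                return Tier.MEDIUM;
            }
        }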

    Read the article

  • Wireless is disabled by hardware on Lenovo 3000 G430

    - by sudheer
    Sir, I have a problem with my Wi-Fi switch; please tell me the solution to my problem (Wi-Fi is disabled by hardware). The output of sudo lshw -C network is:
        [sudo] password for sudheer:
        *-network DISABLED description: Wireless interface product: BCM4312 802.11b/g LP-PHY vendor: Broadcom Corporation physical id: 0 bus info: pci@0000:06:00.0 logical name: eth2 version: 01 serial: 00:21:00:72:3a:93 width: 64 bits clock: 33MHz capabilities: pm msi pciexpress bus_master cap_list ethernet physical wireless configuration: broadcast=yes driver=wl0 driverversion=5.100.82.38 latency=0 multicast=yes wireless=IEEE 802.11bg resources: irq:19 memory:f4700000-f4703fff
        *-network description: Ethernet interface product: NetLink BCM5906M Fast Ethernet PCI Express vendor: Broadcom Corporation physical id: 0 bus info: pci@0000:07:00.0 logical name: eth0 version: 02 serial: 00:1e:68:ad:24:0b size: 100Mbit/s capacity: 100Mbit/s width: 64 bits clock: 33MHz capabilities: pm vpd msi pciexpress bus_master cap_list ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd autonegotiation configuration: autonegotiation=on broadcast=yes driver=tg3 driverversion=3.121 duplex=full firmware=sb v3.04 ip=172.16.52.79 latency=0 link=yes multicast=yes port=twisted pair speed=100Mbit/s resources: irq:47 memory:f4600000-f460ffff
    The output of iwconfig is:
        lo    no wireless extensions.
        eth2  IEEE 802.11  Access Point: Not-Associated  Link Quality:5  Signal level:0  Noise level:0  Rx invalid nwid:0  invalid crypt:0  invalid misc:0
        eth0  no wireless extensions.
    And scanning:
        sudheer@sudheer:~$ sudo iwlistscanning
        sudo: iwlistscanning: command not found
        sudheer@sudheer:~$ sudo iwlist scanning
        lo    Interface doesn't support scanning.
        eth2  Failed to read scan data : Invalid argument
        eth0  Interface doesn't support scanning.

    Read the article

  • Preseeded installation keeps asking for confirmation while creating RAID partitions on a certain hardware platform

    - by Marc Shennon
    I am aware of the partman-md/confirm_nooverwrite setting, which was the solution to most of these problems in the past. The thing is, the preseed file works for almost all hardware platforms I tested; only on one (Primergy MX130) does it keep asking for confirmation before writing the partition layout to the disks. All machines I tested are running with two SATA disks, nothing special. I'm not really sure what information is needed to investigate the cause of this behaviour, but I would of course be willing to provide more if someone has an idea. The relevant part of the preseed file is the following:
        d-i partman-auto/disk string /dev/sda /dev/sdb
        d-i partman-auto/method string raid
        d-i partman-md/confirm boolean true
        d-i partman-partitioning/confirm_write_new_label boolean true
        d-i partman-md/device_remove_md boolean true
        d-i partman/choose_partition select finish
        d-i partman-md/confirm_nooverwrite boolean true
        # Write the changes to disks?
        d-i partman/confirm boolean true
        d-i mdadm/boot_degraded boolean true
        # RECIPE
        # Next you need to specify the physical partitions that will be used.
        d-i partman-auto/expert_recipe string \
            multiraid :: \
                500 10000 1000000000 raid $lvmignore{ } \
                    $primary{ } \
                    method{ raid } \
                . \
                512 1000 786 raid $lvmignore{ } \
                    $primary{ } \
                    method{ raid } \
                . \
                8192 10240 10240 raid $lvmignore{ } \
                    method{ raid } \
                .
        # Parameters are:
        # <raidtype> <devcount> <sparecount> <fstype> <mountpoint> <devices> <sparedevices>
        d-i partman-auto-raid/recipe string \
            1 2 0 ext4 / /dev/sda1#/dev/sdb1 . \
            1 2 0 ext2 /boot /dev/sda2#/dev/sdb2 . \
            1 2 0 swap - /dev/sda5#/dev/sdb5 .

    Read the article

  • My wireless has suddenly become disabled by hardware switch; BIOS, rfkill, Fn+F8 do nothing

    - by cwwk
    I have a Toshiba L655D-S5145. There is no physical toggle for the wireless, although the F8 key is supposed to do the trick. It doesn't. The wireless had been working since October, and suddenly, nothing. rfkill reports that the wireless is hard blocked, but unblock wifi, unblock 0, and unblock all do nothing. I inserted a USB dongle, and that is also "disabled by hardware switch", although rfkill reports that it is neither hard nor soft blocked. My onboard wireless is 02:00.0 Network controller: Realtek Semiconductor Co., Ltd. RTL8188CE 802.11b/g/n WiFi Adapter (rev 01) according to lspci. lsmod reports that these drivers are loaded: rtl8192c_common 75767 1 rtl8192ce, rtlwifi 110972 1 rtl8192ce. I resorted to reformatting the drive and reinstalling, but that also did not work. In the BIOS I restored system defaults, as there is no specific entry for Wi-Fi. Before reinstalling a pure Xubuntu I went into Unity and saw that airplane mode was on; despite being toggled off, it returned to the on position. I'm not sure where to find airplane mode in Xubuntu. What else can I do? I need my wireless.

    Read the article

  • Shader compile log depending on hardware

    - by dreta
    I'm done with the core of my graphics engine and I'm testing it on every platform I can get my hands on. What I noticed is that different drivers return different shader and program compile log content. For example, on my friend's laptop, if you successfully compile a shader then the log is simply empty. However, on my PC I get some useful information along with it. So if I compile a vertex shader, I'll get: "Vertex shader was successfully compiled to run on hardware." That isn't that impressive, but look at what happens when I link a program. On my friend's computer the log is empty, since the program links. However, on my own computer I get: "Vertex shader(s) linked, fragment shader(s) linked." Which is awesome, because I'm attaching a geometry shader handle of 0 (I have a geometry shader file with trash in it, so it doesn't compile and the handle is set to 0), and the compiler just tells me which shaders linked. Now it got me thinking: if I were going to buy a graphics card, is there a way for me to find out whether I'll get this "extended" compile information? Maybe it's vendor specific? I don't expect an answer, TBH; this seems a bit obscure, but maybe somebody has experience with this and could post it.
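    (For reference, the log in question is fetched the same way on every platform; what differs is only what the driver chooses to put in it. A minimal LWJGL-style sketch, assuming LWJGL 2's GL20 bindings and an illustrative class name, that always asks for the log whether or not compilation succeeded:)

        import org.lwjgl.opengl.GL11;
        import org.lwjgl.opengl.GL20;

        public final class ShaderLog {
            // Compile a shader and print whatever info log the driver provides.
            // Some drivers return an empty string on success, others a full sentence.
            public static int compileWithLog(int type, CharSequence source) {
                int shader = GL20.glCreateShader(type);
                GL20.glShaderSource(shader, source);
                GL20.glCompileShader(shader);

                boolean ok  = GL20.glGetShaderi(shader, GL20.GL_COMPILE_STATUS) == GL11.GL_TRUE;
                String  log = GL20.glGetShaderInfoLog(shader, 8192);  // content is vendor-specific
                System.out.println("compile " + (ok ? "ok" : "FAILED") + ": " + log);
                return shader;
            }
        }

    The program-link log discussed above is read the same way with GL20.glGetProgramInfoLog after linking.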

    Read the article

  • Is your Xcode4 stable?

    - by Eonil
    I have upgraded to Xcode 4, and I'm experiencing an unbelievable situation. Xcode 4 crashes every 5 minutes. Incredibly slow. Almost impossible to use. Maybe the problem is my hardware configuration: I'm using a MacBook Air (3rd generation) with 2 GB of RAM and an SSD. It was just fine with Xcode 3, but now it consumes all of my memory and crashes far too often. Is your Xcode 4 stable? If so, please let me know what your hardware configuration is. I want to know whether this problem is caused by my hardware configuration, so I can decide whether to buy a new Mac.

    Read the article

  • iscsitarget suddenly broken after upgrade of the 12.04 Hardware Stack

    - by RapidWebs
    After an upgrade to the latest hardware stack on Ubuntu 12.04, my iSCSI service is no longer operational. The error from the service is: FATAL: Module iscsi_trgt not found. I have learned that I might need to reinstall the package iscsitarget-dkms; this package builds a kernel module from source during installation. That build process reports an error, and has now also broken my package manager. Here is the relevant output:
        Building module:
        cleaning build area....
        make KERNELRELEASE=3.13.0-34-generic -C /lib/modules/3.13.0-34-generic/build M=/var/lib/dkms/iscsitarget/1.4.20.2/build........(bad exit status: 2)
        Error! Bad return status for module build on kernel: 3.13.0-34-generic (i686)
        Consult /var/lib/dkms/iscsitarget/1.4.20.2/build/make.log for more information.
        Errors were encountered while processing: iscsitarget
        E: Sub-process /usr/bin/dpkg returned an error code (1)
    And this is the information provided by make.log:
        or iscsitarget-1.4.20.2 for kernel 3.13.0-34-generic (i686)
        Fri Aug 15 22:07:15 EDT 2014
        make: Entering directory /usr/src/linux-headers-3.13.0-34-generic
          LD      /var/lib/dkms/iscsitarget/1.4.20.2/build/built-in.o
          LD      /var/lib/dkms/iscsitarget/1.4.20.2/build/kernel/built-in.o
          CC [M]  /var/lib/dkms/iscsitarget/1.4.20.2/build/kernel/tio.o
          CC [M]  /var/lib/dkms/iscsitarget/1.4.20.2/build/kernel/iscsi.o
          CC [M]  /var/lib/dkms/iscsitarget/1.4.20.2/build/kernel/nthread.o
          CC [M]  /var/lib/dkms/iscsitarget/1.4.20.2/build/kernel/wthread.o
        /var/lib/dkms/iscsitarget/1.4.20.2/build/kernel/wthread.c: In function 'worker_thread':
        /var/lib/dkms/iscsitarget/1.4.20.2/build/kernel/wthread.c:73:28: error: void value not ignored as it ought to be
        /var/lib/dkms/iscsitarget/1.4.20.2/build/kernel/wthread.c:74:3: error: implicit declaration of function 'get_io_context' [-Werror=implicit-function-declaration]
        /var/lib/dkms/iscsitarget/1.4.20.2/build/kernel/wthread.c:74:21: warning: assignment makes pointer from integer without a cast [enabled by default]
        cc1: some warnings being treated as errors
        make[2]: *** [/var/lib/dkms/iscsitarget/1.4.20.2/build/kernel/wthread.o] Error 1
        make[1]: *** [/var/lib/dkms/iscsitarget/1.4.20.2/build/kernel] Error 2
        make: *** [module/var/lib/dkms/iscsitarget/1.4.20.2/build] Error 2
        make: Leaving directory `/usr/src/linux-headers-3.13.0-34-generic'
    I am at a loss on how to resolve this issue. Any help would be appreciated!

    Read the article

  • Manic Monday - More OpenWorld Solaris Sessions: Developers, Cloud, Customer Insights, Hardware Optimization

    - by Larry Wake
    We're overflowing with Monday sessions; literally more than one person can take in. Learn more about what's new in Oracle Solaris Studio, hear about the latest x86 and SPARC hardware optimizations, get some insights on cloud deployment strategies, and find out from your peers what they're doing with Oracle Solaris. If you're an OpenWorld attendee, go to Schedule Builder to guarantee your space in any session or lab. See yesterday's blog post and the "Focus on Oracle Solaris" guide for even more sessions.
    Monday, October 1st:
    10:45 AM - Maximizing Your SPARC T4 Oracle Solaris Application Performance (CON6382, Marriott Marquis - Golden Gate C3). Hear how customers and commercial software partners have reached peak performance on SPARC T4 servers and engineered systems with Oracle Solaris Studio and its latest tools for analyzing, reporting, and improving runtime performance: autoparallelizing, high-performance compilers; Performance Analyzer (used to find performance hotspots); Thread Analyzer (to expose data races and deadlocks); Code Analyzer (used to discover latent memory corruption issues).
    10:45 AM - Cloud Formation: Implementing IaaS in Practice with Oracle Solaris (CON8787, Moscone South 302). Decisions, decisions: at the same time, we've got a session that covers why Oracle Solaris is the ideal OS for public or private clouds, IaaS or PaaS, with built-in features for elastic infrastructure, unrivaled security, superfast installation and deployment, nonstop availability, and crystal-clear observability. This session will include a customer study on how Oracle Solaris is used in the cloud today to implement the Oracle stack.
    12:15 PM - Customer Insight: Oracle Solaris on Oracle Exadata, Oracle Exalogic, and SPARC SuperCluster (CON8760, Moscone South 270). Hear from customers what benefits they have realized from using the Oracle stack on Oracle Exadata and Oracle's SPARC SuperCluster, and from using Oracle Solaris on those engineered systems, taking advantage of built-in lightweight OS virtualization (Zones), enterprise reliability and scale, and other key features.
    1:45 PM - Case Study: Mobile Tornado Uses Oracle Technology for Better RAS and TCO (CON4281, Moscone West 2005). Mobile Tornado develops and markets instant communication platforms, replacing traditional radio networks with cellular networks. Its critical concern is uptime. Find out how they've used Oracle Solaris, Netra SPARC T4, and Oracle Solaris Cluster, including Oracle Solaris ZFS and Zones, for their Oracle Database deployments to improve reliability and drive down cost.
    3:15 PM - Technical Panel: Developing High Performance Applications on Oracle Solaris (CON7196, Marriott Marquis - Golden Gate C2). Engineers from the Oracle Solaris, Oracle Database, and Oracle Tuxedo development teams, and Oracle ISV Engineering, discuss how they develop high-performance enterprise applications that take advantage of Oracle's SPARC and x86 servers, with Oracle Solaris Studio and new Oracle Solaris 11 features. Topics will include developer tools, parallel frameworks, best practices, and methodologies, as well as insights and case studies on parallelizing and optimizing application performance on Oracle Solaris. Bring your best questions!
    3:15 PM - x86 Power Management with Oracle Solaris: Current State, Opportunities, and Future (CON6271, Moscone West 2012). Another option for this time slot: learn how Intel Xeon and Oracle Solaris work together to reduce server power consumption. This presentation addresses some of the recent power management improvements in Oracle Solaris, opportunities to further improve energy efficiency, and some future directions for Oracle Solaris power management.

    Read the article

  • RPi and Java Embedded GPIO: It all begins with hardware

    - by hinkmond
    So, you want to connect low-level peripherals (like blinky-blinky LEDs) to your Raspberry Pi and use Java Embedded technology to program it, do you? You sick foolish masochist. No, just kidding! That's awesome! You've come to the right place. I'll step you through it. And, as with many embedded projects, it all begins with hardware. So, the first thing to do is to get acquainted with the GPIO header on your RPi board. A "header" just means a thingy with a bunch of pins sticking up from it where you can connect wires. See the red box outline in the photo. Now, there are many ways to connect to that header outlined by the red box in the photo (which the RPi folks call the P1 header). One way is to use a breakout kit like the one at Adafruit. But we'll just use jumper wires in this example. So, to connect jumper wires to the header you need a map of where to connect which wire. That's why you need to study the pinout in the photo. That's your map for connecting wires. But, as with many things in life, it's not all that simple. The RPi folks have made things a little tricky. There are two revisions of the P1 header pinout: one for older boards (RPi boards made before Sep 2012), which is called Revision 1, and one for those fancy 512MB boards that were shipped after Sep 2012, which is called Revision 2. So, first make sure which board you have: either you have the Model A or B with 128MB or 256MB built before Sep 2012 and you need to look at the pinout for Rev. 1, or you have the Model B with 512MB and need to look at Rev. 2. That's all you need for now. More to come... Hinkmond
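    (A peek ahead at the software side while waiting for the next installment: on Linux the GPIO pins were exposed through the sysfs interface under /sys/class/gpio, which plain Java file I/O can drive. The sketch below is illustrative only; it assumes that interface is present, that the process runs as root, and that an LED with a series resistor is wired to BCM GPIO 17, a pin picked purely as an example from the pinout above.)

        import java.io.FileWriter;
        import java.io.IOException;

        public class Blink {
            // Write a single value to a sysfs GPIO control file.
            static void write(String path, String value) throws IOException {
                try (FileWriter w = new FileWriter(path)) { w.write(value); }
            }

            public static void main(String[] args) throws Exception {
                write("/sys/class/gpio/export", "17");             // make gpio17 visible to userspace
                write("/sys/class/gpio/gpio17/direction", "out");  // configure the pin as an output
                for (int i = 0; i < 10; i++) {
                    write("/sys/class/gpio/gpio17/value", "1");    // LED on
                    Thread.sleep(500);
                    write("/sys/class/gpio/gpio17/value", "0");    // LED off
                    Thread.sleep(500);
                }
                write("/sys/class/gpio/unexport", "17");           // release the pin
            }
        }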

    Read the article
