Search Results

Search found 7447 results on 298 pages for 'estrategia hardware'.

  • Creating a retro-style palette swapping effect in OpenGL

    - by Zack The Human
    I'm working on a Megaman-like game where I need to change the color of certain pixels at runtime. For reference: in Megaman, when you change your selected weapon, the main character's palette changes to reflect the selected weapon. Not all of the sprite's colors change, only certain ones. This kind of effect was common and quite easy to do on the NES, since the programmer had access to the palette and the logical mapping between pixels and palette indices. On modern hardware, though, this is a bit more challenging because the concept of palettes is not the same: all of my textures are 32-bit and do not use palettes. There are two ways I know of to achieve the effect I want, but I'm curious if there are better ways to achieve it easily:

    1. Use a shader and write some GLSL to perform the "palette swapping" behavior.
    2. If shaders are not available (say, because the graphics card doesn't support them), clone the "original" textures and generate different versions with the color changes pre-applied.

    Ideally I would like to use a shader, since it seems straightforward and requires little additional work compared with the duplicated-texture method. I worry that duplicating textures just to change a color in them is wasting VRAM -- should I not worry about that?
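
    A minimal sketch of the shader route (the names here are illustrative, not from the post): store a palette index per texel in a single-channel texture, keep the actual colors in a small palette-strip texture, and let the fragment shader do the lookup. Swapping weapons then just means binding a different palette strip. The GLSL is held in a Java string constant so it can be fed to glShaderSource through any binding (LWJGL, JOGL, etc.):

        // Hypothetical palette-swap fragment shader (GLSL 1.20),
        // kept as a Java constant for use with any OpenGL binding.
        public final class PaletteSwapShader {
            public static final String FRAGMENT_SOURCE =
                "#version 120\n" +
                "uniform sampler2D indexTex;   // sprite: palette index in the red channel\n" +
                "uniform sampler2D paletteTex; // 256x1 RGBA palette strip\n" +
                "varying vec2 uv;\n" +
                "void main() {\n" +
                "    float index = texture2D(indexTex, uv).r; // 0.0 .. 1.0\n" +
                "    gl_FragColor = texture2D(paletteTex, vec2(index, 0.5));\n" +
                "}\n";

            private PaletteSwapShader() {}
        }

    With this arrangement only one copy of each sprite lives in VRAM; each alternate color scheme costs a single 256x1 palette texture.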

    Read the article

  • hProduct microformats not working in Google

    - by silverfox
    I'm trying to work with hProduct and was testing it with Google's rich snippets testing tool for microformats (http://www.google.com/webmasters/tools/richsnippets), but it is not recognizing the data:

    - it does not recognize the photo
    - it does not recognize the price
    - it does not recognize the category
    - it only recognizes the rating

    HTML:

        <div class="hproduct">
          <span class="brand">ACME</span>
          <span class="fn">Executive Anvil</span>
          <img class="photo" src="http://microformats.org/wiki/skins/Microformats/images/logo.gif" />
          <span class="review hreview-aggregate">
            Average rating: <span class="rating">4.4</span>, based on
            <span class="count">89</span> reviews
          </span>
          Regular price: $179.99
          Sale: $<span class="price">119.99</span> (Sale ends 5 November!)
          <span class="description">Sleeker than ACME's Classic Anvil, the Executive Anvil
          is perfect for the business traveler looking for something to drop from a height.</span>
          Category:
          <span class="category">
            <span class="value-title" title="Hardware > Tools > Anvils">Anvils</span>
          </span>
        </div>

    The tool still shows this warning: "In order to generate a preview with rich snippets, either price or review or availability needs to be present." I used Google's own example (http://support.google.com/webmasters/bin/answer.py?hl=en&answer=186036) and also tested the examples from microformats.org (http://microformats.org/wiki/google-rich-snippets-examples).

    Read the article

  • Exalytics Increases Customer Revenue, and Saves Time, Risk & Cost

    - by Mike.Hallett(at)Oracle-BI&EPM
    We are getting some great proof-point stories now from our customers who are succeeding with the Exalytics in-memory system for OBI and Essbase. See below for some recent testimony:

    San Diego Unified School District Harnesses Attendance, Procurement, and Operational Data with Oracle Exalytics, Generating $4.4 Million in Savings: according to an independent assessment by Mainstay Salire, the district is on track to achieve substantial benefits from the Oracle Exalytics solution, including an $8.25 million increase in attendance revenue, $75,000 a year in operational efficiencies, and $1 million in hardware cost avoidance.

    NilsonGroup chooses the Oracle Exalytics In-Memory Machine as its solution for accessing critical data to keep its stores competitive with real-time mobile BI: it took only "3 days to get up and running" with Exalytics. Video

    Nykredit, in the Danish financial sector, describes its experiences from testing the Exalytics Business Intelligence Machine: "it was up and running within 4 days", with "more intuitive dashboards", "up to 70x better performance", and "cheaper maintenance and lower total cost of ownership". Video

    Sodexo chose Oracle Exalytics as its business analytics platform, accelerating Essbase performance "more than 8x" for more than 2,000 Excel add-in users and "significantly changing how people in information management now deal with data". Video

    Polk, Savvis, Nykredit, and Key Energy describe testing the Oracle Exalytics In-Memory Machine: to "reach more users than we ever have before", "to fly through the data without impeding the analytic process", to "drive our enterprise groups into this tool instead of having departmental solutions", and for the "advanced visualisation this product enables". Video

    Read the article

  • OpenGL CPU vs. GPU

    - by Nitrex88
    So I've always been under the impression that doing work on the GPU is always faster than doing it on the CPU. Because of this, in OpenGL I usually try to do intensive tasks in shaders so they get the speed boost from the GPU. However, now I'm starting to realize that some things simply work better on the CPU and actually perform worse on the GPU (particularly when a geometry shader is involved).

    For example, in a recent project involving procedurally generated terrain, I tried passing a grid of single triangles into a geometry shader and tessellating each of these triangles into quads with 400 vertices whose height was determined by a noise function. This worked fine and looked great, but easily maxed out the GPU with only 25 base triangles and caused a very slow framerate. I then discovered that tessellating on the CPU instead, and setting the height (using the noise function) in the vertex shader, was actually faster!

    This prompted me to question the benefits of using the GPU as much as possible. So, I was wondering if someone could describe the general pros and cons of using the GPU vs. the CPU for intensive graphics tasks. I know this mainly comes down to what you're trying to achieve, so if necessary, use the above scenario to discuss why the "CPU + vertex shader" approach was actually faster than doing everything in the geometry shader on the GPU. It's possible my hardware (the newest MacBook Pro) isn't optimized well for the geometry shader (thus causing the slow framerate). Also, I read that the vertex shader is very good with parallelism, and would love a quick explanation of how this may have played a role in speeding up my procedural terrain. Any info/advice about CPU/GPU/shaders would be awesome!
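
    A sketch of the faster arrangement described above (illustrative names; the hash function stands in for whatever noise function the post used): the CPU uploads a pre-tessellated grid once, and the vertex shader displaces each vertex independently. Vertex shaders parallelize trivially because every invocation is independent, whereas a geometry shader that amplifies one triangle into ~400 vertices must buffer its output on chip, which throttles the pipeline.

        // Hypothetical terrain vertex shader (GLSL 1.20) as a Java constant:
        // the grid is tessellated once on the CPU; height is computed per vertex.
        public final class TerrainShader {
            public static final String VERTEX_SOURCE =
                "#version 120\n" +
                "uniform float amplitude;\n" +
                "attribute vec2 gridPos;  // (x, z) from the CPU-built grid\n" +
                "// Cheap hash, standing in for a real noise function.\n" +
                "float hash(vec2 p) {\n" +
                "    return fract(sin(dot(p, vec2(127.1, 311.7))) * 43758.5453);\n" +
                "}\n" +
                "void main() {\n" +
                "    float h = hash(gridPos) * amplitude;\n" +
                "    gl_Position = gl_ModelViewProjectionMatrix *\n" +
                "                  vec4(gridPos.x, h, gridPos.y, 1.0);\n" +
                "}\n";

            private TerrainShader() {}
        }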

    Read the article

  • Exadata X3 Sales/Pre-Sales/Support Resell Rights Enablement Day

    - by mseika
    Dear Partner,

    Avnet and Oracle would like to invite you to a FREE Exadata X3 Sales/Pre-Sales/Support Resell Rights Enablement Day, taking place on Wednesday 16th January 2013 at the Oracle London City offices in Moorgate. We will give you the opportunity to get an in-depth understanding of how hardware and software are engineered to work together to create the power and scalability of Exadata. The session will focus on Oracle Exadata fundamentals, features, components and capabilities. The event will last a full day and will give you the opportunity to put questions to the presenters that will help you understand how to spot an Exadata opportunity and position Exadata in an opportunity.

    Register Now: register online or call our Hotline on 01925 856999.

    When: Wednesday 16th January 2013, 9.30am to 5.00pm.

    Where: Oracle Corporation UK Ltd., One South Place, London EC2M 2RB. For directions please see the location map.

    Cost: No charge.

    Contact Us: Avnet Technology Solutions Limited, Clarity House, 103 Dalton Avenue, Birchwood Park, Warrington, WA3 6YB, UK. T: 01925 856900, F: 01925 856901, E: [email protected]

    Or find us online: Avnet website | LinkedIn | Twitter | Facebook

    Read the article

  • Partner Webcast – Oracle Exadata X3 Database In-Memory Machine - Next-Generation Technologies Update - 20 Dec 2012

    - by Thanos
    Oracle's next-generation database machine, Oracle Exadata X3, combines massive memory and low-cost disks to deliver even faster performance and greater storage capability at the lowest cost, making it the ideal database platform for the varied and unpredictable workloads of cloud computing. Oracle Exadata is available in multiple configurations, including a low-cost eighth-rack configuration, so you can start small and grow at your own pace. We have also introduced new migration services designed to streamline implementation, saving you time and money. If your IT department is expected to deliver business value -- or even drive business growth -- then you'll want to join us for a live Webcast discussing how the new Oracle Exadata X3 can help you transform data management.

    Agenda:
    - Oracle Exadata Evolution
    - Oracle Exadata X3 Database In-Memory Machine
    - Hardware Update
    - Software Update
    - Exadata Unique Next-Generation Technologies
    - Getting on board Oracle Exadata
    - Q&A

    Delivery Format: this FREE online LIVE eSeminar will be delivered over the Web. Registrations received less than 24 hours prior to start time may not receive confirmation to attend. Thursday, December 20th, 10am CET (9am GMT). Duration: 1 hour. Register Now! For any questions please contact us at [email protected]. Visit our ISV Migration Center blog or follow us @oracleimc to learn more on Oracle technologies, upcoming partner webcasts and events. Existing content is available on YouTube, SlideShare and Oracle Mix.

    Read the article

  • NVIDIA error "fallen off the bus"

    - by yurividal
    I have been having a serious issue with my LG notebook and its NVIDIA GeForce 310M GPU. It usually (99% of the time) happens when I leave the computer idle for a while, but it has also happened sometimes while I was using the PC. Suddenly (usually when the computer is idle) the screen goes black and the PC freezes completely on the black screen -- not even ping responses. The only solution is to hard-reset the machine. When analyzing the syslog, I see the following errors:

        Sep 18 20:58:08 yuri-notebook kernel: [ 1936.510073] NVRM: GPU at 0000:01:00.0 has fallen off the bus.
        Sep 18 20:58:08 yuri-notebook kernel: [ 1936.510087] NVRM: GPU at 0000:01:00.0 has fallen off the bus.
        Sep 18 20:58:08 yuri-notebook kernel: [ 1936.510157] delay: estimated 354, actual 1
        Sep 18 20:58:08 yuri-notebook kernel: [ 1936.510173] delay: estimated 353, actual 0

    I have already tried different versions of the NVIDIA drivers, and also tried removing each of my two DDR3 memory modules. The problem does not seem to be hardware, because when I boot into Windows 7 it works normally for days. I am desperate, because this makes my Ubuntu practically unusable. Thanks in advance, Yuri

    Read the article

  • BCM4306, b43 legacy driver installed, but "firmware missing" error

    - by ScrollerBlaster
    I have the following wireless hardware in a Compaq Evo N600c laptop running the latest Lubuntu:

        ciaran@compaq:~$ lspci -vvnn | grep 14e4
        03:00.0 Network controller [0280]: Broadcom Corporation BCM4306 802.11b/g Wireless LAN Controller [14e4:4320] (rev 03)

    Following the instructions from http://www.linuxwireless.org/en/users/Drivers/b43#b43_and_b43legacy I opted for the legacy firmware installer, following the internet-install instructions to the letter (with no errors); i.e. I successfully ran:

        sudo apt-get install firmware-b43legacy-installer

    In nm-applet I now get "Wireless networks: device not ready (firmware missing)". I open up Additional Drivers, but the list is empty. I have commented out this line in /etc/modprobe.d/blacklist.conf:

        #blacklist bcm43xx

    Contents of the firmware directory:

        ciaran@compaq:/etc/modprobe.d$ sudo ls /lib/firmware/b43legacy/
        [sudo] password for ciaran:
        a0g0bsinitvals2.fw  a0g0initvals5.fw    b0g0bsinitvals2.fw  b0g0initvals5.fw  ucode11.fw  ucode5.fw
        a0g0bsinitvals5.fw  a0g1bsinitvals5.fw  b0g0bsinitvals5.fw  pcm4.fw           ucode2.fw
        a0g0initvals2.fw    a0g1initvals5.fw    b0g0initvals2.fw    pcm5.fw           ucode4.fw

    Edit: from dmesg:

        [ 4460.193382] b43-phy0 ERROR: Firmware file "b43/ucode5.fw" not found
        [ 4460.193393] b43-phy0 ERROR: Firmware file "b43-open/ucode5.fw" not found
        [ 4460.193401] b43-phy0 ERROR: You must go to http://wireless.kernel.org/en/users/Drivers/b43#devicefirmware and download the correct firmware for this driver version. Please carefully read all instructions on this website.

    Yours in hope.

    Read the article

  • Why Are Minimized Programs Often Slow to Open Again?

    - by Jason Fitzpatrick
    It seems particularly counterintuitive: you minimize an application because you plan on returning to it later and wish to skip shutting the application down and restarting it, but sometimes maximizing it takes even longer than launching it fresh. What gives?

    Today's Question & Answer session comes to us courtesy of SuperUser -- a subdivision of Stack Exchange, a community-driven grouping of Q&A web sites.

    The Question

    SuperUser reader Bart wants to know why he's not saving any time with application minimization:

    I'm working in Photoshop CS6 and multiple browsers a lot. I'm not using them all at once, so sometimes some applications are minimized to the taskbar for hours or days. The problem is, when I try to maximize them from the taskbar, it sometimes takes longer than starting them! Photoshop especially feels really weird for many seconds after finally showing up; it's slow, unresponsive, and sometimes even freezes totally for a minute or two. It's not a hardware problem, as it's always been like that on all my PCs. Would I also notice it after upgrading my HDD to an SSD and adding RAM (my main PC currently holds 4 GB)? Could people with powerful PCs/Macs tell me -- does it also happen to you? I guess OSes somehow "focus" on active software and move all the resources away from the ones that run but are not used. Is it possible to somehow set RAM/CPU/HDD priorities for, let's say, Photoshop, so it won't slow down after a long period of inactivity?

    So what is the deal? Why does he find himself waiting to maximize a minimized app?

    The Answer

    SuperUser contributor Allquixotic explains why:

    Summary: The immediate problem is that the programs you have minimized are being paged out to the "page file" on your hard disk. This symptom can be improved by installing a Solid State Disk (SSD), adding more RAM to your system, reducing the number of programs you have open, or upgrading to a newer system architecture (for instance, Ivy Bridge or Haswell). Of these options, adding more RAM is generally the most effective solution.

    Explanation: The default behavior of Windows is to give active applications priority over inactive applications for having a spot in RAM. When there's significant memory pressure (meaning the system doesn't have a lot of free RAM if it were to let every program have all the RAM it wants), it starts putting minimized programs into the page file, which means it writes out their contents from RAM to disk and then frees that area of RAM. That free RAM helps programs you're actively using -- say, your web browser -- run faster, because if they need to claim a new segment of RAM (like when you open a new tab), they can do so. This "free" RAM is also used as page cache, which means that when active programs attempt to read data on your hard disk, that data might be cached in RAM, which prevents your hard disk from being accessed to get it. By using the majority of your RAM for page cache, and swapping out unused programs to disk, Windows is trying to improve the responsiveness of the programs you are actively using, by making RAM available to them and caching the files they access in RAM instead of on the hard disk. The downside of this behavior is that minimized programs can take a while to have their contents copied from the page file, on disk, back into RAM; the time increases with the size of the program's footprint in memory. This is why you experience that delay when maximizing Photoshop.

    RAM is many times faster than a hard disk (depending on the specific hardware, it can be up to several orders of magnitude). An SSD is considerably faster than a hard disk, but it is still slower than RAM by orders of magnitude. Having your page file on an SSD will help, but it will also wear out the SSD more quickly than usual if your page file is heavily utilized due to RAM pressure.

    Remedies: Here is an explanation of the available remedies and their general effectiveness:

    - Install more RAM: This is the recommended path. If your system does not support more RAM than you already have installed, you will need to upgrade more of your system: possibly your motherboard, CPU, chassis, power supply, etc., depending on how old it is. If it's a laptop, chances are you'll have to buy an entire new laptop that supports more installed RAM. When you install more RAM, you reduce memory pressure, which reduces use of the page file, which is a good thing all around. You also make more RAM available for page cache, which will make all programs that access the hard disk run faster. As of Q4 2013, my personal recommendation is at least 8 GB of RAM for a desktop or laptop whose purpose is anything more complex than web browsing and email: photo editing, video editing/viewing, playing computer games, audio editing or recording, programming/development, etc. should all have at least 8 GB of RAM, if not more.

    - Run fewer programs at a time: This will only work if the programs you are running do not use a lot of memory on their own. Unfortunately, Adobe Creative Suite products such as Photoshop CS6 are known for using an enormous amount of memory. It also limits your ability to multitask. It's a temporary, free remedy, but it can be an inconvenience to close down your web browser or Word every time you start Photoshop, for instance. It also wouldn't stop Photoshop from being swapped out when you minimize it, so it really isn't a very effective solution; it only helps in some specific situations.

    - Install an SSD: If your page file is on an SSD, the SSD's improved speed compared to a hard disk will result in generally improved performance when the page file has to be read from or written to. Be aware that SSDs are not designed to withstand a very frequent and constant stream of random writes; they can only be written a limited number of times before they start to break down. Heavy use of a page file is not a particularly good workload for an SSD. You should install an SSD in combination with a large amount of RAM if you want maximum performance while preserving the longevity of the SSD.

    - Use a newer system architecture: Depending on the age of your system, you may be using an out-of-date system architecture. The "system architecture" is generally defined as the "generation" (think generations like children, parents, grandparents, etc.) of the motherboard and CPU. Newer generations generally support faster I/O (input/output), better memory bandwidth, lower latency, and less contention over shared resources, instead providing dedicated links between components. For example, starting with the "Nehalem" generation (around 2009), the Front-Side Bus (FSB) was eliminated, which removed a common bottleneck, because almost all system components had to share the same FSB for transmitting data. It was replaced with a "point to point" architecture, meaning that each component gets its own dedicated "lane" to the CPU, which continues to be improved every few years with new generations. You will generally see a more significant improvement in overall system performance depending on the "gap" between your computer's architecture and the latest one available. For example, a Pentium 4 architecture from 2004 is going to see a much more significant improvement upgrading to "Haswell" (the latest as of Q4 2013) than a "Sandy Bridge" architecture from ~2010.

    Links: Related questions: How to reduce disk thrashing (paging)? Windows Swap (Page File): Enable or Disable? Also, just in case you're considering it, you really shouldn't disable the page file, as this will only make matters worse; see here. And, in case you needed extra convincing to leave the Windows page file alone, see here and here.

    Have something to add to the explanation? Sound off in the comments. Want to read more answers from other tech-savvy Stack Exchange users? Check out the full discussion thread here.

    Read the article

  • How to use the Raring/Saucy netboot installer to install Precise?

    - by mikepurvis
    We have a Haswell motherboard with onboard Ethernet controllers which are not supported by the Precise (3.2) kernel. However, we're using netboot installation, and we'd really like to stick with the LTS version. Once the Precise install is completed, we can install the linux-generic-lts-saucy package, which gets us the Ethernet hardware support that is ultimately required. So, our options are:

    1. Plug in a USB-Ethernet (or even wifi) dongle and perform the install that way.
    2. Modify the Precise installer to somehow include the required driver (a udeb, or some early_command invocation?).
    3. Modify the Raring installer (3.8 kernel, which supports the device) to instead install Precise.

    If it's possible, the third option seems the simplest and most logical to me. Now, we are already using the precise-updates installer (Aug 2013), as opposed to the original April 2012 installer; however, the precise-updates installer still appears to use the 3.2 kernel. I'm already comfortable with preseeding and modifying the netboot initrd. So my question is: can I somehow modify the Raring/Saucy netboot initrd to instead install Precise? Thanks.

    Read the article

  • RPi and Java Embedded GPIO: Using Java to read input

    - by hinkmond
    Now that we've learned about using Java code to control the output of the Raspberry Pi GPIO ports (by lighting up LEDs from a Java app on the RPi for now, noting that in the future the same Java code can be used to drive industrial automation or medical equipment, etc.), let's move on to reading input from the RPi GPIO using Java code. As before, we need to start out with the necessary hardware. For this exercise we will connect a static electricity detector to the RPi GPIO port and read the value of that sensor using Java code. The circuit we'll use is from William J. Beaty and is described at this Web link. See: Static Electricity Detector. He calls it an "Electric Charge" detector, which is a bit misleading: a field-effect transistor responds to nearby electromagnetic fields, such as the field of a static charge on a nearby object, not really to an electric charge itself. So, this sensor will detect static electricity (or ghosts, if you are into paranormal activity). Take a look at the circuit, and in the next blog posts we'll step through how to connect it to the GPIO port of your RPi and then how to write Java code to access this fun sensor. Hinkmond
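
    As a preview of the kind of code the next posts build toward, here is a minimal sketch that polls a GPIO input through the Linux sysfs interface (the pin number and paths are illustrative assumptions, not from the post; the post's wiring determines the real values):

        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.nio.file.Paths;

        // Poll a GPIO input via sysfs. Assumes the pin was exported beforehand
        // (echo 7 > /sys/class/gpio/export) and its direction set to "in".
        public class GpioInputSketch {
            public static void main(String[] args) throws Exception {
                Path value = Paths.get("/sys/class/gpio/gpio7/value"); // hypothetical pin
                while (true) {
                    String v = Files.readAllLines(value).get(0).trim();
                    System.out.println("1".equals(v) ? "Field detected!" : "No field");
                    Thread.sleep(500); // poll twice a second
                }
            }
        }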

    Read the article

  • What happened to the Journal of Game Development?

    - by Ricket
    The lengthy mission statement from its website states:

    "The lack of game-specific research has prevented many in the academic community from embracing game development as a serious field of study. The Journal of Game Development (JOGD), however, provides a much-needed, peer-reviewed medium of communication and the raison d'être for serious academic research focused solely on game-related issues. The JOGD provides the vehicle for disseminating research and findings indigenous to the game development industry. It is an outlet for peer-reviewed research that will help validate the work and garner acceptance for the study of game development by the academic community. JOGD will serve both the game development industry and academic community by presenting leading-edge, original research, and theoretical underpinnings that detail the most recent findings in related academic disciplines, hardware, software, and technology that will directly affect the way games are conceived, developed, produced, and delivered."

    The Journal of Game Development was established in 2003. It's hard to find any information about the issues, but at four issues per year I estimate the last one was distributed sometime in 2005 or 2006. It had a good editorial board of college professors and a founding editor from Ubisoft. The list of articles looks good. The price was reasonable. So what happened to it? Its website recently went down, but you can see the last Archive.org version. The editor-in-chief is a professor at my school, so I intend to ask him in person in a week or two, but I thought I'd see what you might be able to dig up about it first. Of course I will be sure to add an answer with his official word on the matter at that time.

    Read the article

  • How can I create the smallest possible mirror of the archive?

    - by Registered User
    I need to create an HTTP URL on my laptop from which a Ubuntu installation can begin inside a Xen environment on the same laptop; the host and the client are both going to be my laptop. I Googled and came across apt-mirror and some other packages, but I do not want to archive the entire 15 GB of Ubuntu repositories on my machine. It is not possible to use a CD, an ISO, or a loop-mounted disk (reason mentioned below). I have tried using the netboot image on the local machine, which failed: if you are attempting to create a virtual machine on hardware which does not support VT, the virt-manager installer necessarily needs a URL of this sort:

        http://archive.ubuntu.com/ubuntu/dists/hardy/main/installer-i386/current/images/netboot/

    Every other option for creating a guest OS is simply grayed out. The unfortunate part is that my Ethernet connections do not work when I boot with Xen 4.0 and a pv-ops Dom0 kernel from Jeremy's tree, which is where I have to do this work. So I have to create a URL structure similar to the Ubuntu mirrors. How can I do this with the bare minimum, so that at least the console boots, and once the console comes up I can do some work?
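
    One possible piece of the puzzle -- a minimal sketch of serving a hand-pruned mirror directory over HTTP with the JDK's built-in com.sun.net.httpserver (the directory layout and port are assumptions for illustration; any static file server would do the same job):

        import com.sun.net.httpserver.HttpServer;
        import java.io.OutputStream;
        import java.net.InetSocketAddress;
        import java.nio.file.Files;
        import java.nio.file.Path;
        import java.nio.file.Paths;

        // Serve a pruned local mirror (e.g. just ubuntu/dists/hardy/main/installer-i386/...)
        // so the installer sees the URL structure it expects.
        public class MiniMirror {
            public static void main(String[] args) throws Exception {
                Path root = Paths.get("/srv/mini-mirror"); // hypothetical mirror root
                HttpServer server = HttpServer.create(new InetSocketAddress(8000), 0);
                server.createContext("/", exchange -> {
                    Path f = root.resolve(exchange.getRequestURI().getPath().substring(1)).normalize();
                    if (!f.startsWith(root) || !Files.isRegularFile(f)) {
                        exchange.sendResponseHeaders(404, -1); // not in the pruned mirror
                        exchange.close();
                        return;
                    }
                    byte[] body = Files.readAllBytes(f);
                    exchange.sendResponseHeaders(200, body.length);
                    try (OutputStream os = exchange.getResponseBody()) {
                        os.write(body);
                    }
                });
                server.start(); // point virt-manager at http://<laptop-ip>:8000/ubuntu/dists/...
            }
        }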

    Read the article

  • Great Example of a Simple Cost-Benefit Analysis

    - by BuckWoody
    I saw a post the other day that you should definitely go check out. It's a cost/benefit decision, and although the author gives it a quick treatment and doesn't take every point in the decision into account, you should focus on the process he follows. It's a quick and simple example of the kind of thought process we should have as data professionals when we pick a server, a process, an application, or even platform software. The key is to include more than just the price of a piece of software or hardware; you need to think about the "other" costs in the decision, and then make the right one. Sometimes the cheapest option really is the cheapest, and other times, well, it isn't. I've seen this played out not only in the decision to go with a certain selection, but in the options or editions it comes in. You have to put all of the decision points into the analysis to come up with the right answer, and you have to be able to explain your logic to your team and your company. This is the way you become a data professional, not just a DBA. You can check out the post here -- it deals with Azure, but the point is the process, not Azure itself: http://blogs.msdn.com/eugeniop/archive/2010/03/19/windows-azure-guidance-a-simplistic-economic-analysis-of-a-expense-migration.aspx
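
    To make the "other costs" point concrete, here is a toy three-year comparison (every number is invented for illustration; the shape of the calculation, not the figures, is the point):

        // Toy total-cost-of-ownership comparison: sticker price is one input among several.
        public class TcoSketch {
            static double tco(double license, double monthlyOps, double migration, int months) {
                return license + migration + monthlyOps * months;
            }

            public static void main(String[] args) {
                double cheapOption = tco(0,     1200, 15000, 36); // "free" software, heavy ops burden
                double paidOption  = tco(25000,  400,  5000, 36); // licensed, lighter ops burden
                System.out.printf("cheap: $%.0f  paid: $%.0f%n", cheapOption, paidOption);
                // prints: cheap: $58200  paid: $44400 -- the cheapest sticker price loses here
            }
        }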

    Read the article

  • Virtually the fastest way to try Solaris 11 (and Solaris 10 zones)

    - by dminer
    If you're looking to try out Solaris 11, there are the standard ISO and USB image downloads on the main page. Those are great if you're looking to install Solaris 11 on hardware, and we hope you will. But if you take the time to look down the page, you'll find a link off to the Oracle Solaris 11 Virtual Machine downloads. There are two downloads there:

    - A pre-built Solaris 10 zone
    - A pre-built Solaris 11 VM for use with VirtualBox

    If you're looking to try Solaris 11 on x86, the second one is what you want. Of course, this assumes you have VirtualBox already (and if you don't, now's the time to try it; it's a terrific free desktop virtualization product). Once you complete the 1.8 GB download, it's a simple matter of unzipping the archive and a few quick clicks in VirtualBox to get a Solaris 11 desktop booted. While it's booting, you'll get to run through the new system configuration tool (that'll be the subject of a future posting here) to configure networking, a user account, and so on.

    So what about that pre-built Solaris 10 zone download? It's a really simple way to get acquainted with the Solaris 10 zones feature, which you may well find indispensable in transitioning an existing Solaris 10 infrastructure to Solaris 11. Once you've downloaded the file, it's a self-extracting executable that'll configure the zone for you; all you have to supply is an IP address for the zone. It's really quite slick! I expect we'll do a lot more pre-built VMs and zones going forward, as that's a big part of being a cloud OS; if there's one that would be really useful for you, let us know.

    Read the article

  • How broad should a computer science/engineering student go?

    - by AskQuestions
    I have less than 2 years of college left and I still don't know what to focus on. But this is not about me, this is about being a future developer. I realize that questions like "Which language should I learn next?" are not really popular, but I think my question is broader than that. I often see people write things like "You have to learn many different things. Being a developer is not about learning one programming language / technology and then doing that for the rest of your life". Well, sure, but it's impossible to really learn everything thoroughly. Does that mean that one should just learn the basics of everything and then learn some things more thoroughly AFTER getting a particular job? I mean, the best way to learn programming is by actually programming stuff... But projects take time. Does an average developer really switch between (for example) being a web developer, doing artificial intelligence and machine learning related stuff and programming close to the hardware? I mean, I know a lot of different things, but I don't feel proficient in any of those things. If I want to find a job as a web developer (that's just an example) after I finish college, shouldn't I do some web related project (maybe using something I still don't know) rather than try to learn functional programming? So, the question is: How broad should a computer science student's field of focus be? One programming language is surely far too narrow, but what is too broad?

    Read the article

  • MaaS minimum requirements with juju-jitsu?

    - by Christopher Shen Mu Long
    I've browsed through so many different sites and found so much contradictory information. I am getting tired of this, and I believe this question affects many other users, so I would like to collect the "once and for all times" answer. Unfortunately, the documentation on MaaS and Juju is ... well, not the best, sorry to say that. What are the minimum system requirements for setting up a MaaS cluster which is going to be orchestrated with juju-jitsu?

    1. Do the machines need the exact same system specifications, or can I combine different hardware?
    2. What are the minimum requirements for the master machine? E.g. "You need at least 8 GB of RAM and a dual-core CPU with at least 3.0 GHz."
    3. How many machines do I need to deploy MaaS on? I've read six machines, nine machines, and so on. I clearly want to know: "You need one for the master and e.g. five nodes."
    4. Do I need to attach as many NICs (network interface cards) to my master machine as there are nodes, or can I simply attach two NICs and a switch -- one NIC for connecting to the internet, one for handling the MaaS tasks, connected to a switch which connects my nodes to the master?
    5. Is Juju now ready for local deployment? The last time I experimented with Juju and had to reboot my machine, the services orchestrated by Juju were gone. This is an issue I also found on the official Juju site. Unfortunately, as mentioned above, the documentation is not the best, so I could not find the necessary info on that again. So: can I use Juju in a local environment, or will a reboot break my setup?

    Read the article

  • Oracle Internet Directory 11gR1 11.1.1.6 Certified with Oracle E-Business Suite

    - by B Shashikumar
    We are very pleased to announce that Oracle Internet Directory 11gR1 (11.1.1.6) is now certified with Oracle E-Business Suite Releases 11i, 12.0 and 12.1. With this certification, we are offering several benefits to Oracle E-Business Suite customers:

    - Massive Scale: Oracle Internet Directory (OID) is a proven solution for mission-critical deployments. OID can scale to extremely large deployments on less hardware, as demonstrated by its published Two-Billion-User Benchmark. This reduces the footprint required to deploy enterprise directory services in the data center, resulting in cost savings and a greener enterprise.

    - Enhanced Security: OID is the most secure directory service, providing security at every level from data in transit to storage and backups. In addition to LDAP security, it leverages powerful Oracle Database security features like Database Vault and Transparent Data Encryption.

    - Investment Protection: This certification leverages Identity Management's hot-pluggable capabilities, enabling E-Business Suite customers to store and manage user identities in existing directory servers, thus helping them maximize their investments.

    For a complete matrix of platforms supported by Oracle Internet Directory and its components, refer to the Oracle Identity and Access Management 11gR1 certification matrix. For more information about this certification, check out the Oracle E-Business Suite blog.

    Read the article

  • How to sync Ubuntu/software/configurations between N computers with free software and/or without a cloud?

    - by skanatek
    Note: this question is not about syncing data in a Dropbox-like way (files, folders); it is more about syncing configurations. I would like to have exactly the same version of Ubuntu, with all the software installed and configured, both on my Desktop PC and on my Laptop PC (and maybe on my small netbook PC), without using Ubuntu Sync and with minimal maintenance effort (set up once, run for a long time). The use case is the following:

    1. I work on my Laptop PC and make some changes to software configuration, for example:
       - configure vim to have a new plugin
       - update the Search Tracker / Recoll file search index
       - configure Thunderbird to have an additional IMAP account ('remember password')
       - add some new bookmarks in Firefox/Chrome
       - change the desktop background image
       - install new software with apt-get install
       - build and install new software with checkinstall
       - etc.
    2. I do some 'sync' operation.
    3. I switch to my Desktop PC and get all the changes from (1) working on the Desktop PC.
    4. I work on my Desktop PC and make some changes to software configuration, for example:
       - add a new directory to the list of directories to be backed up by DejaDup
       - add a new spell-checking dictionary to LibreOffice Writer
       - configure the Terminator software to have colored fonts
       - install a new font into the Ubuntu system
       - configure Ekiga to make phone calls
       - etc.
    5. I do some 'sync' operation.
    6. I switch to my Laptop PC and get all the changes from (1) and (4) working on the Laptop PC.

    Question: What free/open-source software can I use to sync both machines' Ubuntu systems, installed software and configurations? Is it possible to do that without any cloud services?

    Complementary question: It is obvious that the Desktop PC and the Laptop PC have different hardware configurations. How does the 'sync software' in question deal with video drivers, wlan drivers and their configurations?

    Note: I do not need all the PCs to be synced at the same time, because I work with only one machine at once. Note: I considered using Chef to solve the problem, but it seems that it might be really cumbersome to maintain such a setup. Note: I also considered using a bootable USB with Ubuntu installed (portable Linux), but I am not sure that the video drivers would work then.

    Read the article

  • Visual Studio 2010 editor painfully slow

    - by Daniel Gehriger
    I'm running out of patience with MS Visual Studio 2010: I'm working on a solution containing ~50 C++ projects. When using the editor, I experience a lag of 1 - 2 seconds whenever I move the cursor to a different line, when I move to a different window, or generally when the editor loses and gains focus. I went through a whole series of optimizations, to no avail:

    - installed all hotfixes for VS2010
    - disabled all add-ins and extensions
    - disabled IntelliSense
    - deleted all temporary files created by VS2010
    - disabled hardware acceleration
    - unloaded all but 15 projects
    - disabled tracking changes
    - closed all but one window
    - and so on

    This is on a dual-core machine with an SSD hard drive (verified throughput 100 MB/s), enough free space on the drive, and Windows 7 Pro 32-bit with 3 GB of RAM, most of it still free. Whenever I type a letter, CPU usage of devenv.exe goes to 50 - 90% in Process Monitor for 1 - 2 seconds before returning to 5%. I used Process Explorer to analyze registry and file system access, and I only notice frequent accesses to the .sln file (which is quite small) and a few registry reads, but nothing that would raise a red flag. I don't have this problem with solutions containing fewer projects, so I'm inclined to think it's related to the number of projects. For your information, the entire solution has been migrated over the years from VS2005 to VS2008 to now VS2010. Does anyone have any ideas what else I could do to resume work on this project, other than returning to VS2008?

    Read the article

  • Architectural and Design Challenges with SOA

    With all of the hype about service-oriented architecture (SOA), primarily through the use of web services, not much has been said about the potential issues of using SOA in the design of an application. I am personally a fan of SOA, but it is not the solution for every application. Proper evaluation should be done on all requirements and use cases prior to deciding to go down the SOA road. It is important to consider how your application/service will handle the following perils as it executes.

    Example challenges of SOA:
    - Network connectivity issues
    - Handling connectivity issues
    - Longer processing/transaction times

    How many of us have had issues visiting our favorite web sites from time to time? The same issue will occur when using service-based architecture, especially if it is implemented using web services. Forcing applications to access services via a network connection introduces a lot of new failure points: DNS issues, network hardware issues, remote server issues, and the lack of physical network connections. When network connectivity issues do occur, how the service clients are implemented is very important. Should the client wait and poll the service until it is accessible again? If so, what is the maximum wait time or number of attempts it should retry?

    Because services are distributed across a network, client applications automatically become less responsive: processing time must now also include the time to send and receive messages from called services. This can add anywhere from milliseconds to minutes per request, based on network load and server usage at the service provider. If speed is a highly desirable quality attribute, then I would consider creating components that are hosted where the client application is located.

    References: Rader, Dave. (2002). Overcoming Web Services Challenges with Smart Design: http://soa.sys-con.com/node/39458
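
    A minimal sketch of the wait-and-poll question above -- bounded retries with exponential backoff, so the maximum number of attempts and the total wait become explicit design decisions (all names here are illustrative, not from the article):

        // Call a flaky service with a bounded retry policy.
        public final class RetryingClient {
            public static <T> T callWithRetry(java.util.concurrent.Callable<T> call,
                                              int maxAttempts, long initialBackoffMs) throws Exception {
                long backoff = initialBackoffMs;
                Exception last = null;
                for (int attempt = 1; attempt <= maxAttempts; attempt++) {
                    try {
                        return call.call();
                    } catch (Exception e) {   // real code would catch only transport-level failures
                        last = e;
                        Thread.sleep(backoff);
                        backoff *= 2;         // exponential backoff bounds the total wait
                    }
                }
                throw last;                   // give up after maxAttempts
            }
        }

    Whether a client should block like this, or fail fast to a fallback, is exactly the kind of requirement the author says to evaluate before going down the SOA road.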

    Read the article

  • Engineered to Inform, Inspire, Entertain

    - by Oracle OpenWorld Blog Team
    by Karen Shamban

    Take note! The Oracle OpenWorld keynote lineup has just been announced. Expert speakers will provide insights into industry trends, the latest technology developments and futures, as well as key strategies for achieving business efficiency and innovation. Critical business drivers such as engineered systems, cloud computing, customer experience, and business analytics and big data will be featured topics. Executive keynotes include:

    - Oracle CEO Larry Ellison on "Hardware and Software, Engineered to Work Together: Why It's a Different Approach" and "The Oracle Cloud: Where Social is Built In"
    - Oracle President Mark Hurd discussing "Shift Complexity" with SVP of Oracle Database Development Andrew Mendelsohn, and "See More, Act Faster: Oracle Business Analytics"
    - Oracle EVP of Product Development Thomas Kurian focusing on "The Oracle Cloud: Oracle's Cloud Platform and Applications Strategy"
    - Oracle EVP of Systems John Fowler, Oracle Chief Corporate Architect Edward Screven, and Oracle SVP of Systems Technology Juan Loiaza on "Oracle Cloud Infrastructure and Engineered Systems: Fast, Reliable, Virtualized"

    For more information on speakers, topics, and schedule, go to the Oracle OpenWorld Keynotes page.

    Read the article

  • SQLAuthority News – Download Whitepaper – Power View Infrastructure Configuration and Installation: Step-by-Step and Scripts

    - by pinaldave
    Power View, a feature of the SQL Server 2012 Reporting Services Add-in for Microsoft SharePoint Server 2010 Enterprise Edition, is an interactive data exploration, visualization, and presentation experience. It provides intuitive ad-hoc reporting for business users such as data analysts, business decision makers, and information workers. Microsoft has recently released a very interesting whitepaper which covers a sample scenario that validates the connectivity of Power View reports to both PowerPivot workbooks and tabular models. The white paper covers the following important concepts:

    - Understanding the hardware and software requirements and their download locations
    - Installing and configuring the required infrastructure when Power View and its data models are on the same computer and on different computers
    - Installing and configuring a computer used for client access to Power View reports, models, SharePoint 2012, and Power View in a workgroup
    - Configuring single sign-on access for double-hop scenarios with and without Kerberos

    You can download the whitepaper from here. It covers many interesting scenarios. It would be really interesting to know if you are using Power View in your production environment; if so, please share your experience here.

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Filed under: Business Intelligence, Data Warehousing, PostADay, SQL, SQL Authority, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, SQL White Papers, T SQL, Technology

    Read the article

  • Be There: Tinkerforge/NetBeans Platform Integration Course

    - by Geertjan
    Tinkerforge is an electronic construction kit. It exposes a number of API bindings, including, of course, Java. The nice thing is also that Tinkerforge products are open source, at both the hardware and software levels, so you can take their designs as a starting point for your own modifications. "The TinkerForge system is a set of pre-built electronics boards that are built in such a way that you can stack the boards (known as bricks), attach accessories (known as bricklets), and have your prototype up and running quickly. Unlike systems such as the Arduino or Launchpad, the TinkerForge has to be attached to a computer and the computer does all of the work. With an easy set of application programming interfaces (APIs) available in C/C++, C#, Java, PHP, and Ruby, the system is easy to interface and program over USB in a snap." (from this useful article)

    Henning Krüp, who has arranged several NetBeans Platform Certified Training Courses in the Nordhorn/Lingen area in Germany in the past, had the inspired idea to focus the next course on integration with Tinkerforge. In other words, the whole course will be focused on creating a standalone Java desktop application that leverages the NetBeans Platform to interact with Tinkerforge! Interested in joining the course or setting up something similar yourself? The course organized by Henning will be held from 19 to 21 September, as explained here, together with contact details. If you'd like to organize a similar course at a location of your choosing, leave a comment at the end of this blog entry and we'll set something up together!
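
    For a flavor of what course attendees would write, here is a minimal sketch against the Tinkerforge Java bindings (the UID, host, and the choice of a temperature bricklet are assumptions for illustration):

        import com.tinkerforge.BrickletTemperature;
        import com.tinkerforge.IPConnection;

        // Connect to the local brick daemon and read one value from a bricklet.
        public class TinkerforgeHello {
            public static void main(String[] args) throws Exception {
                IPConnection ipcon = new IPConnection();
                ipcon.connect("localhost", 4223);                  // brickd's default port
                BrickletTemperature temp = new BrickletTemperature("XYZ", ipcon); // bricklet UID
                System.out.printf("%.2f °C%n", temp.getTemperature() / 100.0);    // value is °C/100
                ipcon.disconnect();
            }
        }

    The same IPConnection/device pattern applies to every brick and bricklet, which is what makes the kit a good fit for a hands-on NetBeans Platform course.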

    Read the article
