Search Results

Search found 10388 results on 416 pages for 'hardware selection'.


  • Enable Media Volume Slider in Android application.

    - by user300972
    Some Android programs trigger the "media volume" slider when the hardware volume up/down buttons are pressed. My application seems to set the ringer volume when the hardware buttons are pressed. How would I enable the media volume slider? I would hate for users to have to go into their settings to change the media volume when they use my application.
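
    A likely fix, sketched below (this code is not from the original question, and PlayerActivity is a hypothetical name): Android routes the hardware volume keys to whichever audio stream the foreground activity declares as its volume control stream, so declaring STREAM_MUSIC should bring up the media volume slider.

        import android.app.Activity;
        import android.media.AudioManager;
        import android.os.Bundle;

        public class PlayerActivity extends Activity {
            @Override
            protected void onCreate(Bundle savedInstanceState) {
                super.onCreate(savedInstanceState);
                // Tell Android which stream the hardware volume keys should
                // control while this activity is in the foreground. With
                // STREAM_MUSIC, the buttons show the media volume slider
                // instead of changing the ringer volume.
                setVolumeControlStream(AudioManager.STREAM_MUSIC);
            }
        }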

    Read the article

  • Flex list-controls - maintain remote data

    - by artemb
    Hello. I have a TileList that represents some remote data. I also have a form that allows me to change the data, and the data may be changed by someone else too. What is the best way to keep the data in the list up to date? The simplest option I see is the following:

    1. Select an item in the list
    2. Edit it in the form
    3. Save it; the form submits the data to the server
    4. When the server reports success, the list re-fetches its data

    The very bad thing about this workflow is that the list loses its selection (a tree would also lose the nodes' expanded/collapsed state). I would really love to find an option that lets the list maintain its selection state. Any guesses on how it may be done?

    Read the article

  • looping thru a list of checkboxes and saving the values, not working properly

    - by Erez
    Hello all, I'm having a bit of a problem with this code. The program shows a list of checkboxes for a user ID. The user can then change his selection and push the save button (id="btnSaveUserIntersts"), and I am trying to save the values of all the checked checkboxes in a hidden textbox. The problem is that I keep getting the same selections that came from the database, not the new selection that the user made. Can anyone tell me what I am doing wrong here?

        $(document).ready(function() {
            $('#btnSaveUserIntersts').bind('click', function() {
                var strCheckBoxChecked = new String();
                $('input[type=checkbox][checked]').each(function() {
                    strCheckBoxChecked += $(this).val();
                    strCheckBoxChecked += ',';
                });
                $('#hidUserInterests').val(strCheckBoxChecked);
            });
        });

    Thanks
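
    A hedged guess at a fix (not part of the original post): the [checked] attribute selector matches the checked attribute as rendered in the markup, not the checkbox's current state, which would explain why the database defaults keep coming back. The :checked pseudo-selector reads the live state instead. A minimal sketch:

        $(document).ready(function () {
            $('#btnSaveUserIntersts').bind('click', function () {
                var values = [];
                // :checked reflects the current state of each checkbox,
                // including changes the user made after the page rendered.
                $('input[type=checkbox]:checked').each(function () {
                    values.push($(this).val());
                });
                $('#hidUserInterests').val(values.join(','));
            });
        });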

    Read the article

  • UITableView with multiple selections

    - by NewDev
    I have a UITableView with multiple selection enabled. If I select items and scroll the list back and forth, they are remembered and shown as selected (blue background). Using didDeselectRowAtIndexPath and didSelectRowAtIndexPath, I am able to keep my own array of selected items. That part works well. However, if I then use sectionForSectionIndexTitle and jump to a letter, the selection appears to be forgotten - even indexPathsForSelectedRows appears to have been reset and is now empty. My own array remembers that an item is selected, and I can set cell.selected in cellForRowAtIndexPath, but the instant I move the list it is forgotten again. Any ideas? Is this a bug, or how do you retain the selection list when jumping to a letter?

    Read the article

  • Visual Studio C++ toggle comment? Commenting when the whole line is not selected?

    - by Mr_and_Mrs_D
    Two questions actually:

    1) Is there a shortcut to toggle comments on selected lines? One is available in every IDE I have used, starting with Notepad++.

    2) Ctrl+K, Ctrl+C exhibits this behavior (quoted from someplace nicely worded):

    C#: Each line where some text is selected is commented at the line start with a double slash. If nothing is selected, the line where the cursor is is commented.

    C++: If nothing is selected or complete lines are selected, it behaves as above. However, if parts of a line are selected, and no comment is part of the selection (e.g. selecting something in the middle of a code line), then the selection is surrounded by /* and */.

    Since I code in C++, I find this behavior annoying - I want to be able to comment out lines that are only partially selected. Any workarounds?

    Read the article

  • jQuery.get() - Practical uses?

    - by RedWolves
    I am trying to understand why you would use jQuery's .get() and .get(index). The docs say it is for converting the jQuery selection to the raw DOM objects, instead of working with the selection as a jQuery object and the methods available to it. So, a quick example:

        $("div").get(0).innerHTML;

    is the same as:

        $("div").html();

    Obviously this is a bad example, but I am struggling to figure out when you would use .get(). Can you help me understand when I would use this method in my code?
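
    One common case (an illustrative sketch, not from the original question; the element ids are hypothetical): .get() is handy when you need a browser API that jQuery does not wrap, so you want the raw DOM element rather than the jQuery wrapper.

        // jQuery has no wrapper for the canvas drawing API, so grab the
        // raw DOM element and call getContext() on it directly.
        var canvas = $('#chart').get(0);        // assumes <canvas id="chart">
        var ctx = canvas.getContext('2d');
        ctx.fillRect(10, 10, 50, 50);

        // Same idea for the HTML5 media API: play() lives on the element.
        $('#intro-video').get(0).play();        // assumes <video id="intro-video">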

    Read the article

  • Append before parent after getSelection()

    - by sanceray3
    Hi all, I have one question. I am trying to develop a little WYSIWYG editor. I would like to create an <h1> button that generates an <h1> title (by clicking the button after selecting text in a <p> element). I obtain the selection with window.getSelection().... But now I would like to put my append("<h1>my selected text</h1>") just before that <p>. That is my problem, because my <p> elements don't have an id or class. Do you know a way to put my append just before the <p> where my text was selected? I don't know if, with my bad English, you'll understand me! Thanks very much for your help.
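
    A sketch of one approach (my assumption about the intent, not code from the original post): the selection object reports the node where the selection starts, so you can walk up to the enclosing <p> and insert the heading before it, no id or class required.

        function headingFromSelection() {
            var selection = window.getSelection();
            if (selection.rangeCount === 0) return;

            // anchorNode is where the selection starts; it is usually a
            // text node, so climb until we reach the enclosing <p>.
            var node = selection.anchorNode;
            while (node && node.nodeName !== 'P') {
                node = node.parentNode;
            }
            if (!node) return; // the selection was not inside a <p>

            var h1 = document.createElement('h1');
            h1.textContent = selection.toString();
            node.parentNode.insertBefore(h1, node); // <h1> lands just before the <p>
        }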

    Read the article

  • Displaying individual elements of an object in an Arraylist through a for loop?

    - by user1180888
    I'm trying to display the individual elements of an object I have created. It is a simple Java program that allows users to add and keep track of player details. I'm just stumped when it comes to displaying the details after they have been added. Here is what my code looks like. I can create the object and add it to the ArrayList no problem using case 2, but when I try to print it out I want to do something like:

        System.out.println("Player Name" + myPlayersArrayList.PlayerName
                + "Player Position" + myPlayersArrayList.PlayerPosition
                + "Player Age" + "Player Age");

    I know that is not correct, but I don't really know what to do. If anyone can be of any help it would be greatly appreciated. Thanks.

        System.out.println("Welcome to the Football Player database");
        System.out.print(System.getProperty("line.separator"));
        UserInput myFirstUserInput = new UserInput();
        int selection;
        ArrayList<Player> myPlayersArrayList = new ArrayList<Player>();
        while (true) {
            System.out.println("1. View The Players");
            System.out.println("2. Add A Player");
            System.out.println("3. Edit A Player");
            System.out.println("4. Delete A Player");
            System.out.println("5. Exit");
            System.out.print(System.getProperty("line.separator"));
            selection = myFirstUserInput.getInt("Please select an option");
            System.out.print(System.getProperty("line.separator"));
            switch (selection) {
                case 1:
                    if (myPlayersArrayList.isEmpty()) {
                        System.out.println("No Players Have Been Entered Yet");
                        System.out.print(System.getProperty("line.separator"));
                    } else {
                        for (int i = 0; i < myPlayersArrayList.size(); i++) {
                            System.out.println(myPlayersArrayList); // prints the whole list each time
                        }
                    }
                    break;
                case 2: {
                    String playerName, playerPos;
                    int playerAge;
                    playerName = myFirstUserInput.getString("Enter Player name");
                    playerPos = myFirstUserInput.getString("Enter Player Position");
                    playerAge = myFirstUserInput.getInt("Enter Player Age");
                    myPlayersArrayList.add(new Player(playerName, playerPos, playerAge));
                    break;
                }
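
    One way to print each field (a sketch: it assumes Player exposes getters with the names below, which the original post does not show - adjust them to the real accessors, or override toString() in Player instead):

        for (int i = 0; i < myPlayersArrayList.size(); i++) {
            Player p = myPlayersArrayList.get(i); // fetch one element instead of printing the whole list
            System.out.println("Player Name: " + p.getPlayerName()
                    + ", Player Position: " + p.getPlayerPosition()
                    + ", Player Age: " + p.getPlayerAge());
        }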

    Read the article

  • What is the best mid/high-end audio card for music creation?

    - by Chris
    Hello, I own a computer shop and I repair computers, but one thing I really don't know (yet) is the performance of audio cards for music creation with MIDI. I have searched and searched and came up with some good reviews, but after browsing for a couple of hours I couldn't see the forest for the trees :-D (it's a Dutch expression). At one point I thought the M-Audio Delta 1010LT would be a good PCIe card, but later on I read that this card was released years ago (though that could be false information). Any personal experience would also be great, but is not necessary. I have picked out a few cards, and I hope someone can help me make a choice for a friend of mine. His budget is between $100 and $350. I know there are audio cards from $500 to $1,850 - that is just too expensive. The following specs are crucial: ASIO, MIDI, mic in, and 5.1 minimum (7.1 recommended). It's not for airplay, just to compose music at home using Ableton and a MIDI keyboard.

    1. M-Audio Delta 1010LT:
       - 8 x 8 analog I/O
       - 2 mic preamps or line inputs
       - S/PDIF digital I/O (coaxial) with 2-channel PCM
       - SCMS copy protection control
       - digital I/O supports surround-encoded AC-3 and DTS pass-through
       - 1 x 1 MIDI I/O
       - directly drives up to 7.1 surround (bass management software included)
       - software-controlled 36-bit internal DSP digital mixing/routing
       - +4 dBu/-10 dBV operation, individually switched in software
       - word clock I/O for sample-accurate device synchronization

    2. RME HDSP 9632 (specs translated from German):
       - balanced stereo analog input and output, 24-bit/192 kHz, > 110 dB SNR
       - optional expansion boards with 4 balanced inputs and outputs each
       - all analog I/Os fully 192 kHz capable, so no reduction in channel count
       - 1 x ADAT digital in/out, 96 kHz capable (S/MUX)
       - 1 x S/PDIF digital in/out, 192 kHz capable
       - 1 x breakout cable for coaxial S/PDIF operation
       - up to 16 inputs and outputs usable simultaneously
       - 1 x stereo headphone output, in parallel with the analog output but with its own level control
       - 1 x MIDI I/O for 16 channels of hi-speed MIDI via breakout cable
       - DIGICheck, RME's metering and analysis tool with spectral analyser, professional 2/8/16-channel level meters, vector audio scope and various other analysis functions
       - HDSP Meter Bridge: freely scalable level meters with peak and RMS calculation in hardware
       - TotalMix: 512-channel mixer with 40-bit internal resolution

    3. EMU 1212M PCIe (specs translated from Dutch):
       - top-quality 24-bit/192 kHz converters
       - hardware-driven effects
       - DSP zero-latency hardware mixing and monitoring
       - analog and digital I/O plus MIDI
       - EMU Production Tools software bundle: Cakewalk SONAR, Steinberg Cubase LE, Ableton Live E-MU Edition
       Inputs/outputs:
       - 2 balanced jack inputs
       - 2 balanced jack outputs
       - 24-bit/192 kHz ADAT I/O
       - 24-bit/192 kHz coaxial S/PDIF I/O, switchable to AES/EBU
       - MIDI I/O

    4. M-Audio Audiophile 192:
       - up to 24-bit/192 kHz audio
       - 2 balanced analog inputs (1/4" TRS)
       - 2 balanced analog outputs (1/4" TRS)
       - S/PDIF digital I/O (coaxial RCA connectors) with 2-channel PCM
       - SCMS copy protection control
       - digital I/O supports surround-encoded AC-3 and DTS pass-through
       - direct hardware input monitoring via separate balanced 1/4" TRS monitor outputs
       - software routing of inputs and outputs
       - digital I/O can be routed to/from external effects
       - 16-channel MIDI I/O
       - ASIO, WDM, GSIF 2 and Core Audio driver support for compatibility with most applications
       - 64-bit driver support for Windows
       - PCI 2.2 compatibility
       - Apple G5 compatible (with some incompatible exceptions)
       - includes Ableton Live Lite music production software, so you can make music right away
       - works with other Delta cards

    Read the article

  • What *exactly* gets screwed when I kill -9 or pull the power?

    - by Mike
    Set-Up

    I've been a programmer for quite some time now, but I'm still a bit fuzzy on deep, internal stuff. Now, I am well aware that it's not a good idea to either:

    1. kill -9 a process (bad)
    2. spontaneously pull the power plug on a running computer or server (worse)

    However, sometimes you just plain have to. Sometimes a process just won't respond no matter what you do, and sometimes a computer just won't respond, no matter what you do. Let's assume a system running Apache 2, MySQL 5, PHP 5, and Python 2.6.5 through mod_wsgi. Note: I'm most interested in Mac OS X here, but an answer that pertains to any UNIX system would help me out.

    My Concern

    Each time I have to do either one of these, especially the second, I'm very worried for a period of time that something has been broken. Some file somewhere could be corrupt - who knows which file? There are over 1,000,000 files on the computer. I'm often using OS X, so I'll run a "Verify Disk" operation through the Disk Utility. It will report no problems, but I'm still concerned about this. What if some configuration file somewhere got screwed up? Or even worse, what if a binary file somewhere is corrupt? Or a script file somewhere is corrupt now? What if some hardware is damaged? What if I don't find out about it until next month, in a critical scenario, when the corruption or damage causes a catastrophe? Or, what if valuable data is already lost?

    My Hope

    My hope is that these concerns and worries are unfounded. After all, after doing this many times before, nothing truly bad has happened yet. The worst is I've had to repair some MySQL tables, but I don't seem to have lost any data. But, if my worries are not unfounded, and real damage could happen in either situation 1 or 2, then my hope is that there is a way to detect it and prevent against it.

    My Question(s)

    Could this be because modern operating systems are designed to ensure that nothing is lost in these scenarios? Could this be because modern software is designed to ensure that nothing is lost? What about modern hardware design? What measures are in place when you pull the power plug? My question is, for both of these scenarios, what exactly can go wrong, and what steps should be taken to fix it?

    I'm under the impression that one thing that can go wrong is that some programs might not have flushed their data to the disk, so any very recent data that was supposed to be written to the disk (say, a few seconds before the power pull) might be lost. But what about beyond that? And can this very issue of 5-second data loss screw up a system? What about corruption of random files hiding somewhere in the huge forest of files on my hard drives? What about hardware damage?

    What Would Help Me Most

    - Detailed descriptions of what goes on internally when you either kill -9 a process or pull the power on the whole system. (It seems instant, but can someone slow it down for me?)
    - Explanations of all the things that could go wrong in these scenarios, along with (rough, of course) probabilities (i.e., this is very unlikely, but this is likely).
    - Descriptions of measures in place in modern hardware, operating systems, and software to prevent damage or corruption when these scenarios occur. (To comfort me.)
    - Instructions for what to do after a kill -9 or a power pull, beyond "verifying the disk", in order to truly make sure nothing is corrupt or damaged somewhere on the drive.
    - Measures that can be taken to fortify a computer setup so that if something has to be killed or the power has to be pulled, any potential damage is mitigated.

    Thanks so much!

    Read the article

  • Distributed and/or Parallel SSIS processing

    - by Jeff
    Background: Our company hosts SaaS DSS applications, where clients provide us data daily and/or weekly, which we process and merge into their existing database. During business hours, load on the servers is pretty minimal, as it's mostly users running simple pre-defined queries via the website, or running drill-through reports that mostly hit the SSAS OLAP cube.

    I manage the IT Operations Team, and so far this has presented an interesting "scaling" issue for us. For our daily-refreshed clients, the server is only "busy" for about 4-6 hrs at night. For our weekly-refresh clients, the server is only "busy" for maybe 8-10 hrs per week! We've done our best to use some simple methods of distributing the load by spreading the daily clients evenly among the servers, such that we're not trying to process daily clients back-to-back overnight. But long-term this scaling strategy creates two notable issues. First, it's going to consume a pretty immense amount of hardware that sits idle for large periods of time. Second, it takes significant Production Support overhead to basically "schedule" the ETL such that the jobs don't overlap, and to move clients/schedules around if they outgrow the resources on a particular server or allocated time slot.

    As the title would imply, one option we've tried is running multiple SSIS packages in parallel, but in most cases this has yielded VERY inconsistent results. The most common failures are DTExec, SQL, and SSAS fighting for physical memory and throwing out-of-memory errors, and ETLs running 3, 4, 5x longer than expected. So from my practical experience thus far, it seems like running multiple ETL packages on the same hardware isn't a good idea, but I can't be the first person that doesn't want to scale multiple ETLs around manual scheduling and sequential processing.

    One option we've considered is virtualizing the servers, which obviously doesn't give you any additional resources, but moves the resource contention onto the hypervisor, which (from my experience) seems to manage simultaneous CPU/RAM/Disk I/O a little more gracefully than letting DTExec, SQL, and SSAS battle it out within Windows.

    Question to the forum: So my question to the forum is, are we missing something obvious here? Are there tools out there that can help manage running multiple SSIS packages on the same hardware? Would it be more "efficient" in terms of parallel execution if, instead of running DTExec, SQL, and SSAS on the same machine (with every machine running that configuration), we ran in sets of three machines, with SSIS running on one machine, SQL on another, and SSAS on a third? Obviously that would only make sense if we could process more than the three ETLs we were able to process on the machines independently.

    Another option we've considered is completely re-architecting our SSIS packages to have one "master" package for all clients that attempts to intelligently choose a server based on how "busy" it already is in terms of CPU/memory/disk utilization, but that would be a herculean effort, and it seems like we're trying to reinvent something that you would think someone would sell (although I haven't had any luck finding it).

    So in summary, are we missing an obvious solution for this, and does anyone know of any tools (for free or for purchase, doesn't matter) that facilitate running multiple SSIS ETL packages in parallel and on multiple servers? (What I would call a "queue & node based" system, but that's not an official term.)

    Ultimately, VMWare's Distributed Resource Scheduler addresses this, as you simply run a consistent number of clients per VM that you know will never conflict scheduling-wise, then leave it up to VMWare to move the VMs around to balance out hardware usage. I'm definitely not against using VMWare to do this, but since we're a 100% Microsoft app stack, it seems like -someone- out there would have solved this problem at the application layer instead of the hypervisor layer, by checking resource utilization at the OS, SQL, and SSAS levels. I'm open to ANY discussion on this, and remember no suggestion is too crazy or radical! :-) Right now, VMWare is the only option we've found to get away from "manually" balancing our resources, so any suggestions that leave us on a pure Microsoft stack would be great.

    Thanks guys, Jeff

    Read the article

  • Displaying an image on a LED matrix with a Netduino

    - by Bertrand Le Roy
    In the previous post, we’ve been flipping bits manually on three ports of the Netduino to simulate the data, clock and latch pins that a shift register expected. We did all that in order to control one line of a LED matrix and create a simple Knight Rider effect. It was rightly pointed out in the comments that the Netduino has built-in knowledge of the sort of serial protocol that this shift register understands, through a feature called SPI. That will of course make our code a whole lot simpler, but it will also make it a whole lot faster: writing to the Netduino ports is actually not that fast, whereas SPI is very, very fast. Unfortunately, the Netduino documentation for SPI is severely lacking. Instead, we’ve been reliably using the documentation for the Fez, another .NET microcontroller.

    To send data through SPI, we’ll just need to move a few wires around and update the code. SPI uses pin D11 for writing, pin D12 for reading (which we won’t do) and pin D13 for the clock. The latch pin is a parameter that can be set by the user. This is very close to the wiring we had before (data on D11, clock on D12 and latch on D13). We just have to move the latch from D13 to D10, and the clock from D12 to D13.

    The code that controls the shift register has slimmed down considerably with that change. Here is the new version, which I invite you to compare with what we had before:

        public class ShiftRegister74HC595 {
            protected SPI Spi;

            public ShiftRegister74HC595(Cpu.Pin latchPin)
                : this(latchPin, SPI.SPI_module.SPI1) {
            }

            public ShiftRegister74HC595(Cpu.Pin latchPin, SPI.SPI_module spiModule) {
                var spiConfig = new SPI.Configuration(
                    SPI_mod: spiModule,
                    ChipSelect_Port: latchPin,
                    ChipSelect_ActiveState: false,
                    ChipSelect_SetupTime: 0,
                    ChipSelect_HoldTime: 0,
                    Clock_IdleState: false,
                    Clock_Edge: true,
                    Clock_RateKHz: 1000
                );
                Spi = new SPI(spiConfig);
            }

            public void Write(byte buffer) {
                Spi.Write(new[] {buffer});
            }
        }

    All we have to do here is configure SPI. The Write method couldn’t be any simpler. Everything is now handled in hardware by the Netduino. We set the frequency to 1 MHz, which is largely sufficient for what we’ll be doing, but it could potentially go much higher.

    The shift register addresses the columns of the matrix. The rows are directly wired to ports D0 to D7 of the Netduino. The code writes to only one of those eight lines at a time, which will make it fast enough. The way an image is displayed is that we light the lines one after the other so fast that persistence of vision will give the illusion of a stable image:

        foreach (var bitmap in matrix.MatrixBitmap) {
            matrix.OnRow(row, bitmap, true);
            matrix.OnRow(row, bitmap, false);
            row++;
        }

    Now there is a twist here: we need to run this code as fast as possible in order to display the image with as little flicker as possible, but we’ll eventually have other things to do. In other words, we need the code driving the display to run in the background, except when we want to change what’s being displayed. Fortunately, the .NET Micro Framework supports multithreading. In our implementation, we’ve added an Initialize method that spins a new thread that is tied to the specific instance of the matrix it’s being called on.

        public LedMatrix Initialize() {
            DisplayThread = new Thread(() => DoDisplay(this));
            DisplayThread.Start();
            return this;
        }

    I quite like this way to spin a thread. As you may know, there is another, built-in way to contextualize a thread by passing an object into the Start method. For that method to work, the thread must have been constructed with a ParameterizedThreadStart delegate, which takes one parameter of type object. I like to use object as little as possible, so instead I’m constructing a closure with a lambda, currying it with the current instance. This way, everything remains strongly-typed and there’s no casting to do. Note that this method would extend perfectly to several parameters.

    Of note as well is the return value of Initialize, a common technique to add some fluency to the API, enabling the matrix to be instantiated and initialized in a single line:

        using (var matrix = new LedMS88SR74HC595().Initialize())

    The “using” in the previous line is there because we have implemented IDisposable, so that the matrix kills the thread and clears the display when the user code is done with it:

        public void Dispose() {
            Clear();
            DisplayThread.Abort();
        }

    Thanks to the multi-threaded version of the matrix driver class, we can treat the display as a simple bitmap with a very synchronous programming model:

        matrix.Set(someimage);
        while (button.Read()) {
            Thread.Sleep(10);
        }

    Here, the call into Set returns immediately, and from the moment the bitmap is set, the background display thread will constantly continue refreshing no matter what happens in the main thread. That enables us to wait or read a button’s port on the main thread, knowing that the current image will continue displaying unperturbed and without requiring manual refreshing. We’ve effectively hidden the implementation of the display behind a convenient, synchronous-looking API. Pretty neat, eh?

    Before I wrap up this post, I want to talk about one small caveat of using SPI rather than driving the shift register directly: when we got to the point where we could actually display images, we noticed that they were a mirror image of what we were sending in. Oh noes! Well, the reason for it is that SPI is sending the bits in a big-endian fashion, in other words backwards. Now sure, you could fix that in software by writing some bit-level code to reverse the bits we’re sending in, but there is a far more efficient solution than that. We are doing hardware here, so we can simply reverse the order in which the outputs of the shift register are connected to the columns of the matrix. That’s switching 8 wires around once, as compared to doing bit operations every time we send a line to display.

    All right, so bringing it all together, here is the code we need to write to display two images in succession, separated by a press on the board’s button:

        var button = new InputPort(Pins.ONBOARD_SW1, false, Port.ResistorMode.Disabled);
        using (var matrix = new LedMS88SR74HC595().Initialize()) {
            // Oh, prototype is so sad!
            var sad = new byte[] { 0x66, 0x24, 0x00, 0x18, 0x00, 0x3C, 0x42, 0x81 };
            DisplayAndWait(sad, matrix, button);
            // Let's make it smile!
            var smile = new byte[] { 0x42, 0x18, 0x18, 0x81, 0x7E, 0x3C, 0x18, 0x00 };
            DisplayAndWait(smile, matrix, button);
        }

    And here is a video of the prototype running: The prototype in action. I’ve added an artificial delay between the display of each row of the matrix to clearly show what’s otherwise happening very fast. This way, you can clearly see each of the two images being displayed line by line.

    Next time, we’ll do no hardware changes, focusing instead on building a nice programming model for the matrix, with sprites, text and hardware scrolling. Fun stuff. By the way, can any of my readers guess where we’re going with all that?

    The code for this prototype can be downloaded here: http://weblogs.asp.net/blogs/bleroy/Samples/NetduinoLedMatrixDriver.zip

    Read the article

  • Introducing Oracle VM Server for SPARC

    - by Honglin Su
    As you are watching Oracle's Virtualization Strategy Webcast and exploring the great virtualization offerings of the Oracle VM product line, I'd like to introduce Oracle VM Server for SPARC - a highly efficient, enterprise-class virtualization solution for Sun SPARC Enterprise systems with Chip Multithreading (CMT) technology.

    Oracle VM Server for SPARC, previously called Sun Logical Domains, leverages the built-in SPARC hypervisor to subdivide a supported platform's resources (CPUs, memory, network, and storage) by creating partitions called logical (or virtual) domains. Each logical domain can run an independent operating system. Oracle VM Server for SPARC provides the flexibility to deploy multiple Oracle Solaris operating systems simultaneously on a single platform. Oracle VM Server also allows you to create up to 128 virtual servers on one system to take advantage of the massive thread scale offered by the CMT architecture.

    Oracle VM Server for SPARC integrates both the industry-leading CMT capability of the UltraSPARC T1, T2 and T2 Plus processors and the Oracle Solaris operating system. This combination helps to increase flexibility, isolate workload processing, and improve the potential for maximum server utilization. Oracle VM Server for SPARC delivers the following:

    - Leading Price/Performance - The low-overhead architecture provides scalable performance under increasing workloads without additional license cost. This enables you to meet the most aggressive price/performance requirements.
    - Advanced RAS - Each logical domain is an entirely independent virtual machine with its own OS. It supports virtual disk multipathing and failover, as well as faster network failover with link-based IP multipathing (IPMP) support. Moreover, it's fully integrated with Solaris FMA (Fault Management Architecture), which enables predictive self-healing.
    - CPU Dynamic Resource Management (DRM) - Enable your resource management policy and domain workload to trigger the automatic addition and removal of CPUs. This ability helps you to better align with your IT and business priorities.
    - Enhanced Domain Migrations - Perform domain migrations interactively and non-interactively to bring more flexibility to the management of your virtualized environment. Improve active domain migration performance by compressing memory transfers and taking advantage of cryptographic acceleration hardware. These methods provide faster migration for load balancing, power saving, and planned maintenance.
    - Dynamic Crypto Control - Dynamically add and remove cryptographic units (aka MAU) to and from active domains. Also, migrate active domains that have cryptographic units.
    - Physical-to-virtual (P2V) Conversion - Quickly convert an existing SPARC server running the Oracle Solaris 8, 9 or 10 OS into a virtualized Oracle Solaris 10 image. Use this image to facilitate OS migration into the virtualized environment.
    - Virtual I/O Dynamic Reconfiguration (DR) - Add and remove virtual I/O services and devices without needing to reboot the system.
    - CPU Power Management - Implement power saving by disabling each core on a Sun UltraSPARC T2 or T2 Plus processor that has all of its CPU threads idle.
    - Advanced Network Configuration - Configure the following network features to obtain more flexible network configurations, higher performance, and scalability: jumbo frames, VLANs, virtual switches for link aggregations, and network interface unit (NIU) hybrid I/O.
    - Official Certification Based On Real-World Testing - Use Oracle VM Server for SPARC with the most sophisticated enterprise workloads under real-world conditions, including Oracle Real Application Clusters (RAC).
    - Affordable, Full-Stack Enterprise Class Support - Obtain worldwide support from Oracle for the entire virtualization environment and workloads together. The support covers hardware, firmware, OS, virtualization, and the software stack.

    SPARC Server Virtualization

    Oracle offers a full portfolio of virtualization solutions to address your needs. SPARC is the leading platform to have the hard partitioning capability that provides the physical isolation needed to run independent operating systems. Many customers have already used Oracle Solaris Containers for application isolation. Oracle VM Server for SPARC provides another important feature with OS isolation. This gives you the flexibility to deploy multiple operating systems simultaneously on a single Sun SPARC T-Series server with finer granularity for computing resources. For SPARC CMT processors, the natural level of granularity is an execution thread, not a time-sliced microsecond of execution resources. Each CPU thread can be treated as an independent virtual processor. The scheduler is naturally built into the CPU for lower overhead and higher performance. Your organization can couple Oracle Solaris Containers and Oracle VM Server for SPARC with the breakthrough space and energy savings afforded by Sun SPARC Enterprise systems with CMT technology to deliver a more agile, responsive, and low-cost environment.

    Management with Oracle Enterprise Manager Ops Center

    The Oracle Enterprise Manager Ops Center Virtualization Management Pack provides full lifecycle management of virtual guests, including Oracle VM Server for SPARC and Oracle Solaris Containers. It helps you streamline operations and reduce downtime. Together, the Virtualization Management Pack and the Ops Center Provisioning and Patch Automation Pack provide an end-to-end management solution for physical and virtual systems through a single web-based console. This solution automates the lifecycle management of physical and virtual systems and is the most effective systems management solution for Oracle's Sun infrastructure.

    Ease of Deployment with Configuration Assistant

    The Oracle VM Server for SPARC Configuration Assistant can help you easily create logical domains. After gathering the configuration data, the Configuration Assistant determines the best way to create a deployment to suit your requirements. The Configuration Assistant is available as both a graphical user interface (GUI) and a terminal-based tool.

    Oracle Solaris Cluster HA Support

    The Oracle Solaris Cluster HA for Oracle VM Server for SPARC data service provides a mechanism for orderly startup and shutdown, fault monitoring and automatic failover of the Oracle VM Server guest domain service. In addition, applications that run on a logical domain, as well as its resources and dependencies, can be controlled and managed independently. These are managed as if they were running on a classical Solaris Cluster hardware node.

    Supported Systems

    Oracle VM Server for SPARC is supported on all Sun SPARC Enterprise systems with CMT technology.

    UltraSPARC T2 Plus systems:
    - Sun SPARC Enterprise T5140 Server
    - Sun SPARC Enterprise T5240 Server
    - Sun SPARC Enterprise T5440 Server
    - Sun Netra T5440 Server
    - Sun Blade T6340 Server Module
    - Sun Netra T6340 Server Module

    UltraSPARC T2 systems:
    - Sun SPARC Enterprise T5120 Server
    - Sun SPARC Enterprise T5220 Server
    - Sun Netra T5220 Server
    - Sun Blade T6320 Server Module
    - Sun Netra CP3260 ATCA Blade Server

    Note that UltraSPARC T1 systems are supported on earlier versions of the software. Sun SPARC Enterprise systems with CMT technology come with the right to use (RTU) Oracle VM Server, and the software is pre-installed. If you have the systems under warranty or with support, you can download the software and system firmware as well as their updates. Oracle Premier Support for Systems provides fully-integrated support for your server hardware, firmware, OS, and virtualization software. Visit oracle.com/support for information about Oracle's support offerings for Sun systems. For more information about Oracle's virtualization offerings, visit oracle.com/virtualization.

    Read the article

  • 17 new features in Visual Studio 2010

    - by vik20000in
    Visual Studio 2010 was released to RTM a few days back. This release comes with a big number of improvements on many fronts. In this post I will try to point out some of the major improvements in Visual Studio 2010.

    1) Visual Studio IDE improvements - The Visual Studio IDE has been rewritten in WPF. The look and feel of the studio has been improved for better readability. The start page has been redesigned and templated so that anyone can change it as they wish.

    2) Multiple monitors - Support for multiple monitors was already there in Visual Studio, but in this edition it has been improved so much that we can now place the document, design and code windows outside the IDE on another monitor.

    3) Zoom in the code editor - Building the editors in WPF has brought significant improvements. The best one that I like is the zoom feature: we can now zoom in the code editor with Ctrl + mouse scroll. The zoom feature does not work on the design surface or on windows with icons, like the solution view and toolbox.

    4) Box selection - Another important improvement in Visual Studio 2010 is box selection. We can select a rectangular region by holding down the Alt key and selecting with the mouse. Within the rectangular selection we can insert text, paste the same code on different lines, etc. This is helpful if you want to convert a number of variables from public to private, for example.

    5) New improved search - One of the best productivity improvements in Visual Studio 2010 is its new search-as-you-type support. This has been done in the Navigate To window, which can be brought up by pressing Ctrl + comma. The Navigate To window also takes advantage of camel casing and can search by camel casing when characters are entered in upper case. For example, we can search "AOH" for "AddOrderHeader".

    6) Call Hierarchy - This feature is only available in the Visual C# and Visual C++ editors. The Call Hierarchy window displays the calls made to and from (yes, both to and from) a selected method, property or constructor. The call hierarchy also shows the implementations of an interface and the overrides of virtual or abstract methods. This window is very helpful in understanding the code flow and evaluating the effect of making changes. The best part is that it is available at design time, not only at runtime like a debugger.

    7) Highlighting references - One of the very cool features in Visual Studio 2010 is that if you select a variable, then all the uses of that variable are highlighted alongside. This works for all the symbols returned by Find All References, including names of classes, object variables, properties and methods. We can also use Ctrl + Shift + Down Arrow or Up Arrow to move through them.

    8) Generate from usage - The Generate From Usage feature lets you use classes and members before you define them. You can generate a stub for any undefined class, constructor, method, property, field, or enum that you want to use but have not yet defined. You can generate new types and members without leaving your current location in code, which minimizes interruption to your workflow.

    9) IntelliSense suggestion mode - IntelliSense now provides two alternatives for statement completion: completion mode and suggestion mode. Use suggestion mode for situations where classes and members are used before they are defined. In suggestion mode, when you type in the editor and then commit the entry, the text you typed is inserted into the code. When you commit an entry in completion mode, the editor shows the entry that is highlighted on the members list. When an IntelliSense window is open, you can press Ctrl+Alt+Spacebar to toggle between completion mode and suggestion mode.

    10) Application Lifecycle Management - A client application for managing the application lifecycle (version control, work item tracking, build automation, team portal, etc.) is available for free (this is not available for the Express edition).

    11) Start page - The start page has been redesigned in WPF with new functionality and a new look. Tabbed areas are provided for content from different sources, including MSDN. Once you open a project, the start page closes automatically. The list of recent projects also lets you remove projects from the list. And above all, the start page is customizable enough to be changed to individual requirements.

    12) Extension Manager - Visual Studio 2010 provides good ways to be extended. We can also use MEF to extend most features of Visual Studio. The new Extension Manager can go to the Visual Studio Gallery and install extensions without even opening a browser.

    13) Code snippets - Visual Studio 2010 adds code snippets for HTML, JScript and ASP.NET as well.

    14) Improved IntelliSense for JavaScript - IntelliSense for JavaScript has been improved vastly (around 2-5 times). IntelliSense now also shows XML documentation comments on the go.

    15) Web deployment - Web deployment has been vastly improved. We can package and publish a web application in one click. The three major supported deployment scenarios are web packages, one-click deployment and web configuration transformation.

    16) SharePoint - Visual Studio 2010 also brings a vastly improved development experience for SharePoint. We can create, edit, debug, package, deploy and activate SharePoint projects from within Visual Studio. Deployment of a site is as easy as hitting F5.

    17) Azure - Visual Studio 2010 also comes with handy improvements for developing in the Windows Azure environment.

    Vikram

    Read the article

  • Adidas by Jeremy Scott and the famous 'Wings'

    - by WoolrichParka
    The upper of this design features the United States flag laid over the shoe, with red stripes on the left foot and the famous Adidas by Jeremy Scott five-star arrangement in white and red on the other foot. Of the many pairs of JS Wings 2.0 silhouettes we've gotten a look at, one of the more recent and awesome pairs features a glow-in-the-dark upper. Another of these Adidas Wings shoes comes with a chrome finish and bones along the lace area. Expect these to drop this fall, along with more from this year's collection.

    Read the article

  • How to use Hybrid Graphic Switch on Sony Vaio Z?

    - by Travis R
    I got it to install nicely and it's all working, but I don't know which graphics card is being used or how to switch between them. I tried installing the official NVIDIA drivers, but then I could not boot my computer afterwards, so I have not installed them again after doing a reinstall of Ubuntu. PS: if you get a GRUB install failure during installation, the key is to tell it where to install the bootloader at the very beginning of the installation, on the partition selection screen (choose /dev/mapper, not the /dev/sda it defaults to).

    Read the article

  • The Incremental Architect's Napkin - #1 - It's about the money, stupid

    - by Ralf Westphal
    Originally posted on: http://geekswithblogs.net/theArchitectsNapkin/archive/2014/05/24/the-incremental-architectacutes-napkin---1---itacutes-about-the.aspx

    Software development is an economic endeavor. A customer is only willing to pay for value. What makes a software valuable is required to become a trait of the software. We as software developers thus need to understand and then find a way to implement requirements.

    Whether, or to what extent, a customer really can know beforehand what's going to be valuable for him/her in the end is a topic of constant debate. Some aspects of the requirements might be less foggy than others. Sometimes the customer does not know what he/she wants. Sometimes he/she's certain to want something - but then is not happy when that's delivered.

    Nevertheless requirements exist. And developers will only be paid if they deliver value. So we better focus on doing that. Although it might sound trivial, I think it's important to state the corollary: we need to be able to trace anything we do as developers back to some requirement.

    - You decide to use Go as the implementation language? Well, what's the customer's requirement this decision is linked to?
    - You decide to use WPF as the GUI technology? What's the customer's requirement?
    - You decide in favor of a layered architecture? What's the customer's requirement?
    - You decide to put code in three classes instead of just one? What's the customer's requirement behind that?
    - You decide to use MongoDB over MySql? What's the customer's requirement behind that?

    Etc. I'm not saying any of these decisions are wrong. I'm just saying: whatever you decide, be clear about the requirement that's driving your decision. You have to be able to answer the question: why do you think X will deliver more value to the customer than the alternatives?

    Customers are not interested in romantic ideals of hard-working, well-meaning, quality-focused craftsmen. They don't care how and why you work - as long as what you deliver fulfills their needs. They want to trust you to recognize this as your top priority - and then deliver. That's all.

    Fundamental aspects of requirements

    If you're like me, you're probably not used to such scrutinization. You want to be trusted as a professional developer - and decide quite a few things following your gut feeling. Or by relying on "established practices". That's ok in general and most of the time - but still... I think we should be more conscious about our decisions. That would make us more responsible, even more professional.

    But without further guidance it's hard to reason about many of the myriad decisions we have to make over the course of a software project. What I found helpful in this situation is structuring requirements into fundamental aspects. Instead of one large heap of requirements, there are then smaller blobs. With them it's easier to check if a decision falls in their scope.

    Sure, every project has its very own requirements. But all of them belong to just three different major categories, I think. Any requirement pertains either to functionality, non-functional aspects or sustainability. For short, I call those aspects:

    - Functionality, because such requirements describe which transformations a software should offer. For example: a calculator software should be able to add and multiply real numbers. An auction website should enable you to set up an auction anytime or to find auctions to bid for.
    - Quality, because such requirements describe how functionality is supposed to work, e.g. fast or secure. For example: a calculator should be able to calculate the sinus of a value much faster than you could in your head. An auction website should accept bids from millions of users.
    - Security of Investment, because functionality and quality need not just be delivered in any way. It's important to the customer to get them quickly - and not only today, but over the course of several years. This aspect introduces time into the "requirements equation".

    Security of Investment (SoI) sure is a non-functional requirement. But I think it's important to not subsume it under the Quality (Q) aspect. That's because SoI has quite special properties.

    For one, SoI for software means something completely different from what it means for hardware. If you buy hardware (a car, a hair blower), you find that a worthwhile investment if the hardware does not change its functionality or quality over time. A car still running smoothly with hardly any rust spots after 10 years of daily usage would be a very secure investment. So for hardware (or material products, if you like), "unchangeability" (in the face of usage) is desirable.

    With software you want the contrary. Software that cannot be changed is a waste. SoI for software means "changeability". You want to be sure that the software you buy/order today can be changed, adapted, improved over an unforeseeable number of years so as to fit changes in its usage environment.

    But that's not the only reason why the SoI aspect is special. On top of changeability[1] (or evolvability) comes immeasurability. Evolvability cannot readily be measured by counting something. Whether the changeability is as high as the customer wants it cannot be determined by looking at metrics like Lines of Code or Cyclomatic Complexity or Afferent Coupling. They may give a hint... but they are far, far from precise. That's because of the nature of changeability. It's different from performance or scalability. Also, it's because a customer cannot tell upfront "how much" evolvability he/she wants.

    Whether requirements regarding Functionality (F) and Q have been met, a customer can tell you very quickly and very precisely. A calculation is missing, the calculation takes too long, the calculation time degrades with increased load, the calculation is accessible to the wrong users, etc. That's all very, or at least comparatively, easy to determine. But changeability... that's a whole different thing. Nevertheless, over time the customer will develop a feeling for whether changeability is good enough or degrading. He/she just has to check the development of the frequency of "WTF"s from developers ;-)

    F and Q are "timeless" requirement categories. Customers want us to deliver on them now. Just focusing on the now, though, is rarely beneficial in the long run. So SoI adds a counterweight to the requirements picture. Customers want SoI - whether they know it or not, whether they state it explicitly or not.

    In closing

    A customer's requirements are not monolithic. They are not all made the same. Rather, they fall into different categories. We as developers need to recognize these categories when confronted with some requirement - and take them into account. Only then can we make true professional decisions, i.e. conscious and responsible ones.

    [1] I call this fundamental trait of software "changeability" and not "flexibility" to distinguish whose concern it is. "Flexibility" to me means that the software as is can easily be adapted to a change in its environment, e.g. by tweaking some config data or adding a library which gets picked up by a plug-in engine. "Flexibility" thus is a matter of some user. "Changeability", on the other hand, to me means that the software can easily be changed in its structure to adapt it to new requirements. That's a matter of the software developer.

    Read the article

  • Windows and SQL Azure Best Practices: Affinity Groups

    - by BuckWoody
    When you create a Windows Azure application, you'll pick a subscription to put it under. This is a billing container - underneath that, you'll deploy a Hosted Service. That holds the Web and Worker Roles that you'll deploy for your applications. Alongside that, you use a Storage Account to create storage for the application. (In some cases, you might choose to use only storage or only Roles - the info here applies anyway.)

    As you are setting up your environment, you're asked to pick a "region" where your application will run. If you choose a Region, you'll be asked where to put the Roles. You're given choices like Asia, North America and so on. This is where the hardware that physically runs your code lives. We have lots of fault domains, power considerations and so on to keep that set of datacenters running, but keep in mind that this is where the application lives.

    You also get this selection for Storage Accounts. When you create new storage, it's a best practice to put it where your computing is. This makes the shortest path from the code to the data, and then back out to the user.

    One of the selections for the location is "Anywhere U.S.". This selection might be interpreted to mean that we will bias towards keeping the data and the code together, but that may not be the case. There is a specific abstraction we created for just that purpose: Affinity Groups.

    An Affinity Group is simply a name you can use to tie resources together. You can do this in two places - when you're creating the Hosted Service (shown above) and on its own tree item on the left, called "Affinity Groups". When you select either of those actions, you're presented with a dialog box that allows you to specify a name and the Region that name ties the resources to. Now you can select that Affinity Group just as if it were a Region, and your code and data will stay together. That helps with keeping the performance high.

    Official documentation: http://msdn.microsoft.com/en-us/library/windowsazure/hh531560.aspx

    Read the article

  • How Does a Touch Screen Phone Work? [Chart]

    - by Asian Angel
    There are three types of touch screen technologies available in today's touch screen phones: resistive, capacitive, and infra-red. Learn about the different benefits and capabilities of each and make a more informed decision about your next mobile phone selection with this helpful chart. How Does a Touch Screen Phone Work? [via GraphJam]

    Read the article

  • Oracle Database, your best option for reducing IT costs

    - by Ivan Hassig
    Por Victoria Cadavid Sr. Sales Cosultant Oracle Direct Uno de los principales desafíos en la administración de centros de datos es la reducción de costos de operación. A medida que las compañías crecen y los proveedores de tecnología ofrecen soluciones cada vez más robustas, conservar el equilibrio entre desempeño, soporte al negocio y gestión del Costo Total de Propiedad es un desafío cada vez mayor para los Gerentes de Tecnología y para los Administradores de Centros de Datos. Las estrategias más comunes para conseguir reducción en los costos de administración de Centros de Datos y en la gestión de Tecnología de una organización en general, se enfocan en la mejora del desempeño de las aplicaciones, reducción del costo de administración y adquisición de hardware, reducción de los costos de almacenamiento, aumento de la productividad en la administración de las Bases de Datos y mejora en la atención de requerimientos y prestación de servicios de mesa de ayuda, sin embargo, las estrategias de reducción de costos deben contemplar también la reducción de costos asociados a pérdida y robo de información, cumplimiento regulatorio, generación de valor y continuidad del negocio, que comúnmente se conciben como iniciativas aisladas que no siempre se adelantan con el ánimo de apoyar la reducción de costos. Una iniciativa integral de reducción de costos de TI, debe contemplar cada uno de los factores que  generan costo y pueden ser optimizados. En este artículo queremos abordar la reducción de costos de tecnología a partir de la adopción del que según los expertos es el motor de Base de Datos # del mercado.Durante años, la base de datos Oracle ha sido reconocida por su velocidad, confiabilidad, seguridad y capacidad para soportar cargas de datos tanto de aplicaciones altamente transaccionales, como de Bodegas de datos e incluso análisis de Big Data , ofreciendo alto desempeño y facilidades de administración, sin embrago, cuando pensamos en proyectos de reducción de costos de IT, además de la capacidad para soportar aplicaciones (incluso aplicaciones altamente transaccionales) con alto desempeño, pensamos en procesos de automatización, optimización de recursos, consolidación, virtualización e incluso alternativas más cómodas de licenciamiento. La Base de Datos Oracle está diseñada para proveer todas las capacidades que un área de tecnología necesita para reducir costos, adaptándose a los diferentes escenarios de negocio y a las capacidades y características de cada organización.Es así, como además del motor de Base de Datos, Oracle ofrece una serie de soluciones para optimizar la administración de la información a través de mecanismos de optimización del uso del storage, continuidad del Negocio, consolidación de infraestructura, seguridad y administración automática, que propenden por un mejor uso de los recursos de tecnología, ofrecen opciones avanzadas de configuración y direccionan la reducción de los tiempos de las tareas operativas más comunes. Una de las opciones de la base de datos que se pueden provechar para reducir costos de hardware es Oracle Real Application Clusters. 
Esta solución de clustering permite que varios servidores (incluso servidores de bajo costo) trabajen en conjunto para soportar Grids o Nubes Privadas de Bases de Datos, proporcionando los beneficios de la consolidación de infraestructura, los esquemas de alta disponibilidad, rápido desempeño y escalabilidad por demanda, haciendo que el aprovisionamiento, el mantenimiento de las bases de datos y la adición de nuevos nodos se lleve e cabo de una forma más rápida y con menos riesgo, además de apalancar las inversiones en servidores de menor costo. Otra de las soluciones que promueven la reducción de costos de Tecnología es Oracle In-Memory Database Cache que permite almacenar y procesar datos en la memoria de las aplicaciones, permitiendo el máximo aprovechamiento de los recursos de procesamiento de la capa media, lo que cobra mucho valor en escenarios de alta transaccionalidad. De este modo se saca el mayor provecho de los recursos de procesamiento evitando crecimiento innecesario en recursos de hardware. Otra de las formas de evitar inversiones innecesarias en hardware, aprovechando los recursos existentes, incluso en escenarios de alto crecimiento de los volúmenes de información es la compresión de los datos. Oracle Advanced Compression permite comprimir hasta 4 veces los diferentes tipos de datos, mejorando la capacidad de almacenamiento, sin comprometer el desempeño de las aplicaciones. Desde el lado del almacenamiento también se pueden conseguir reducciones importantes de los costos de IT. En este escenario, la tecnología propia de la base de Datos Oracle ofrece capacidades de Administración Automática del Almacenamiento que no solo permiten una distribución óptima de los datos en los discos físicos para garantizar el máximo desempeño, sino que facilitan el aprovisionamiento y la remoción de discos defectuosos y ofrecen balanceo y mirroring, garantizando el uso máximo de cada uno de los dispositivos y la disponibilidad de los datos. Otra de las soluciones que facilitan la administración del almacenamiento es Oracle Partitioning, una opción de la Base de Datos que permite dividir grandes tablas en estructuras más pequeñas. Esta aproximación facilita la administración del ciclo de vida de la información y permite por ejemplo, separar los datos históricos (que generalmente se convierten en información de solo lectura y no tienen un alto volumen de consulta) y enviarlos a un almacenamiento de bajo costos, conservando la data activa en dispositivos de almacenamiento más ágiles. Adicionalmente, Oracle Partitioning facilita la administración de las bases de datos que tienen un gran volumen de registros y mejora el desempeño de la base de datos gracias a la posibilidad de optimizar las consultas haciendo uso únicamente de las particiones relevantes de una tabla o índice en el proceso de búsqueda. Otros factores adicionales, que pueden generar costos innecesarios a los departamentos de Tecnología son: La pérdida, corrupción o robo de datos y la falta de disponibilidad de las aplicaciones para dar soporte al negocio. Para evitar este tipo de situaciones que pueden acarrear multas y pérdida de negocios y de dinero, Oracle ofrece soluciones que permiten proteger y auditar la base de datos, recuperar la información en caso de corrupción o ejecución de acciones que comprometan la integridad de la información y soluciones que permitan garantizar que la información de las aplicaciones tenga una disponibilidad de 7x24. 
    Other factors that can generate unnecessary costs for IT departments are the loss, corruption, or theft of data, and the unavailability of the applications that support the business. To avoid situations of this kind, which can bring fines and lost business, Oracle offers solutions to protect and audit the database, to recover information after corruption or after actions that compromise its integrity, and to guarantee 7x24 availability of application data.

    We have already discussed the benefits of Oracle RAC for consolidation and application performance; the same solution is extremely useful where organizations want to guarantee high availability in the event of server failure, or during planned outages for maintenance work. Beyond Oracle RAC, solutions such as Oracle Data Guard and Active Data Guard automatically replicate databases to a contingency data center, allowing immediate recovery from events that take an entire data center offline. Active Data Guard additionally allows the standby database to serve read-only queries, improving application performance.

    On the security side, Oracle offers Advanced Security, which encrypts data and the channels through which it travels; Total Recall, which shows the changes made to the database as of a given point in time, guarding against data loss and corruption; Database Vault, which restricts privileged users' access to confidential information; Audit Vault, which records who did what, and when, across an organization's databases; and Oracle Data Masking, which masks data to protect sensitive information and meet the policies and regulations that govern it, for example while applications move from development to production.

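    As a rough sketch of two of these capabilities (the table and column names are invented, and both assume the corresponding options are licensed and the encryption wallet is configured), column-level encryption and a point-in-time flashback query might look like this:

        -- Hypothetical: store a sensitive column encrypted on disk (TDE).
        CREATE TABLE customers (
          customer_id NUMBER,
          card_number VARCHAR2(19) ENCRYPT
        );

        -- Hypothetical: query the table as it looked one hour ago.
        SELECT customer_id
          FROM customers AS OF TIMESTAMP (SYSTIMESTAMP - INTERVAL '1' HOUR);
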
    As mentioned at the outset, technology cost reduction initiatives must rest on strategies that consider the different factors that can generate cost overruns, the risks that can bring unplanned costs, the use of current resources to avoid unnecessary investments, and the optimizations that get the most out of existing investments. As we have seen, all of these initiatives can be addressed with Oracle technology at the database level. The most important steps are to identify the critical risk points, measure how well current resources are being used, and define the priorities of the organization and the IT department, in order to launch the initiatives that will gradually eliminate cost overruns and unnecessary investments, providing stronger business support and a significant impact on the organization's productivity.

    More information: http://www.oracle.com/lad/products/database/index.html?ssSourceSiteId=otnes

    ¹ Source: Market Share: All Software Markets, Worldwide 2011 by Colleen Graham, Joanne Correia, David Coyle, Fabrizio Biscotti, Matthew Cheung, Ruggero Contu, Yanna Dharmasthira, Tom Eid, Chad Eschinger, Bianca Granetto, Hai Hong Swinehart, Sharon Mertz, Chris Pang, Asheesh Raina, Dan Sommer, Bhavish Sood, Marianne D'Aquila, Laurie Wurster and Jie Zhang. March 29, 2012.
    ² Big Data: information gathered from non-traditional sources such as blogs, social networks, email, sensors, photographs, video recordings, etc., typically unstructured and very large in volume.

    Read the article

  • Daily tech links for .net and related technologies - Mar 26-28, 2010

    - by SanjeevAgarwal
    Daily tech links for .net and related technologies - Mar 26-28, 2010

    Web Development
    Creating Rich View Components in ASP.NET MVC - manzurrashid
    Diagnosing ASP.NET MVC Problems - Brad Wilson
    Templated Helpers & Custom Model Binders in ASP.NET MVC 2 - gshackles
    The jQuery Templating Plugin and Why You Should Be Excited! - Chris Love
    Web Deployment Made Awesome: If You're Using XCopy, You're Doing It Wrong - Scott Hanselman
    Dynamic User Specific CSS Selection at Run Time - Misfit Geek
    Sending email...(read more)

    Read the article

  • Add Notes to Zoho Notebook in Firefox

    - by Asian Angel
    As you browse the web during the day, you probably find items that catch your interest and would like to save. The Zoho Notebook Helper extension for Firefox provides an easy way to add those items to your Zoho account.

    Using Zoho Notebook Helper: Using the extension is easy and straightforward. Highlight the text, images, and links that you want to save, then right-click and select Add to Zoho Notebook. Note: it is recommended that you leave your status bar visible while using the extension. You can choose to add the selection to a new or pre-existing notebook or page; we created a new page for our example. Once your selection has been added to your account, the formatting is retained nicely, including images and other rich content. Notice the link at the top of the note: clicking on it will open the original webpage in a new tab. The notebook mini pane can also pop out into a separate window if needed; you can resize the new external window as desired and send it back to your browser when ready. As you build up the number of notebooks and pages, you can easily navigate between them using the drop-down menu in the mini pane's upper right corner. The ease of use makes this a must-have extension for Zoho fans. Keep in mind that the extension will be temporarily disabled if you have your online account open in a tab.

    Conclusion: Zoho Office doesn't get much love compared to other online office solutions like Google Docs or the new Microsoft Web Apps. However, if you are a Zoho user, the Zoho Notebook Helper extension makes it very easy to add those notes, links, and images to your online account for later reference.

    Links: Install the Zoho Notebook Helper extension (Zoho Website)

    Read the article

  • IDC Analyst Mike Fauscette Writes About Oracle And The Cloud

    - by Roxana Babiciu
    "It's becoming clear that cloud is now a core part of Oracle's strategy," says analyst Michael Fauscette in his post-OpenWorld article in Seeking Alpha. He believes we have a well-rounded portfolio "with a cloud platform/infrastructure, a broad selection of apps, and a partner marketplace." From his numerous conversations with customers, he highlights their continual interest in hybrid deployments and also in shifting apps to the cloud. Read more.

    Read the article

  • Java Space on Parleys

    - by Yolande Poirier
    Now available! A great selection of JavaOne 2010 and JVM Language Summit 2010 sessions, as well as Oracle Technology Network TechCasts, on the new Java Space on Parleys website. Oracle partnered with Stephan Janssen, founder of Parleys, to make this happen. The Parleys website offers a user-friendly experience for viewing online content. You can download some of the talks to your desktop or watch them on the go on mobile devices. The current selection is a wealth of expertise from top Java luminaries and Oracle experts.

    JavaOne 2010 sessions:
    · Best practices for signing code by Sean Mullan
    · Building software using rich client platforms by Rickard Thulin
    · Developing beyond the component libraries by Ryan Cuprak
    · Java API for keyhole markup language by Florian Bachmann
    · Avoiding common user experience anti-patterns by Burk Hufnagel
    · Accelerating Java workloads via GPUs by Gary Frost

    JVM Language Summit 2010 sessions:
    · Mixed language project compilation in Eclipse by Andy Clement
    · Gathering the threads by John Rose
    · LINQ: language features for concurrency by Neal Gafter
    · Improvements in OpenJDK useful for JVM languages by Eric Caspole
    · Symmetric Multilanguage - VM Architecture by Oleg Pliss

    Special interviews with Oracle experts on product innovations:
    · Ludovic Champenois, Java EE architect, on GlassFish 3.1 and Java EE
    · John Jullion-Ceccarelli and Martin Ryzl on NetBeans IDE 6.9

    You can choose to listen to a section of talks using the agenda view and search for related content while watching a presentation. Enjoy the Java content and vote on it!

    Read the article

< Previous Page | 126 127 128 129 130 131 132 133 134 135 136 137  | Next Page >