Search Results


  • NDC Oslo

    - by Alan Smith
    Originally posted on: http://geekswithblogs.net/asmith/archive/2013/06/14/153136.aspx

    2013 has been a hectic year for conference presentations so far. NDC in Oslo was the 6th conference I have attended, and my session there was my 11th conference presentation this year. I have been meaning to make the short trip over from Stockholm to NDC for a few years, and this was the first time I made it. I had heard a lot of great things about the event, and was impressed with the location, the sessions, and most of all the atmosphere around the event booths and during the party on Thursday evening.

    The session I was delivering was my "Grid Computing with 256 Windows Azure Worker Roles & Kinect" demo, which I have delivered at many events over the past 12 months. The demo went fine. I'm always a little nervous when I try to scale out the application to 256 worker roles; it almost always works well and the application will scale in minutes, but very occasionally there can be a longer delay due to the provisioning process in the Windows Azure data centers. This would not be an issue for many scenarios, but when standing on stage in front of a room full of developers you really want things to run smoothly.

    A number of people have suggested that I should pre-provision an environment so that it is guaranteed to be there when I run the demo during a session. For me the aim has always been to show the rapid scalability of cloud-based platforms live on stage. Pre-provisioning an environment may make for a more reliable demo, but to me that would be cheating, and not half as much fun!

    Read the article

  • How to select all labeled messages in Gmail inbox?

    - by Tony
    I have dozens of labels and sub-labels and dozens of filters. I use 'skip inbox' on a lot of filters, but have many more that I want to see in the inbox before I archive them. It's easy to shift-select a few dozen at a time to archive when they are NOT separated by non-labeled mail. What I want is to be able to select all mail in the inbox that has any label, and archive it with one click, so that nothing but unlabeled mail is left in the inbox. This doesn't seem any different from the 'Select starred' option, except that I can't find a wildcard search operator for labels.

    Read the article

  • Best arguments for/against introducing ORM technology into a company's dev process

    - by james
    I have started using ORM technology in the last few years. My first exposure was NHibernate; I then moved on to Linq 2 Sql and Entity Framework. The issue I have, however, is that there are some organisations where I have found strong opposition to introducing ORM tools. They usually have a number of reasons:

    - they have a lot of built-up SQL skills in the team, and are worried about the underlying SQL that ORMs generate.
    - they have DBAs who like to be able to see the SQL an app uses, so that they can review it for best practice.
    - they are worried about performance (some people have "heard" that ORMs aren't as performant, but have no real proof themselves - there may well be some truth in this! :)

    So, I'm looking for the best or most convincing arguments that you have put forward FOR the use of ORM tools. Equally, I would be interested in the arguments against. Note: this is NOT a discussion over which ORM I should use.

    Read the article

  • jtreg update, December 2012

    - by jjg
    There is a new version of jtreg available. The primary new feature is support for tests that have been written for use with TestNG, the popular open source testing framework. TestNG is supported by a variety of tools and plugins, which means that it is now possible to develop tests for OpenJDK using those tools, while still retaining the ability to have the tests be part of the OpenJDK test suite, and run with a single test harness, jtreg. jtreg can be downloaded from the OpenJDK jtreg page: http://openjdk.java.net/jtreg.

    TestNG support. jtreg supports both single TestNG tests, which can be freely intermixed with other types of jtreg tests, and groups of TestNG tests. A single TestNG test class can be compiled and run by providing a test description using the new action tag:

        @run testng classname

    The test will be executed by using org.testng.TestNG. No main method is required. A group of TestNG tests organized in a standard package hierarchy can also be compiled and run by jtreg. Any such group must be identified by specifying the root directory of the package hierarchy. You can either do this in the top level TEST.ROOT file, or in a TEST.properties file in any subdirectory enclosing the group of tests. In either case, add a line to the file of the form:

        TestNG.dirs = dir ...

    Directories beginning with '/' are evaluated relative to the root directory of the test suite; otherwise they are evaluated relative to the directory containing the declaring file. In particular, note that you can simply use "TestNG.dirs = ." in a TEST.properties file in the root directory of the test group's package hierarchy. No additional test descriptions are necessary, but test descriptions containing information tags, such as @bug, @summary, etc. are permitted. All the Java source files in the group will be compiled if necessary, before any of the tests in the group are run. The selected tests within the group will be run, one at a time, using org.testng.TestNG.

    Library classes. The specification for the @library tag has been extended so that any paths beginning with '/' will be evaluated relative to the root directory of the test suite. In addition, some bugs have been fixed that prevented sharing the compiled versions of library classes between tests in different directories. Note: this has uncovered some issues in tests that use a combination of @build and @library tags, such that some tests may fail unexpectedly with ClassNotFoundException. The workaround for now is to ensure that library classes are listed before the test classes in any @build tags. To specify one or more library directories for a group of TestNG tests, add a line of the following form to the TEST.properties file in the root directory of the group's package hierarchy:

        lib.dirs = dir ...

    As before, directories beginning with '/' are evaluated relative to the root directory of the test suite; otherwise they are evaluated relative to the directory containing the declaring file. The libraries will be available to all classes in the group; you cannot specify different libraries for different tests within the group.

    Coming soon. From this point on, jtreg development will be using the new jtreg repository in the OpenJDK code-tools project. There is a new email alias, jtreg-dev at openjdk.java.net, for discussions about jtreg development. The existing alias jtreg-use at openjdk.java.net will continue to be available for questions about using jtreg.

    For more information. An updated version of the jtreg Tag Language Specification is being prepared, and will be made available when it is ready. In the meantime, you can find more information about the support for TestNG by executing the following command:

        $ jtreg -onlinehelp TestNG

    For more information on TestNG itself, visit testng.org.
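
    As a concrete sketch of how a TestNG group might be laid out and run (the directory name and JDK path here are hypothetical, not from the announcement; the property lines are the ones described above):

        # TEST.properties in the root of the group's package hierarchy
        $ cat mytests/TEST.properties
        TestNG.dirs = .
        lib.dirs = /lib
        # run the group with jtreg against a chosen JDK
        $ jtreg -testjdk:/opt/jdk mytests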

    Read the article

  • Which protocol do clients use when communicating with servers in a SAN?

    - by Mario De Schaepmeester
    I'm trying to wrap my head around how a SAN works and how it is implemented. If I understand this correctly, clients wanting to access the storage devices in a SAN need to communicate with the servers via the LAN. When the SAN is implemented with Fibre Channel, these servers are Fibre Channel-compliant devices, and internally in the SAN they work with the Fibre Channel Protocol; both data and communications are supported by Fibre Channel. But which application-layer protocol do the clients use in the LAN to communicate with the servers? Is the data simply transferred via Ethernet as well? This is the part I am stuck on. I went through a lot of sources, but most don't really mention protocols, and if they do, they only mention FCP.

    Read the article

  • How should I determine my rates for writing custom software?

    - by Carson Myers
    For custom software that will likely take a year or more to develop, how would I go about determining what to charge as a consultant? I'm having a hard time coming up with a number, and searches online turn up vastly different numbers (between $55/hr and $300/hr). I don't want to shoot too low, because it's going to take me so much time (and I'm deferring my education for this project). I also don't want to shoot too high and get unpleasant looks and demands for justification. FWIW, I live in Canada and have approx. 10 years of development experience. I've read the "take your salary and divide it by 1000" rule of thumb, but the thing is, I don't have a salary. Currently I'm just doing fairly small programming tasks for a friend who is starting a marketing company, pricing each task fairly arbitrarily. I don't know what I would make over the course of a year doing it, but it would be incredibly low. My responsibilities for the project would be the architecture, programming, database, server, and UX to some degree. It's going to be a public-facing web service, so I will also need to put a lot of effort into security and scalability. Any advice or experience?

    Read the article

  • Connect to RDS inside a VPC using OpsWorks located in another VPC

    - by Consuelo Merino
    I have an RDS instance (MySQL) inside a VPC called vpc-a (10.0.0.0/16). This instance is private; it can only be accessed from vpc-a. We created a stack on OpsWorks inside another VPC called vpc-b (10.1.0.0). We want to connect OpsWorks to the RDS instance, but it doesn't work; it refuses to connect. I tried adding that subnet to the RDS security group. I've also read a lot of documentation, but I haven't stumbled across the answer. Any help would be greatly appreciated.
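
    For reference, the security-group change described above would look something like this with the AWS CLI (a hedged sketch; the group id is a placeholder, and note that traffic between two VPCs also needs a route between them, e.g. VPC peering, before any security group rule can take effect):

        # allow MySQL from vpc-b's CIDR into the RDS security group in vpc-a
        aws ec2 authorize-security-group-ingress \
            --group-id sg-12345678 --protocol tcp --port 3306 --cidr 10.1.0.0/16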

    Read the article

  • Why does Windows Calculator always start minimised?

    - by wows
    I often start Calculator by pressing the little Calculator button on my MS keyboard - but for some stupid reason it always starts minimized - meaning I have to restore it first before I can use it. Any idea why it would do this? It appears to start fine if I start it any other way. Update: I've just realized this only happens when Visual Studio 2008 is in focus - which is a lot of the time on my machine. Weird! Still annoying though. Update 2: OK, so it's not just Visual Studio - but Calculator starts without focus in all programs. Some programs it starts under, and some programs it starts over the top, but still not focused - so you have to click on the program itself or on the taskbar.

    Read the article

  • Cut in Excel doesn't work, and copying tables from one program to another returns text

    - by Kristina
    Excel 2007 on my Windows 7 machine seems to have a problem with the regular cut function. When I highlight cells I want to cut and press cut (via the keyboard shortcut Ctrl+X, the Home menu cut command, or the right-click menu), the cells flash for a split second and then just return to normal. When I then paste them, they paste as if the copy function had been used. If I right-click to use the "insert cut cells" function, it is not one of the offered options at all. On my home computer I have the same combination, Excel 2007 on Windows 7, and it works just fine. Could the problem be due to the 64-bit Win7 version at my job versus the 32-bit version at home? Another problem: when I copy a table from Excel to Word, pasting results in unformatted text instead of a table as it was in Excel. Has anyone had these problems and can offer a solution? Thanks a lot.

    Read the article

  • Traffic shaping for certain (local) users

    - by JMW
    Hello, I'm using Ubuntu 10.10. I have a local backup user called "backup". :) I would like to give this user just 1Mbit of bandwidth, no matter which software wants to connect to the network. This solution doesn't work:

        iptables -t mangle -A OUTPUT -p tcp -m owner --uid-owner 1001 -j MARK --set-mark 12
        iptables -t mangle -A POSTROUTING -p tcp -m owner --uid-owner 1001 -j MARK --set-mark 12
        tc qdisc del dev eth0 root
        tc qdisc add dev eth0 root handle 2 htb default 1
        tc filter add dev eth0 parent 2: protocol ip pref 2 handle 50 fw classid 2:6
        tc class add dev eth0 parent 2: classid 2:6 htb rate 10Kbit ceil 1Mbit
        tc qdisc show dev eth0
        tc class show dev eth0
        tc filter show dev eth0

    Does anyone know how to do it? Thanks a lot in advance.
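
    One thing that stands out (a hedged observation, not a verified fix): the fw classifier matches packets by their firewall mark, so the tc filter's handle has to equal the mark set by iptables - here the mark is 12 but the filter uses handle 50. The owner match is also only valid in the OUTPUT chain, so the POSTROUTING rule can be dropped. A minimal sketch along those lines (the 100Mbit default-class rate is an assumed link speed):

        # mark locally generated traffic from uid 1001 (owner match works in OUTPUT only)
        iptables -t mangle -A OUTPUT -p tcp -m owner --uid-owner 1001 -j MARK --set-mark 12
        # htb root; unmarked traffic falls into the default class 2:1
        tc qdisc add dev eth0 root handle 2: htb default 1
        tc class add dev eth0 parent 2: classid 2:1 htb rate 100Mbit
        tc class add dev eth0 parent 2: classid 2:6 htb rate 10Kbit ceil 1Mbit
        # handle 12 here must match --set-mark 12 above
        tc filter add dev eth0 parent 2: protocol ip prio 2 handle 12 fw classid 2:6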

    Read the article

  • Creating an expandable, cross-platform compatible program "core".

    - by Thomas Clayson
    Hi there. Basically the brief is relatively simple. We need to create a program core: an engine that will power all sorts of programs, with a large number of distinct potential applications and deployments. The core will be an analytics and algorithmic processor which will essentially take user-specific input and output scenarios based on the information it gets, whilst recording this information for reporting. It needs to be cross-platform compatible: something that can have platform-specific layers put on top which can interface with the core. It also needs to be expandable - for instance, modular, with developers being able to write "add-ons" or "extensions" which can alter the function of the end program and can use the core to its full extent. (A good example of what I'm looking to create is a browser. It has its main core, the WebKit engine for instance, and then on top of this it has a platform-specific GUI and can also have add-ons and extensions which can change the behavior of the program.) Our problem is that the extensions need to interface directly with the main core and expand/alter that functionality, rather than the platform-specific "layer". So, given that I have no experience in this whatsoever (I have a PHP background and recently Objective-C), where should I start, and is there any knowledge/wisdom you can impart on me please? Thanks for all the help and advice you can give me. :) If you need any more explanation, just ask. At the moment it's in the very early stages of development, so we're just researching all possible routes of development. Thanks a lot.

    Read the article

  • Puppet, Nagios, Munin on cPanel based hosts

    - by WinkyWolly
    I've been managing 20-30~ cPanel-based hosts over the past year with Puppet, Nagios and Munin for general monitoring / trending; however, a lot of the methods I've had to use to deploy / manage things such as configurations are a pain. For those of you who aren't familiar with cPanel: it adds a few things to yum's exclude list, such as perl*, ruby* and so forth. This causes issues with me being able to bootstrap monitoring on a new server via Puppet (well, via the Package type) due to a bunch of conflicts when installing via Yum. Now, I could create a custom RPM for everything and remove certain dependencies from the spec file, but I would like to avoid this if possible. Does anyone have any proposed functional ways to manage this sort of environment? Currently I install Puppet, Facter and Munin via RPMs and force-install using --nodeps and such (since the dependencies are installed, just not the ones Yum wants). Nagios I installed manually from source for now (I will likely create RPMs, but I want to tackle this general issue first).
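
    One way to sidestep the --nodeps forcing (a hedged sketch, assuming the excludes live in the main section of /etc/yum.conf and that the package names resolve from an EPEL-style repo):

        # see what cPanel excludes
        grep ^exclude /etc/yum.conf
        # ignore the excludes for this one transaction only
        yum --disableexcludes=main install puppet facter munin-node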

    Read the article

  • nVidia Driver - Laptop is forced as one of my monitors

    - by vaccano
    I am trying to get my nVidia driver to correctly configure my multi-monitor setup. I have my laptop in a docking station and two monitors hooked up to it. I had an old driver that this worked correctly with. However, that driver was causing a lot of "Deferred Procedure Calls", so I upgraded to a newer driver. But now I am forced to use my laptop monitor as one of my monitors. Here is the image in the nVidia Control Panel: As you can see, both monitors are recognized, but the only options available are to use one of them together with the laptop display. Any ideas? I am running Windows XP (latest updates) and I have an nVidia Quadro 1500M. I have tried several different driver versions, and all the new ones have this issue.

    Read the article

  • PeopleSoft RECONNECT Conference Unites the PeopleSoft Community

    - by Marc Weintraub
    The PeopleSoft team is looking forward to participating in this new PeopleSoft deep-dive conference from the Quest International Users Group. We've worked diligently with the leadership of Quest's PeopleSoft Special Interest Groups (SIGs) and Regional User Groups (RUGs) to make sure this national user event delivers PeopleSoft content that meets the needs of the PeopleSoft community. The inaugural PeopleSoft RECONNECT conference will be held August 27-29, 2012 in Hartford, Connecticut. Through our Product Strategy, Development and Support teams, Oracle will provide support for education sessions in these key tracks:

    - Human Capital Management (HCM)
    - Financials (FMS)
    - Supplier Relationship Management (SRM)
    - Supply Chain, Manufacturing & Distribution (SCM)
    - Project Costing
    - Applications Technology (PeopleTools)

    Oracle will host a general session from John Webb, plus roadmap sessions for the major PeopleSoft product areas. We will also host enhancement discussions for our key PeopleSoft solutions, allowing participants to contribute to the future of PeopleSoft through an interactive forum. All of this is part of the 100+ education sessions being offered by the customer and vendor community. There's a lot of buzz around this conference, so don't delay in registering key members of your team today. We look forward to seeing you there, so register NOW!

    Read the article

  • How can I limit the upload/download bandwidth on my CentOS server?

    - by Dan Nestor
    How can I limit the upload and download bandwidth on my CentOS server? This is a box with a single interface, eth0. Ideally, I would like a command-line solution (I've been trying to use tc), something that I could easily switch on and off in a script. So far I've been trying to do something like:

        tc filter add dev eth0 protocol ip prio 50 u32 police rate 100kbit burst 10240 drop

    but I'm obviously missing a lot of knowledge and information. Can somebody help with a quick one-liner? Many thanks, Dan
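
    A rough sketch of one common recipe (hedged: the rates, burst and interface are assumptions, and this hasn't been tested on the box in question) is to shape upload with an htb qdisc and police download with the ingress qdisc:

        # upload: shape egress to 1mbit
        tc qdisc add dev eth0 root handle 1: htb default 10
        tc class add dev eth0 parent 1: classid 1:10 htb rate 1mbit ceil 1mbit
        # download: police ingress to 1mbit
        tc qdisc add dev eth0 handle ffff: ingress
        tc filter add dev eth0 parent ffff: protocol ip u32 match u32 0 0 \
            police rate 1mbit burst 10k drop flowid :1
        # switch it all off again
        tc qdisc del dev eth0 root
        tc qdisc del dev eth0 ingress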

    Read the article

  • Is there any way to make cherokee server portable?

    - by Tom
    I develop on different machines. I use MAMP; I have it installed in my Dropbox folder and created symbolic links to the Applications folder. That way, if I work one day on my desktop and make changes to, let's say, a database schema, and the next day I work from my laptop, I won't have to do any db migration stuff. The same applies for all the Apache virtual hosts I have set up using MAMP. Everything is portable. I recently started using Cherokee server and I like it a lot. I would like to replace MAMP with Cherokee, but first I need to be able to make it portable. I don't want to have to configure multiple virtual hosts, settings, etc., on multiple machines. Is there any way I can set up Cherokee to be as portable as MAMP? What if I want to run Cherokee from a thumbdrive?
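
    The same symlink trick that works for MAMP might carry over (a hedged sketch; the config location is an assumption and varies by how Cherokee was installed):

        # move the config tree into Dropbox once, then point the old location at it
        mv /usr/local/etc/cherokee ~/Dropbox/cherokee-config
        ln -s ~/Dropbox/cherokee-config /usr/local/etc/cherokee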

    Read the article

  • What are the best Microsoft Certifications to start with?

    - by emragins
    Background
    I have a bachelor's in math and a certification in C++ from 2007. Since then I've spent a lot of time working with Python, C#, and started going through the ASP.NET certification materials. I'm starting to realize that the certification is going to take longer than anticipated, and I'm not sure I want to spend the next 4-5 months studying before I have it completed. Most of my resume shows teaching/tutoring experience with some low-level administration thrown in.

    Question
    If I want to get any programming position, which certifications would be best to start with? What would be the quickest and easiest to obtain, yet represent value to my employer? Are certifications even the way to go? If not, what would you suggest?

    Update
    I have several programs that I show off when I can (mostly games), and I'm about 75% through a C# application I hope to have done in the next week. Since most employers simply ask for a resume and not samples, what would be the best way to present the work to them?

    Read the article

  • Blogging from Office RT

    - by Dennis Vroegop
    During the last Build conference all attendees were given a brand new, sparkling, exciting Surface RT device (I love that machine despite its name, but that's beside the point). On it came a version of Office 2013 RT, or better: the preview version. Now, I translated that term "Preview" to "Beta", which is OK, since I've been using a lot of beta products from Microsoft and they all were great. And then I wanted to post a blog entry from Word. I knew I could; I have been doing this for a long time (I prefer Live Writer, but that isn't available on Windows 8 RT). So I wrote the entry and hit "Publish". Instead of my blog I got a nice non-descriptive error telling me I couldn't post. So I fired up my other (Intel-based) Win8 tablet, opened Word RT Preview, it loaded my blog post (you've got to love the automatic synchronization through SkyDrive), and I tried from that machine. Same error. So I installed Live Writer (remember, the other machine is Intel-based) and posted from there. That worked like a charm. Apparently, there was something wrong with Word. I gave up and didn't think about it anymore. Yet... what you're reading now is written in Word 2013 RT on my Surface RT. So what did I do? Simple: I updated from the Preview version to the final version. That's all there was to it. So... if you're still on the preview, I urge you to upgrade. You need to go to the "classic desktop update" window instead of going through the Windows Store App-style update, since Office is a desktop system, but once you do that you'll have the full version as well. Happy blogging!

    Read the article

  • What does this RPC error message mean?

    - by user161834
    I have RHEL release 6.2 and use the NFS service (nfs-utils-1.2.3) to connect to an NFS server, and I found a lot of messages in /var/log/messages:

        Apr 1 11:08:35 XXX rpc.idmapd[3010]: nss_getpwnam: name '2' does not map into domain 'XXXX.com'
        Apr 1 11:14:26 XXX rpc.idmapd[3010]: nss_getpwnam: name '0' does not map into domain 'XXXX.com'
        Apr 1 11:18:36 XXX rpc.idmapd[3010]: nss_getpwnam: name '2' does not map into domain 'XXXX.com'
        Apr 1 11:24:27 XXX rpc.idmapd[3010]: nss_getpwnam: name '0' does not map into domain 'XXXX.com'

    The same two messages repeat every few minutes. What does this message mean?
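
    For context (a hedged pointer, not a full diagnosis): the domain that rpc.idmapd refers to is the NFSv4 id-mapping domain, configured on both client and server in /etc/idmapd.conf; the messages show numeric names ('0', '2') that cannot be mapped to users in that domain. The relevant stanza looks like:

        $ grep -A1 '\[General\]' /etc/idmapd.conf
        [General]
        Domain = XXXX.com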

    Read the article

  • PowerDNS: multiple supermasters and transfering domain

    - by blauwblaatje
    Hi, I've got a setup with multiple supermasters (bind) and multiple superslaves (pdns). It all seems to work just fine; pdns is being updated when I'm adding or changing a domain. But when I want to migrate a domain from one master to another, pdns doesn't like it. It tells me the new server isn't a master for this domain, although I deleted the domain on the old server. Now, I think that part of the problem is that pdns doesn't get an update when a domain is deleted, which would also explain a lot of dead domains in my pdns. It looks like the slave is constantly polling a server and getting RCODE=5 back: the master isn't aware of the domain, but the slave thinks the master still serves that domain. Anyone familiar with this problem?

    Read the article

  • ASMLib

    - by wcoekaer
    Oracle ASMLib on Linux has been a topic of discussion a number of times since it was released back in 2004. There is a lot of confusion around it, and certainly a lot of misinformation out there for no good reason. Let me try to give a bit of history around Oracle ASMLib.

    Oracle ASMLib was introduced at the time Oracle released Oracle Database 10g R1. 10gR1 introduced a very cool, important new feature called Oracle ASM (Automatic Storage Management). A very simplistic description would be that this is a very sophisticated volume manager for Oracle data: give your devices directly to the ASM instance and we manage the storage for you - clustered, highly available, redundant, performant, etc. We recommend using Oracle ASM for all database deployments, single instance or clustered (RAC).

    The ASM instance manages the storage, and every Oracle server process opens and operates on the storage devices like it would open and operate on regular datafiles or raw devices. So by default, from 10gR1 up to today, we do not interact differently with ASM-managed block devices than we did before with a datafile mapped to a raw device. All of this is without ASMLib, so ignore that one for now. Standard Oracle on any platform that we support (Linux, Windows, Solaris, AIX, ...) does it the exact same way: you start an ASM instance, it handles storage management, and all the database instances use and open that storage and read/write from/to it. There are no extra pieces of software needed, including on Linux. ASM is fully functional and self-contained without any other components.

    In order for the admin to provide a raw device to ASM or to the database, it has to have persistent device naming. If you booted up a server where a raw disk was named /dev/sdf and you gave it to ASM (or even just created a tablespace without ASM on that device with datafile '/dev/sdf'), and next time you boot up that device is now /dev/sdg, you end up with an error. Just like you can't simply change datafile names, you can't change device filenames without telling the database, or ASM. Persistent device naming on Linux, especially back in those days, was, to say it bluntly, a nightmare. In fact there were a number of issues (dating back to 2004):

    - Linux async IO wasn't pretty
    - persistent device naming, including permissions (devices had to be owned by oracle and the dba group), was very, very difficult to manage
    - system resource usage in terms of open file descriptors

    So given the above, we tried to find a way to make this easier on the admins - in many ways similar to why we started working on OCFS a few years earlier: how can we make life easier for the admins on Linux?

    A feature of Oracle ASM is the ability for third parties to write an extension using what's called ASMLib. It is possible for any third-party OS or storage vendor to write a library using a specific Oracle-defined interface that gets used by the ASM instance and by the database instance when available. This interface offered two components:

    - an IO interface - allow any IO to the devices to go through ASMLib
    - device discovery - implement an external way of discovering and labeling devices to provide to ASM and the Oracle database instance

    This is similar to a library that a number of companies have implemented over many years called libODM (Oracle Disk Manager). ODM was specified many years before we introduced ASM and allowed third-party vendors to implement their own IO routines, so that the database would use this library, if installed, and make use of the library's open/read/write/close routines instead of the standard OS interfaces. PolyServe back in the day used this to optimize their storage solution, and Veritas used (and I believe still uses) this for their filesystem. It basically allowed, in particular, filesystem vendors to write libraries that could optimize access to their storage or filesystem. So ASMLib was not something new; it was basically based on the same model: you have libodm for just database access, you have libasm for ASM/database access.

    Since this library interface existed, we decided to do a reference implementation on Linux. We wrote an ASMLib for Linux that could be used on any Linux platform, and other vendors could see how this worked and potentially implement their own solution. As I mentioned earlier, ASMLib and ODMLib are libraries for third-party extensions. ASMLib for Linux, since it was a reference implementation, implemented both interfaces: the storage discovery part and the IO part. There are two components:

    - Oracle ASMLib - the userspace library with config tools (a shared object and some scripts)
    - oracleasm.ko - a kernel module that implements the asm device for /dev/oracleasm/*

    The userspace library is a binary-only module since it links with and contains Oracle header files, but it is generic: we only have one asm library for the various Linux platforms. This library is opened by Oracle ASM and by Oracle database processes, and it interacts with the OS through the asm device (/dev/asm). It can be installed on Oracle Linux, on SuSE SLES, on Red Hat RHEL, and so on. The library itself doesn't actually care much about the OS version; the kernel module and device do.

    The support tools are simple scripts that allow the admin to label devices and scan for disks and devices. This way you can create an ASM disk label foo on, currently, /dev/sdf. So if /dev/sdf disappears and next time is /dev/sdg, we just scan for the label foo and discover it as /dev/sdg, and life goes on without any worry. Also, when the database needs access to the device, we don't have to worry about file permissions or anything; it will be taken care of. So it's a convenience thing.

    The kernel module oracleasm.ko is a Linux kernel module/device driver. It implements a device, /dev/oracleasm/*, and any and all IO goes through ASMLib via /dev/oracleasm. This kernel module is obviously a very specific Oracle-related device driver, but it was released under the GPL v2, so anyone could easily build it for their Linux distribution kernels.

    Advantages of using ASMLib:

    - a good async IO interface for the database; the entire IO interface is based on an optimal ASYNC model for performance
    - a single file descriptor per Oracle process, not one per device or datafile per process, reducing the overhead of open filehandles
    - device scanning and labeling built in, so you do not have to worry about messing with udev or devlabel, permissions or the like, which can be very complex and error prone

    Just like with OCFS and OCFS2, each kernel version (major or minor) has to get a new version of the device drivers. We started out building the oracleasm kernel module rpms for many distributions: SLES (in fact, in the early days, even for this thing called United Linux) and RHEL. The driver didn't make sense to push into upstream Linux because it's unique and specific to the Oracle database. As it takes a huge effort in terms of build infrastructure, QA and release management to build kernel modules for every architecture, every Linux distribution and every major and minor version, we worked with the vendors to get them to add this tiny kernel module (a 60k source code file) to their infrastructure. The folks at SuSE understood this was good for them, their customers and us, and added it to SLES. So every build coming from SuSE for SLES contains the oracleasm.ko module. We weren't as successful with other vendors, so for quite some time we continued to build it for RHEL and, of course, as we introduced Oracle Linux at the end of 2006, also for Oracle Linux. With Oracle Linux it became easy for us, because we just added the code to our build system, and as we churned out Oracle Linux kernels - whether for a public release or for customers that needed a one-off fix and also used ASMLib - we didn't have to do any extra work; it was all nicely integrated.

    With the introduction of Oracle Linux's Unbreakable Enterprise Kernel and our interest in being able to exploit ASMLib more, we started working on a very exciting project called Data Integrity. Oracle (Martin Petersen in particular) worked for many years with the T10 standards committee and storage vendors and implemented Linux kernel support for DIF/DIX, data protection in the Linux kernel (note to those that wonder: yes, it's all in mainline Linux and under the GPL). This basically gave us all the features in the Linux kernel to checksum a data block and send it to the storage adapter, which can then validate that block and checksum in firmware before it sends it over the wire to the storage array, which can then do another checksum, and on to the actual disk, which does a final validation before writing the block to the physical media. What was missing was the ability for a userspace application (read: the Oracle RDBMS) to write a block which then has a checksum and validation all the way down to the disk - application to disk. Because we have ASMLib, we had an entry into the Linux kernel, and Martin added support in ASMLib (kernel driver + userspace) for this functionality. Now, this is all based on relatively current Linux kernels; the oracleasm kernel module depends on the main kernel having support for it so we can make use of it. Thanks to UEK, and to us having the ability to ship a more modern, current version of the Linux kernel, we were able to introduce this feature into ASMLib for Linux from Oracle. This, combined with the fact that we build the asm kernel module when we build every single UEK kernel, allowed us to continue improving ASMLib and providing it to our customers.

    So today, we (Oracle) provide Oracle ASMLib for Oracle Linux, and in particular on the Unbreakable Enterprise Kernel. We did the build/testing/delivery of ASMLib for RHEL until RHEL5, but since RHEL6 we decided it was too much effort for us to also maintain all the build and test environments for RHEL; we did not have the ability to use the latest kernel features to introduce the Data Integrity features, and we didn't want to end up with multiple versions of asmlib maintained by us. SuSE SLES still builds and ships the oracleasm module and they do all the work, and Red Hat is certainly welcome to do the same. They don't have to rebuild the userspace library; it's really about the kernel module.

    And finally, to re-iterate a few important things:

    - Oracle ASM does not in any way require ASMLib to function completely. ASMLib is a small set of extensions, in particular to make device management easier, but there are no extra features exposed through Oracle ASM with ASMLib enabled or disabled. Customers often confuse ASMLib with ASM: again, ASM exists on every Oracle-supported OS and on every supported Linux OS - SLES, RHEL, OL - without ASMLib.
    - The Oracle ASMLib userspace library is available on OTN, and the kernel module is shipped along with OL/UEK for every build, and by SuSE for SLES for every one of their builds.
    - The ASMLib kernel module was built by us for RHEL4 and RHEL5, but we do not build it for RHEL6, nor for the OL6 RHCK kernel; only for UEK.
    - ASMLib for Linux is/was a reference implementation for any third-party vendor to be able to offer, if they want to, their own version for their own OS or storage.
    - ASMLib as provided by Oracle for Linux continues to be enhanced and evolve, and for the kernel module we use UEK as the base OS kernel.

    Hope this helps.
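
    To make the labeling workflow concrete, this is roughly what it looks like with the ASMLib support tools (a sketch; the volume name and device are examples, and exact paths may vary by install):

        # label a partition once; the label survives device renames across reboots
        /usr/sbin/oracleasm createdisk DATA1 /dev/sdf1
        # after a reboot (or on other cluster nodes), rediscover labeled disks
        /usr/sbin/oracleasm scandisks
        /usr/sbin/oracleasm listdisks    # prints DATA1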

    Read the article

  • Finding a way to simplify complex queries on legacy application

    - by glenatron
    I am working with an existing application built on Rails 3.1/MySql, with much of the work taking place in a JavaScript interface, although the actual platforms are not tremendously relevant here, except in that they give context. The application is powerful, handles a reasonable amount of data and works well. As the number of customers using it and the complexity of the projects they create increase, however, we are starting to run into a few performance problems. As far as I can tell, the source of these problems is that the data represents a tree, and it is very hard for ActiveRecord to deterministically know what data it should be retrieving. My model has many relationships like this:

        Project
          has_many Nodes
          has_many GlobalConditions
        Node
          has_one Parent
          has_many Nodes
          has_many WeightingFactors through NodeFactors
          has_many Tags through NodeTags
        GlobalCondition
          has_many Nodes (referenced by id, rather than replicating the tree)
        WeightingFactor
          has_many Nodes through NodeFactors
        Tag
          has_many Nodes through NodeTags

    The whole system has something in the region of thirty types which optionally hang off one or many nodes in the tree. My question is: what can I do to retrieve and construct this data faster? Having worked a lot with .Net, if I were in a similar situation there I would look at building a stored procedure to pull everything out of the database in one go, but I would prefer to keep my logic in the application, and from what I can tell it would be hard to take the queried data and build ActiveRecord objects from it without losing their integrity, which would cause more problems than it solves. It has also occurred to me that I could bunch the data up and send some of it across asynchronously, which would not improve performance but would improve the user perception of performance. However, if sections of the data appeared after page load, that could also be quite confusing. I am wondering whether it would be a useful strategy to make everything aware of its parent project, so that one could pull all the records for that project and then build up the relationships later, but given the ubiquity of complex trees in day-to-day programming life I wouldn't be surprised if there were some better design patterns or standard approaches to this type of situation that I am not well versed in.

    Read the article

  • Is this a normal operating temperature range for a New Third-Generation MacBook Air (2.13 GHz, 120 G

    - by doug
    Even with just my text editor open (no web browser) and maybe the terminal, the baseline temperature is usually above 40 C. When I open 4-5 browser tabs in Safari (even if none of the sites have Flash), the temp can quickly go over 50 C, and I am observing these temps even though I have turned the fan up to 3000 rpm. (I have installed smcFanControl on my MBA so I can see the temp in the menu bar.) So this means my MBA is running much warmer than my MBP, and in practice it means that I have to be very careful how I use my MBA. Of course, if I load a site with Flash it just freaks out, often quickly going above 65 C (I've installed a Flash blocker to avoid this). Is anyone else observing this behavior? I have checked the Apple boards and, sure enough, there are a lot of complaints, but nothing from Apple.

    Read the article

  • Forgot to unmount/eject external hard drive, lost moved files. Mac OS X

    - by balupton
    So I was using my Mac with my external hard drive connected via USB. I moved about 10 GB of data to it (via drag and drop, while holding down the Command key to move the files rather than copy them). They moved to the drive all right, but as I was having some issues and the Finder crashed after the transfer, I was unable to eject the volume, and later everything froze, so I had to do a hard restart (hold the power button). When I remounted the volume (plugged the external hard drive back in), it no longer had any of the files I had moved onto it. As it was a lot of data, how can I recover these files?

    Read the article

  • The Fantastic New WebLogic on Oracle Database Appliance 2.9 Release is Here!

    - by JuergenKress
    Last week was a big day in virtualised ODA-land as it saw the launch of WebLogic on ODA 2.9. Admittedly it doesn't sound like a very exciting release, but it is one that we at O-box have been looking forward to for quite some time. Let me explain why, then we'll look into the details... The ODA X4-2 has 48 Intel Xeon cores. That is a lot of compute power. Whilst the largest O-box SOA Appliance single-environment configuration can in theory use all those cores (currently with 40 vCPU of SOA!), the vast majority of O-box users will want smaller configurations. Prior to 2.9, the Oracle WebLogic implementation only supported one domain per ODA, so the conundrum O-box development faced last year was either: offer customers only one SOA environment on their O-box for now (but have the benefit of a standard, easily supportable WebLogic installation), or build our own WebLogic/OTD OVM templates from scratch. One of our driving goals with O-box is to give the best possible experience and make the appliance as supportable as possible. Therefore we took the gamble that we would stick with Oracle's one-domain WebLogic configuration initially, and just hope that Oracle would deliver multi-domain support for us in a timely manner (note: this is probably not a strategy that business textbooks would recommend!). Anyway, we've been working closely with Oracle Product Management for a few months now, and I'm delighted to see 2.9 as the fruits of their labour. This also neatly ties in with several recent requests for O-box to include OSB as well as SOA/BPEL (which we have always wanted to have in separate domains). The diagram in the original article is the neatest way to summarise what the new 2.9 release will allow us to deliver; previously only one such environment was possible. Read the complete article here. For regular information, become a member of the WebLogic Partner Community: http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center.

    Read the article
