Search Results

Search found 61 results on 3 pages for 'dating'.


  • Flex 4 front end connecting to Java Jersey Web Service

    - by user305801
    I created a Java REST service using Jersey. I use three of the HTTP "verbs": GET, POST and DELETE. I want to create several prototype front ends for the service. After much research, much of it dating to 2008 and 2009, I have been unable to find anything remotely simple. My three options are:

    1) resthttpservice (http://code.google.com/p/resthttpservice/). This project hasn't been updated in a year, and the only activity is one-off suggestions that individual users have implemented.
    2) Create an AIR application. This isn't unfeasible.
    3) Write my own socket-level code. But there is a security restriction with Flash players, and I would need to implement a policy server.

    I have already read the question asking whether using Flex for REST services is worth it; that information is old as well. I want to introduce REST services to my company, but Flex's limited support for HTTP PUT and DELETE is discouraging. My service also uses the Accept header to determine whether JSON or XML is returned to the client, and I can't seem to change HTTP headers without doing socket programming. I'm fine with that, but the policy-server requirement is annoying. Is there an easy way to use Flex 4 with RESTful services that use PUT/DELETE and the Accept HTTP header? Please help. I'm very frustrated.
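
    For context, a minimal sketch of the server side being described: a Jersey resource whose representation is chosen by the client's Accept header. The class, path, and bean are illustrative, not from the original service; JSON output also assumes the jersey-json module is on the classpath.

        // Hypothetical resource; names are illustrative. With JAXB (and
        // jersey-json for JSON support), Jersey serializes the return value
        // in whichever media type matches the request's Accept header.
        import javax.ws.rs.GET;
        import javax.ws.rs.Path;
        import javax.ws.rs.Produces;
        import javax.ws.rs.core.MediaType;
        import javax.xml.bind.annotation.XmlRootElement;

        @XmlRootElement
        class Item {
            public String name = "example"; // mapped by JAXB's default rules
        }

        @Path("/items")
        public class ItemResource {
            @GET
            @Produces({ MediaType.APPLICATION_XML, MediaType.APPLICATION_JSON })
            public Item get() {
                return new Item(); // XML or JSON, per the Accept header
            }
        }

    A client sending "Accept: application/json" gets JSON from the same URL that returns XML to an "Accept: application/xml" client; that is the content negotiation the post relies on.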

    Read the article

  • PHP Thumbnail Image Resizing with proportions

    - by Sam
    Hi all. As a brief run-down, I am currently making a dating-type site. Users can create accounts and upload profile pictures (up to 8). In order to display these in the browse area of the website, I am looking for a way in PHP (with third-party processors/scripts) to resize all uploaded images to thumbnails that adhere to certain dimensions. As an example, I want "profile" images (thumbnails) to be NO larger than 120x150px. The scripting needs to resize uploaded images (regardless of whether they are portrait or landscape, and regardless of proportions) to these dimensions without getting stretched. The width (e.g. 120 pixels) should always remain the same, but the height (e.g. 150px) can vary in order to keep the image in proportion. If it's a landscape photo, I'm assuming the script would need to take a chunk out of the middle of the image? The reason all images need to be resized is so that when profiles are displayed in a grid, all thumbnails are roughly the same size. Any input would be greatly appreciated.
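
    A minimal sketch of one reading of that requirement, using PHP's bundled GD extension: always scale to a fixed 120px width, let the height follow the aspect ratio, and center-crop anything taller than 150px. The function name and JPEG-only handling are illustrative assumptions.

        <?php
        // Hypothetical helper: fixed width, proportional height, middle crop.
        function make_thumb($src, $dst, $max_w = 120, $max_h = 150) {
            list($w, $h) = getimagesize($src);
            $img = imagecreatefromjpeg($src);          // assumes JPEG input
            $new_w = $max_w;
            $new_h = (int) round($h * ($max_w / $w));  // keep proportions
            $tmp = imagecreatetruecolor($new_w, $new_h);
            imagecopyresampled($tmp, $img, 0, 0, 0, 0, $new_w, $new_h, $w, $h);
            if ($new_h > $max_h) {
                // too tall: take a chunk out of the middle
                $thumb = imagecreatetruecolor($new_w, $max_h);
                $y = (int) (($new_h - $max_h) / 2);
                imagecopy($thumb, $tmp, 0, 0, 0, $y, $new_w, $max_h);
            } else {
                $thumb = $tmp;
            }
            imagejpeg($thumb, $dst, 90);               // 90 = JPEG quality
            imagedestroy($img);
        }

    Third-party options such as ImageMagick (via the imagick extension) can do the same with less code, but GD ships with most PHP builds, which keeps the upload pipeline dependency-free.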

    Read the article

  • C89, Mixing Variable Declarations and Code

    - by rutski
    I'm very curious to know why exactly C89 compilers will dump on you when you try to mix variable declarations and code, like this for example:

        rutski@imac:~$ cat test.c
        #include <stdio.h>

        int main(void)
        {
            printf("Hello World!\n");
            int x = 7;
            printf("%d!\n", x);
            return 0;
        }
        rutski@imac:~$ gcc -std=c89 -pedantic test.c
        test.c: In function ‘main’:
        test.c:7: warning: ISO C90 forbids mixed declarations and code
        rutski@imac:~$

    Yes, you can avoid this sort of thing by staying away from -pedantic. But then your code is no longer standards compliant. And as anybody capable of answering this post probably already knows, this is not just a theoretical concern: compilers like Microsoft's enforce this quirk of the standard under any and all circumstances. Given how ancient C is, I would imagine that this feature is due to some historical issue dating back to the extraordinary hardware limitations of the 70's, but I don't know the details. Or am I totally wrong there?
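
    For reference, the C89-compliant form simply hoists the declaration above the first statement of the block; this version compiles cleanly under gcc -std=c89 -pedantic:

        #include <stdio.h>

        int main(void)
        {
            int x;                 /* declarations first ... */

            printf("Hello World!\n");
            x = 7;                 /* ... statements after   */
            printf("%d!\n", x);
            return 0;
        }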

    Read the article

  • Being pressured to GOTO the dark-side

    - by Dan McG
    We have a situation at work where developers working on a legacy (core) system are being pressured into using GOTO statements when adding new features into existing code that is already infected with spaghetti code. Now, I understand there may be arguments for using 'just one little GOTO' instead of spending the time on refactoring to a more maintainable solution. The issue is, this isolated 'just one little GOTO' isn't so isolated. At least once every week or so there is a new 'one little GOTO' to add. This codebase is already a horror to work with due to code dating back to or before 1984 being riddled with GOTOs that would make many Pastafarians believe it was inspired by the Flying Spaghetti Monster itself. Unfortunately the language this is written in doesn't have any ready-made refactoring tools, which makes it harder to push 'refactor to increase productivity later', because short-term wins are the only wins paid attention to here... Has anyone else experienced this issue, whereby everybody agrees that we cannot keep adding new GOTOs to jump 2000 lines to a random section, but analysts continually insist on doing it just this one time, and management approves it? tldr; How can one address the issue of developers being pressured (forced) to continually add GOTO statements (by add, I mean add in order to jump to random sections many lines away) because it 'gets that feature in quicker'? I'm beginning to fear we may lose valuable developers to the raptors over this...

    Read the article

  • MySQL left outer join is slow

    - by Ryan Doherty
    Hi, hoping to get some help with this query, I've worked at it for a while now and can't get it any faster:

        SELECT date, count(id) as 'visits'
        FROM dates
        LEFT OUTER JOIN visits
          ON (dates.date = DATE(visits.start) and account_id = 40)
        WHERE date >= '2010-12-13' AND date <= '2011-1-13'
        GROUP BY date
        ORDER BY date ASC

    That query takes about 8 seconds to run. I've added indexes on dates.date, visits.start, visits.account_id and visits.start+visits.account_id and can't get it to run any faster. Table structure (only showing relevant columns in visit table):

        create table visits (
          `id` int(11) NOT NULL AUTO_INCREMENT,
          `account_id` int(11) NOT NULL,
          `start` DATETIME NOT NULL,
          `end` DATETIME NULL,
          PRIMARY KEY (`id`)
        ) ENGINE=MyISAM DEFAULT CHARSET=utf8;

        CREATE TABLE `dates` (
          `date` date NOT NULL,
          PRIMARY KEY (`date`)
        ) ENGINE=MyISAM DEFAULT CHARSET=latin1;

    dates table contains all days from 2010-1-1 to 2020-1-1 (~3k rows). visits table contains about 400k rows dating from 2010-6-1 to yesterday. I'm using the date table so the join will return 0 visits for days there were no visits. Results I want for reference:

        +------------+--------+
        | date       | visits |
        +------------+--------+
        | 2010-12-13 |    301 |
        | 2010-12-14 |    356 |
        | 2010-12-15 |    423 |
        | 2010-12-16 |    332 |
        | 2010-12-17 |    346 |
        | 2010-12-18 |    226 |
        | 2010-12-19 |    213 |
        | 2010-12-20 |    311 |
        | 2010-12-21 |    273 |
        | 2010-12-22 |    286 |
        | 2010-12-23 |    241 |
        | 2010-12-24 |    149 |
        | 2010-12-25 |    102 |
        | 2010-12-26 |    174 |
        | 2010-12-27 |    258 |
        | 2010-12-28 |    348 |
        | 2010-12-29 |    392 |
        | 2010-12-30 |    395 |
        | 2010-12-31 |    278 |
        | 2011-01-01 |    241 |
        | 2011-01-02 |    295 |
        | 2011-01-03 |    369 |
        | 2011-01-04 |    438 |
        | 2011-01-05 |    393 |
        | 2011-01-06 |    368 |
        | 2011-01-07 |    435 |
        | 2011-01-08 |    313 |
        | 2011-01-09 |    250 |
        | 2011-01-10 |    345 |
        | 2011-01-11 |    387 |
        | 2011-01-12 |      0 |
        | 2011-01-13 |      0 |
        +------------+--------+

    Thanks in advance for any help!
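
    A hedged suggestion, not from the original post: wrapping visits.start in DATE() keeps MySQL from using an index on that column. Comparing start against a one-day range instead is index-friendly, and a composite index (the name below is made up) covers both join conditions:

        -- hypothetical index name; assumes the join can be rewritten
        ALTER TABLE visits ADD INDEX idx_account_start (account_id, start);

        SELECT date, COUNT(visits.id) AS visits
        FROM dates
        LEFT OUTER JOIN visits
          ON visits.account_id = 40
         AND visits.start >= dates.date
         AND visits.start <  dates.date + INTERVAL 1 DAY
        WHERE date >= '2010-12-13' AND date <= '2011-01-13'
        GROUP BY date
        ORDER BY date ASC;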

    Read the article

  • Windows 7 is shutting down unexpectedly, according to the logs.

    - by dlamblin
    Here's a message from my eventvwr Event Log (Windows Logs, System):

        The previous system shutdown at 11:51:15 AM on 7/29/2009 was unexpected.

    This is funny, because I was wondering why the system shut down while I was playing Civilization IV full screen. Now I know. It was unexpected. Has anyone encountered and resolved this? A little background: I am running Windows 7 RC inside VMWare Fusion 2 (just updated a few months back) on an aluminum-body MacBook (bitterly not Pro). Windows 7 occasionally will shut down. This isn't a quick power-off; it's a shutdown where all the programs are exited, the system waits until they quit (and Civ4 doesn't prompt me to save), and it even installed Windows Updates before restarting. And yes, it is restarting right after the shutdown. Because I run a game in full-screen mode, I do not notice any dialog with a countdown timer or anything like that that might be a warning. As I have iStat on my dashboard widgets, I can see about 8 temperature monitors. I have seen the CPU get up to 74C before, but during the shutdown, though it seemed hot to the touch (it always is), it read 61C for the CPU, 60C for heatsink A, 50C for heatsink B, and the 30s-40s for the enclosure and hard drives. As I type this now, the temps are actually higher, so I don't think temperature caused it. I have at least six such events, dating first from 5/17, which was a week after installing Windows 7. I did find one information-level message from USER32 in the system log that says:

        The process C:\Windows\system32\svchost.exe (DLAMBLIN-WIN7) has initiated the restart of computer DLAMBLIN-WIN7 on behalf of user NT AUTHORITY\SYSTEM for the following reason: Operating System: Recovery (Planned)
        Reason Code: 0x80020002
        Shutdown Type: restart
        Comment:

    And another, 15 minutes before that, from Windows Update:

        Restart Required: To complete the installation of the following updates, the computer will be restarted within 15 minutes:
        - Cumulative Security Update for Internet Explorer 8 for Windows 7 Release Candidate for x64-based Systems (KB972260)

    Which I think kind of explains it. Though I don't know why restarting after an update would create a "shutdown was unexpected" error event; isn't that pretty odd? Now, how do I set it to never restart after an update unless I click something?

    Application of solution: As fretje reminded me, there are a couple of configurable settings for this; in Windows 7 they're in much the same place as in Windows 2000 SP3 and XP SP1. Running gpedit.msc pops up a window with these Windows Update policies (Windows 7 has changed the order and added a couple of newer options):

        Do not display 'Install Updates and Shut Down' in Shut Down Windows dialog box
        Do not adjust default option to 'Install Updates and Shut Down' in Shut Down Windows dialog box
        Enabling Windows Power Management to automatically wake up the system to install scheduled updates
        Configure Automatic Updates
        Specify intranet Microsoft update service location
        Automatic Updates detection frequency
        Allow non-administrators to receive update notifications
        Turn on Software Notifications
        Allow Automatic Updates immediate installation
        Turn on recommended updates via Automatic Updates
        No auto-restart with logged-on users for scheduled Automatic Updates
        Re-prompt for restart with scheduled installations
        Delay Restart for scheduled installations
        Reschedule Automatic Updates schedule
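
    For reference, a hedged sketch that is not from the original post: the "No auto-restart with logged-on users" policy in that list maps to a documented registry value, so it can also be set from an elevated command prompt rather than through gpedit.msc:

        Rem Require a logged-on user to initiate the post-update restart.
        reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU" /v NoAutoRebootWithLoggedOnUsers /t REG_DWORD /d 1 /f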

    Read the article

  • Batch file to uninstall all Sun Java versions?

    - by Ricket
    I'm setting up a system to keep Java in our office up to date. Everyone has all different versions of Java, many of them old and insecure, and some dating back as far as 1.4. I have a System Center Essentials server which can push out and silently run a .msi file, and I've already tested that it can install the latest Java. But old versions (such as 1.4) aren't removed by the installer, so I need to uninstall them. Everyone is running Windows XP. The neat coincidence is that Sun just got bought by Oracle, and Oracle has now changed all the instances of "Sun" to "Oracle" in Java. So I conveniently don't have to worry about uninstalling the latest Java, because I can just search for and uninstall all Sun Java programs. I found the following batch script on a forum post, which looked promising:

        @echo off & cls
        Rem List all Installation subkeys from uninstall key.
        echo Searching Registry for Java Installs
        for /f %%I in ('reg query HKLM\SOFTWARE\microsoft\windows\currentversion\uninstall') do echo %%I | find "{" > nul && call :All-Installations %%I
        echo Search Complete..
        goto :EOF

        :All-Installations
        Rem Filter out all but the Sun Installations
        for /f "tokens=2*" %%T in ('reg query %1 /v Publisher 2^> nul') do echo %%U | find "Sun" > nul && call :Sun-Installations %1
        goto :EOF

        :Sun-Installations
        Rem Filter out all but the Sun-Java Installations. Note the tilda + n, which drops all the subkeys from the path
        for /f "tokens=2*" %%T in ('reg query %1 /v DisplayName 2^> nul') do echo . Uninstalling - %%U: | find "Java" && call :Sun-Java-Installs %~n1
        goto :EOF

        :Sun-Java-Installs
        Rem Run Uninstaller for the installation
        MsiExec.exe /x%1 /qb
        echo . Uninstall Complete, Resuming Search..
        goto :EOF

    However, when I run the script, I get the following output:

        Searching Registry for Java Installs
        'DEV_24x6' is not recognized as an internal or external command, operable program or batch file.
        'SUBSYS_542214F1' is not recognized as an internal or external command, operable program or batch file.

    And then it appears to hang, and I press Ctrl-C to stop it. Reading through the script, I don't understand everything, but I don't know why it is trying to run pieces of registry keys as programs. What is wrong with the batch script? How can I fix it, so that I can move on to somehow turning it into an MSI and deploying it to everyone to clean up this office? Or alternatively, can you suggest a better solution or an existing MSI file to do what I need? I just want to make sure to get all the old versions of Java off of everyone's computers, since I've heard of exploits that cause web pages to load using old versions of Java, and I want to avoid those.
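
    A hedged observation, not a verified fix: the quoted errors are consistent with an uninstall subkey whose name contains "&" (the fragments DEV_24x6 and SUBSYS_542214F1 look like pieces of a hardware ID). Unquoted, cmd treats each "&" in the echoed key name as a command separator and tries to run the fragments as programs. Quoting the expansion, and disabling the default whitespace tokenizing, would keep each key name in one piece:

        Rem Sketch of the first loop only; "delims=" keeps names with spaces
        Rem whole, and the quotes stop "&" from splitting the command line.
        for /f "delims=" %%I in ('reg query HKLM\SOFTWARE\microsoft\windows\currentversion\uninstall') do echo "%%I" | find "{" > nul && call :All-Installations "%%I"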

    Read the article

  • Smarter, Faster, Cheaper: The Insurance Industry’s Dream

    - by Jenna Danko
    On June 3rd, I saw the Gaylord Resort Centre in Washington D.C. become the hub of C-level executives and managers of insurance carriers for the IASA 2013 Conference. Insurance accounting/regulation and technology sessions took the focus, but there were plenty of tertiary sessions for career development, which complemented the overall strong networking side of the conference. As an exhibitor, Oracle, along with several hundred other product providers, welcomed the opportunity to display and demonstrate our solutions, and we were encouraged by the hustle and bustle of the exhibition floor. The IASA organizers had pre-arranged fast-track tours whereby interested conference delegates could sign up for a series of like-themed presentations from vendors, giving them a level of 'speed dating' introductions to possible solutions and services. Oracle participated in a number of these, which were very well subscribed. Clearly, the conference had a strong business focus; however, attendees saw technology as a key enabler to get their processes done smarter, faster and cheaper. As we navigated the exhibition, it became clear from the inquiries that came to us that insurance carriers are gravitating to a number of focus areas:

    Navigating the maze of upcoming regulatory reporting changes. For US carriers with European holdings, Solvency II carries a myriad of rules and reporting requirements. Alignment across the globe of the Own Risk and Solvency Assessment (ORSA) processes brings to the fore the National Association of Insurance Commissioners' (NAIC) recent guidance manual publication.

    Doing more with less, and certainly expecting more from technology for fewer dollars. The overall cost of IT, in particular hardware, has dropped in real terms (though the appetite for more has risen: more CPU, more RAM, more storage), but software has seen less change. Clearly, customers expect either to pay less or to get a lot more from their software solutions for the same buck.

    Doing things smarter - a recognition that, with the advance of technology, to stand still means you are technically going backwards. Technology, and in particular technology's interaction with human business processes, has undergone incredible change over the past 5 years. Consumer usage (iPhones, etc.) has been at the forefront, but now, at the enterprise level, ever more effective technology exploitation is beginning to take place, and data - in particular, gleaning knowledge from data - is refining and improving business processes. Organizations are now consuming more data than ever before, and it is set to grow exponentially for some time to come. Amassing large volumes of data is one thing, but effectively analyzing that data is another. It is the results of such analysis that lead to improvements both in insurance product offerings and in the processes that support them.

    Regulatory compliance - damned if you do and damned if you don't! Clearly, a lot is changing around the globe from a regulatory perspective, and it is evident that, whilst there is greater convergence across jurisdictions bringing uniformity, there is also a lot of work to be done in the next 5 years. Just as with big data, behind effective regulatory compliance there often lie golden nuggets that can give competitive advantages.

    From Oracle's perspective, our Rating Engine, Billing, Document Management and Insurance Analytics solutions on display served to strike up good conversations and, as is always the case at conferences, it was a great opportunity to meet and speak with existing Oracle customers that we might not otherwise have caught up with for a while. Fortunately, I was able to catch a few sessions at the close of the exhibition. The speaker quality was high, and the audience asked challenging but pertinent questions. During Dr. Jackie Freiberg's keynote, "Bye Bye Business as Usual," the author discussed 8 strategies to help leaders create a culture where teams consistently deliver innovative ideas by disrupting the status quo. The very first strategy: get wired for innovation. Freiberg admitted that folks in the insurance and financial services industry understand and know innovation is important, but oftentimes they are slow adopters. Today, technology and innovation go hand in hand.

    In speaking to delegates during and after the conference, a high degree of satisfaction could be measured from their positive comments about the speaker sessions and the exhibitors. I suspect many will be back in 2014, with Indianapolis as the conference location. Did you attend the IASA Conference in Washington D.C.? If so, I would love to hear your comments.

    Andrew Collins is the Director, Solvency II of Oracle Financial Services. He can be reached at andrew.collins AT oracle.com.

    Read the article

  • Oracle's Global Single Schema

    - by david.butler(at)oracle.com
    Maximizing business-process efficiencies in a heterogeneous environment is very difficult. The difficulty stems from the fact that the various applications across the Information Technology (IT) landscape employ different integration standards, different message-passing strategies, and different workflow engines. Vendors such as Oracle and others are delivering tools to help IT organizations manage the complexities introduced by these differences. But the one remaining intractable problem impacting efficient operations is the fact that these applications have different definitions for the same business data.

    Business data is your business information codified for computer programs to use. A good data model will represent the way your organization does business. The computer applications your organization deploys to improve operational efficiency are built to operate on the business data organized into this schema. If the schema does not represent how you do business, the applications on that schema cannot provide the features you need to achieve the desired efficiencies. Business processes span these applications. Data problems break these processes, rendering them far less efficient than they need to be to achieve organizational goals. Thus, the expected return on the investment in these applications is never realized.

    The success of all business processes depends on the availability of accurate master data. Clearly, the solution to this problem is to consolidate all the master data an organization uses to run its business; then clean it up, augment it, govern it, and connect it back to the applications that need it. Until now, this obvious solution has been difficult to achieve because no one had defined a data model sufficiently broad, deep and flexible to support transaction processing on all key business entities and to serve as a master superset to all other operational data models deployed in heterogeneous IT environments.

    Today, the situation has changed. Oracle has created an operational data model (aka schema) that can support accurate and consistent master data across heterogeneous IT systems. This is foundational for providing a way to consolidate and integrate master data without having to replace investments in existing applications. This Global Single Schema (GSS) represents a revolutionary breakthrough that allows for true master-data consolidation.

    Oracle has deep knowledge of applications dating back to the early 1990s. It developed applications in the areas of Supply Chain Management (SCM), Product Lifecycle Management (PLM), Enterprise Resource Planning (ERP), Customer Relationship Management (CRM), Human Capital Management (HCM), Financials and Manufacturing. In addition, Oracle applications were delivered for key industries such as Communications, Financial Services, Retail, Public Sector, High Tech Manufacturing (HTM) and more. Expertise in all these areas drove the requirements for GSS. The following figure illustrates Oracle's unique position that enabled the creation of the Global Single Schema.

    (Figure: GSS Requirements Gathering)

    GSS defines all the key business entities and attributes including Customers, Contacts, Suppliers, Accounts, Products, Services, Materials, Employees, Installed Base, Sites, Assets, and Inventory, to name just a few. In addition, Oracle delivers GSS pre-integrated with a wide variety of operational applications.

    Business Process Automation

    EBusiness is about maximizing operational efficiency. At the highest level, these 'operations' span all that you do as an organization. The following figure illustrates some of these high-level business processes.

    (Figure: Enterprise Business Processes)

    Supplies are procured. Assets are maintained. Materials are stored. Inventory is accumulated. Products and services are engineered, produced and sold. Customers are serviced. And across this entire spectrum, employees do the procuring, supporting, engineering, producing, selling and servicing. Not shown, but not to be overlooked, are the accounting and financial processes associated with all this procuring, manufacturing, and selling activity. Supporting all these applications is the master data. When this data is fragmented and inconsistent, the business processes fail and inefficiencies multiply. But imagine having all the data under these operational business processes in one place:

    · The same accurate and timely customer data will be provided to all your operational applications, from the call center to the point of sale.
    · The same accurate and timely supplier data will be provided to all your operational applications, from supply chain planning to procurement.
    · The same accurate and timely product information will be available to all your operational applications, from demand chain planning to marketing.

    You would have a single version of the truth about your assets, financial information, customers, suppliers, employees, products and services to support your business automation processes as they flow across your business applications. All company and partner personnel will access the same exact data entity across all your channels and across all your lines of business. Oracle's Global Single Schema enables this vision of a single version of the truth across the heterogeneous operational applications supporting the entire enterprise.

    Global Single Schema

    Oracle's Global Single Schema organizes hundreds of thousands of attributes into 165 major schema objects supporting over 180 business application modules. It is designed for international operations and extensibility. The schema is delivered with a full set of public Application Programming Interfaces (APIs) and an Integration Repository with modern Service-Oriented Architecture interfaces to make data available as a service (DaaS) to business processes and to enable operations in heterogeneous IT environments.

    · Key tables can be extended with unlimited numbers of additional attributes and attribute groups for maximum flexibility. This enables model extensions that reflect business entities unique to your organization's operations.
    · The schema is multi-organization enabled, so data manipulation can be controlled along organizational boundaries.
    · It uses variable-byte Unicode to support over 31 languages.
    · The schema encodes flexible date and flexible address formats for easy localization.

    No matter how complex your business is, Oracle's Global Single Schema can hold your business objects and support your global operations. Oracle's Global Single Schema identifies and defines the business objects an enterprise needs within the context of its business operations. The interrelationships between the business objects are also contained within the GSS data model. Their presence expresses fundamental business rules for the interaction between business entities. The following figure illustrates some of these connections.

    (Figure: Interconnected Business Entities)

    Interconnected business processes require interconnected business data. No other MDM vendor has this capability. Everyone else has either one entity they can master or separate, disconnected models for various business entities. Higher-level integrations are made available, but that is a weak architectural alternative to data-level integration in this critically important aspect of Master Data Management.

    Read the article

  • ASMLib

    - by wcoekaer
    Oracle ASMLib on Linux has been a topic of discussion a number of times since it was released way back in 2004. There is a lot of confusion around it, and certainly a lot of misinformation out there for no good reason. Let me try to give a bit of history around Oracle ASMLib.

    Oracle ASMLib was introduced at the time Oracle released Oracle Database 10g R1. 10gR1 introduced a very cool and important new feature called Oracle ASM (Automatic Storage Management). A very simplistic description would be that this is a very sophisticated volume manager for Oracle data. Give your devices directly to the ASM instance and we manage the storage for you: clustered, highly available, redundant, performant, etc, etc... We recommend using Oracle ASM for all database deployments, single instance or clustered (RAC).

    The ASM instance manages the storage, and every Oracle server process opens and operates on the storage devices like it would open and operate on regular datafiles or raw devices. So by default, since 10gR1 up to today, we do not interact differently with ASM-managed block devices than we did before with a datafile being mapped to a raw device. All of this is without ASMLib, so ignore that one for now. Standard Oracle on any platform that we support (Linux, Windows, Solaris, AIX, ...) does it the exact same way. You start an ASM instance, it handles storage management, and all the database instances use and open that storage and read/write from/to it. There are no extra pieces of software needed, including on Linux. ASM is fully functional and self-contained without any other components.

    In order for the admin to provide a raw device to ASM or to the database, it has to have persistent device naming. If you booted up a server where a raw disk was named /dev/sdf and you gave it to ASM (or even just created a tablespace without ASM on that device, with datafile '/dev/sdf'), and the next time you boot up that device is now /dev/sdg, you end up with an error. Just like you can't just change datafile names, you can't change device filenames without telling the database, or ASM. Persistent device naming on Linux, especially back in those days, was, to say it bluntly, a nightmare. In fact there were a number of issues (dating back to 2004):

    · Linux async IO wasn't pretty
    · persistent device naming, including permissions (devices had to be owned by oracle and the dba group), was very, very difficult to manage
    · system resource usage in terms of open file descriptors

    So given the above, we tried to find a way to make this easier on the admins - in many ways similar to why we started working on OCFS a few years earlier: how can we make life easier for the admins on Linux?

    A feature of Oracle ASM is the ability for third parties to write an extension using what's called ASMLib. It is possible for any third-party OS or storage vendor to write a library, using a specific Oracle-defined interface, that gets used by the ASM instance and by the database instance when available. This interface offered 2 components:

    · an IO interface - allow any IO to the devices to go through ASMLib
    · device discovery - implement an external way of discovering and labeling devices to provide to ASM and the Oracle database instance

    This is similar to a library that a number of companies have implemented over many years called libODM (Oracle Disk Manager). ODM was specified many years before we introduced ASM and allowed third-party vendors to implement their own IO routines, so that the database would use this library, if installed, and make use of the library's open/read/write/close routines instead of the standard OS interfaces. PolyServe back in the day used this to optimize their storage solution, and Veritas used (and I believe still uses) this for their filesystem. It basically allowed, in particular, filesystem vendors to write libraries that could optimize access to their storage or filesystem. So ASMLib was not something new; it was basically based on the same model. You have libodm for just database access; you have libasm for ASM/database access.

    Since this library interface existed, we decided to do a reference implementation on Linux. We wrote an ASMLib for Linux that could be used on any Linux platform, and other vendors could see how this worked and potentially implement their own solution. As I mentioned earlier, ASMLib and ODMLib are libraries for third-party extensions. ASMLib for Linux, since it was a reference implementation, implemented both interfaces: the storage discovery part and the IO part. There are 2 components:

    · Oracle ASMLib - the userspace library with config tools (a shared object and some scripts)
    · oracleasm.ko - a kernel module that implements the asm device for /dev/oracleasm/*

    The userspace library is a binary-only module, since it links with and contains Oracle header files, but it is generic; we have only one asm library for the various Linux platforms. This library is opened by Oracle ASM and by Oracle database processes, and it interacts with the OS through the asm device (/dev/asm). It can install on Oracle Linux, on SuSE SLES, on Red Hat RHEL... The library itself doesn't actually care much about the OS version; the kernel module and device care.

    The support tools are simple scripts that allow the admin to label devices and scan for disks and devices (a sketch of these commands appears below). This way you can create an ASM disk label foo on what is currently /dev/sdf... So if /dev/sdf disappears and next time is /dev/sdg, we just scan for the label foo, we discover it as /dev/sdg, and life goes on without any worry. Also, when the database needs access to the device, we don't have to worry about file permissions or anything; it will be taken care of. So it's a convenience thing.

    The kernel module oracleasm.ko is a Linux kernel module/device driver. It implements a device, /dev/oracleasm/*, and any and all IO goes through ASMLib - /dev/oracleasm. This kernel module is obviously a very specific Oracle-related device driver, but it was released under the GPL v2, so anyone could easily build it for their Linux distribution kernels.

    Advantages of using ASMLib:

    · A good async IO interface for the database; the entire IO interface is based on an optimal ASYNC model for performance
    · A single file descriptor per Oracle process, not one per device or datafile per process, reducing the number of open filehandles and their overhead
    · Device scanning and labeling built in, so you do not have to worry about messing with udev or devlabel, permissions, or the likes, which can be very complex and error-prone

    Just like with OCFS and OCFS2, each kernel version (major or minor) has to get a new version of the device drivers. We started out building the oracleasm kernel module rpms for many distributions: SLES (in fact, in the early days, even for this thing called United Linux) and RHEL.
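
    As a hedged illustration of that labeling workflow, using the stock oracleasm support tools (the device and label names here are made up for the example):

        # label the partition, rescan, and list what ASMLib discovered
        /etc/init.d/oracleasm createdisk FOO /dev/sdf1
        /etc/init.d/oracleasm scandisks
        /etc/init.d/oracleasm listdisks   # shows FOO, whichever sdX it is today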
    The driver didn't make sense to get pushed into upstream Linux, because it's unique and specific to the Oracle database. As it takes a huge effort in terms of build infrastructure, QA, and release management to build kernel modules for every architecture, every Linux distribution, and every major and minor version, we worked with the vendors to get them to add this tiny kernel module (a 60k source code file) to their infrastructure. The folks at SuSE understood this was good for them, their customers, and us, and added it to SLES. So every build coming from SuSE for SLES contains the oracleasm.ko module. We weren't as successful with other vendors, so for quite some time we continued to build it for RHEL, and of course, as we introduced Oracle Linux at the end of 2006, also for Oracle Linux. With Oracle Linux it became easy for us, because we just added the code to our build system, and as we churned out Oracle Linux kernels - whether for a public release or for customers that needed a one-off fix and also used ASMLib - we didn't have to do any extra work; it was all nicely integrated.

    With the introduction of Oracle Linux's Unbreakable Enterprise Kernel and our interest in being able to exploit ASMLib more, we started working on a very exciting project called Data Integrity. Oracle (Martin Petersen in particular) worked for many years with the T10 standards committee and storage vendors and implemented Linux kernel support for DIF/DIX - data protection in the Linux kernel. (Note to those that wonder: yes, it's all in mainline Linux and under the GPL.) This basically gave us all the features in the Linux kernel to checksum a data block and send it to the storage adapter, which can then validate that block and checksum in firmware before it sends it over the wire to the storage array, which can then do another checksum, and on to the actual disk, which does a final validation before writing the block to the physical media. So what was missing was the ability for a userspace application (read: Oracle RDBMS) to write a block which then has a checksum and validation all the way down to the disk: application to disk. Because we have ASMLib, we had an entry into the Linux kernel, and Martin added support in ASMLib (kernel driver + userspace) for this functionality.

    Now, this is all based on relatively current Linux kernels; the oracleasm kernel module depends on the main kernel having support for it so we can make use of it. Thanks to UEK, and to us having the ability to ship a more modern, current version of the Linux kernel, we were able to introduce this feature into ASMLib for Linux from Oracle. This, combined with the fact that we build the asm kernel module when we build every single UEK kernel, allowed us to continue improving ASMLib and provide it to our customers.

    So today, we (Oracle) provide Oracle ASMLib for Oracle Linux, and in particular on the Unbreakable Enterprise Kernel. We did the build/testing/delivery of ASMLib for RHEL until RHEL5, but since RHEL6 decided that it was too much effort for us to also maintain all the build and test environments for RHEL; we did not have the ability to use the latest kernel features to introduce the Data Integrity features, and we didn't want to end up with multiple versions of ASMLib as maintained by us. SuSE SLES still builds and ships the oracleasm module, and they do all the work; Red Hat is certainly welcome to do the same. They don't have to rebuild the userspace library; it's really about the kernel module.

    And finally, to re-iterate a few important things:

    · Oracle ASM does not in any way require ASMLib to function completely. ASMLib is a small set of extensions, in particular to make device management easier, but there are no extra features exposed through Oracle ASM with ASMLib enabled or disabled. Customers often confuse ASMLib with ASM. Again: ASM exists on every Oracle-supported OS and on every supported Linux OS - SLES, RHEL, OL - without ASMLib.
    · The Oracle ASMLib userspace library is available from OTN, and the kernel module is shipped along with OL/UEK for every build, and by SuSE for SLES for every one of their builds.
    · The ASMLib kernel module was built by us for RHEL4 and RHEL5, but we do not build it for RHEL6, nor for the OL6 RHCK kernel; only for UEK.
    · ASMLib for Linux is/was a reference implementation for any third-party vendor to be able to offer, if they want to, their own version for their own OS or storage.
    · ASMLib as provided by Oracle for Linux continues to be enhanced and to evolve, and for the kernel module we use UEK as the base OS kernel.

    Hope this helps.

    Read the article

  • nginx - redirection doesn't work as expected

    - by Luis
    I have a domain listening on both http and https. I want to redirect all the traffic to https except for two specific locations. It works, but only for mydomain.com, not for www.mydomain.com. Here is the config:

        upstream mydomain_rails {
            server unix:/home/deploy/mydomain/shared/pids/unicorn.sock;
        }

        # blog.mydomain.com
        server {
            listen 80;
            server_name blog.mydomain.com;
            rewrite ^ http://www.mydomain.com/de/blog permanent;
        }

        # blog.mydomain.com.br
        server {
            listen 80;
            server_name blog.mydomain.com.br;
            rewrite ^ http://www.mydomain.com/br/blog permanent;
        }

        # www.mydomain.de
        server {
            listen 80;
            server_name mydomain.de www.mydomain.de;
            rewrite ^ https://www.mydomain.com/de permanent;
        }

        # www.mydomain.com.br
        server {
            listen 80;
            server_name mydomain.com.br www.mydomain.com.br;
            rewrite ^ https://www.mydomain.com/br permanent;
        }

        server {
            listen 80;
            server_name mydomain.com;
            rewrite ^ http://www.mydomain.com$request_uri permanent;
        }

        ## www.mydomain.com
        ## Redirect http to https, keep blogs on plain http
        server {
            listen 80;
            server_name www.mydomain.com;

            location / {
                # if ($host ~* ^(www\.mydomain\.com)$ ) {
                rewrite ^/(.*)$ https://www.mydomain.com/$1 permanent;
                # }
                # return 444;
            }

            # Matches any request starting with '/br/blog' and proxies to the upstream blog instance
            location ~* /br/blog {
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header Host $http_host;
                proxy_redirect off;
                if (!-f $request_filename) {
                    rewrite ^/br/blog$ /;
                    rewrite ^/br/blog/(.*)$ /$1;
                    proxy_pass http://mydomain_blog_br;
                    break;
                }
            }

            # Matches any request starting with '/de/blog' and proxies to the upstream blog instance
            location ~* /de/blog {
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header Host $http_host;
                proxy_redirect off;
                if (!-f $request_filename) {
                    rewrite ^/de/blog$ /;
                    rewrite ^/de/blog/(.*)$ /$1;
                    proxy_pass http://mydomain_blog;
                    break;
                }
            }
        }

        # www.mydomain.com (https)
        server {
            add_header Cache-Control "public, must-revalidate";
            server_name mydomain.com www.mydomain.com;
            listen 443;
            ssl on;
            ssl_certificate /etc/ssl/mydomain.com/sslchain.crt;
            ssl_certificate_key /etc/ssl/mydomain.com/privatekey.key;

            ## Strict Transport Security (ForceHTTPS), max-age 30d
            add_header Strict-Transport-Security "max-age=2592000; includeSubdomains";

            ## Due to SSL encryption, increase the keepalive requests and timeout
            keepalive_requests 10;
            keepalive_timeout 60 60;

            root /home/deploy/mydomain/current/public/;
            error_log /home/deploy/mydomain/shared/log/nginx.error.log info;
            access_log /home/deploy/mydomain/shared/log/nginx.access.log main;

            ## Redirect from non-www to www
            if ($host = 'mydomain.com' ) {
                rewrite ^/(.*)$ https://www.mydomain.com/$1 permanent;
            }

            ## Caching images for 3 months
            location ~* \.(ico|css|js|gif|jpe?g|png)\?[0-9]+$ {
                expires 30d;
                break;
            }

            ## Deny illegal Host headers
            if ($host !~* ^(mydomain.com|www.mydomain.com)$ ) {
                return 444;
            }

            ## Deny certain User-Agents (case insensitive)
            if ($http_user_agent ~* (Baiduspider|webalta|Wget|WordPress|youdao|jakarta) ) {
                return 444;
            }

            ## Deny certain Referers (case insensitive)
            if ($http_referer ~* (dating|diamond|forsale|girl|jewelry|nudit|poker|porn|poweroversoftware|sex|teen|webcam|zippo|zongdo) ) {
                return 444;
            }

            ## Enable maintenance page. The page is copied in during capistrano deployment
            set $maintenance 0;
            if (-f $document_root/index.html) {
                set $maintenance 1;
            }
            if ($request_uri ~* (jpg|jpeg|gif|png|js|css)$) {
                set $maintenance 0;
            }
            if ($maintenance) {
                rewrite ^(.*)$ /index.html last;
                break;
            }

            location /uk {
                auth_basic "Restricted";
                auth_basic_user_file /etc/nginx/htpasswd;
                root /home/deploy/mydomain/current/public/;
                try_files $uri @fallback;
            }

            # Matches any request starting with '/br/blog' and proxies to the upstream blog instance
            location ^~ /br/blog {
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header Host $http_host;
                proxy_redirect off;
                if (!-f $request_filename) {
                    rewrite ^/br/blog$ /;
                    rewrite ^/br/blog/(.*)$ /$1;
                    proxy_pass http://mydomain_blog_br;
                    break;
                }
            }

            # Matches any request starting with '/de/blog' and proxies to the upstream blog instance
            location ^~ /de/blog {
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header Host $http_host;
                proxy_redirect off;
                if (!-f $request_filename) {
                    rewrite ^/de/blog$ /;
                    rewrite ^/de/blog/(.*)$ /$1;
                    proxy_pass http://mydomain_blog;
                    break;
                }
            }

            # Matches any request starting with '/lp' and proxies to the upstream landing page instance
            location /lp {
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header Host $http_host;
                proxy_redirect off;
                rewrite ^/lp(/?.*)$ /$1;
                proxy_pass http://mydomain_landingpage;
                break;
            }

            # Matches any request, and looks for static files before reverse proxying to the upstream app server socket
            location / {
                root /home/deploy/mydomain/current/public/;
                try_files $uri @fallback;
            }

            # Called after the above pattern, if no static file is found
            location @fallback {
                proxy_set_header X-Sendfile-Type X-Accel-Redirect;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header Host $http_host;
                proxy_redirect off;
                proxy_pass http://mydomain_rails;
            }

            ## All other errors get the generic error page
            error_page 400 401 402 403 404 405 406 407 408 409 410 411 412 413 414 415 416 417
                       495 496 497 500 501 502 503 504 505 506 507 /500.html;
            location /500.html {
                root /home/deploy/mydomain/current/public/;
            }
        }

    I defined the blog upstream elsewhere. As said, it works properly for mydomain.com, but not for www.mydomain.com. Any idea?
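
    Not from the original post - a small sketch of how one might compare what the two hostnames actually answer, since the Location header reveals which server block handled each request (the paths are illustrative):

        # HEAD requests; inspect the status line and any redirect issued
        curl -sI http://mydomain.com/some/page | grep -i '^\(HTTP\|Location\)'
        curl -sI http://www.mydomain.com/some/page | grep -i '^\(HTTP\|Location\)'
        curl -skI https://www.mydomain.com/de/blog | grep -i '^\(HTTP\|Location\)'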

    Read the article
