Search Results

Search found 1580 results for 'scheme'.


  • Functional programming constructs in non-functional programming languages

    - by Giorgio
    This question has been going through my mind quite a lot lately, and since I haven't found a convincing answer to it I would like to know if other users of this site have thought about it as well.

    In recent years, even though OOP is still the most popular programming paradigm, functional programming is getting a lot of attention. I have only used OOP languages for my work (C++ and Java), but I am trying to learn some FP in my free time because I find it very interesting. So, I started learning Haskell three years ago and Scala last summer. I plan to learn some SML and Caml as well, and to brush up my (little) knowledge of Scheme. Well, a lot of plans (too ambitious?), but I hope I will find the time to learn at least the basics of FP during the next few years. What is important for me is how functional programming works and how / whether I can use it for some real projects. I have already developed small tools in Haskell.

    In spite of my strong interest in FP, I find it difficult to understand why functional programming constructs are being added to languages like C#, Java, C++, and so on. As a developer interested in FP, I find it more natural to use, say, Scala or Haskell, instead of waiting for the next FP feature to be added to my favourite non-FP language. In other words, why would I want to have only some FP in my originally non-FP language instead of looking for a language that has better support for FP? For example, why should I be interested in having lambdas in Java if I can switch to Scala, where I have many more FP concepts and can access all the Java libraries anyway? Similarly: why do some FP in C# instead of using F# (to my knowledge, C# and F# can work together)?

    Java was designed to be OO. Fine: I can do OOP in Java (and I would like to keep using Java in that way). Scala was designed to support OOP + FP. Fine: I can use a mix of OOP and FP in Scala. Haskell was designed for FP: I can do FP in Haskell. If I need to tune the performance of a particular module, I can interface Haskell with some external routines in C. But why would I want to do OOP with just some basic FP in Java?

    So, my main point is: why are non-functional programming languages being extended with some functional concepts? Shouldn't it be more comfortable (interesting, exciting, productive) to program in a language that has been designed from the very beginning to be functional or multi-paradigm? Don't different programming paradigms integrate better in a language that was designed for them than in a language in which one paradigm was only added later?

    The first explanation I could think of is that, since FP is a new concept (it isn't new at all, but it is new for many developers), it needs to be introduced gradually. However, I remember my switch from imperative to OOP: when I started to program in C++ (coming from Pascal and C) I really had to rethink the way in which I was coding, and to do it pretty fast. It was not gradual. So, this does not seem to be a good explanation to me.

    Also, I asked myself if my impression is just plainly wrong due to lack of knowledge. E.g., do C# and C++11 support FP as extensively as, say, Scala or Caml do? In this case, my question would simply be non-existent. Or can it be that many non-FP programmers are not really interested in using functional programming, but find it practically convenient to adopt certain FP idioms in their non-FP language?
    IMPORTANT NOTE: Just in case (because I have seen several language wars on this site): I mentioned the languages I know best; this question is in no way meant to start comparisons between different programming languages to decide which is better / worse. Also, I am not interested in a comparison of OOP versus FP (pros and cons). The point I am interested in is understanding why FP is being introduced one bit at a time into existing languages that were not designed for it, even though there exist languages that were / are specifically designed to support FP.
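
    To make the "one bit at a time" point concrete, here is a small sketch (an illustration added for this discussion, not part of the original question) contrasting the pre-lambda anonymous-class style with the lambda-and-streams style that eventually shipped in Java 8:

    import java.util.Arrays;
    import java.util.List;

    public class LambdaSketch {
        public static void main(String[] args) {
            List<Integer> numbers = Arrays.asList(1, 2, 3, 4, 5);

            // Pre-lambda Java: a function value simulated with an anonymous class
            Runnable oldStyle = new Runnable() {
                @Override
                public void run() {
                    System.out.println("running the old way");
                }
            };
            oldStyle.run();

            // Java 8: the same FP ideas (functions as values, map/filter/reduce)
            int sumOfSquaredEvens = numbers.stream()
                    .filter(n -> n % 2 == 0) // predicate passed as a lambda
                    .mapToInt(n -> n * n)    // mapping function
                    .sum();                  // reduction
            System.out.println(sumOfSquaredEvens); // prints 20
        }
    }

    The same pipeline in Scala would be numbers.filter(_ % 2 == 0).map(n => n * n).sum, which is precisely the questioner's point: the construct exists in both, but only one of the languages was designed around it.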

    Read the article

  • Iron Speed Designer 7.0 - the great gets greater!

    - by GGBlogger
    For Immediate Release
    Iron Speed, Inc.
    Kelly Fisher, +1 (408) 228-3436, [email protected]
    http://www.ironspeed.com

    Iron Speed Version 7.0 Generates SharePoint Applications
    New! Support for Microsoft SharePoint speeds application generation and deployment

    San Jose, CA – June 8, 2010. Software development tools-maker Iron Speed, Inc. released Iron Speed Designer Version 7.0, the latest version of its popular Web 2.0 application generator. Iron Speed Designer generates rich, interactive database and reporting applications for .NET, Microsoft SharePoint and the Cloud.

    In addition to .NET applications, Iron Speed Designer V7.0 generates database-driven SharePoint applications. The ability to quickly create database-driven applications for SharePoint eliminates a lot of work, helping IT departments generate productivity-enhancing applications in just a few hours. Generated applications include integrated SharePoint application security and use SharePoint master pages.

    “It’s virtually impossible to build database-driven applications in SharePoint by hand. Iron Speed Designer V7.0 not only makes this possible, the tool makes it easy.” – Razi Mohiuddin, President, Iron Speed, Inc.

    Integrated SharePoint application security
    Generated applications include integrated SharePoint application security. SharePoint sites and their groups are used to retrieve security roles. Iron Speed Designer validates the user against a Microsoft SharePoint server on your network by retrieving the logged-in user’s credentials from the SharePoint Context.

    “The Iron Speed Designer generated application integrates seamlessly with SharePoint security, removing the hassle of designing, testing and approving your own security layer.” – Michael Landi, Solutions Architect, Light Speed Solutions

    SharePoint Solution Packages
    Iron Speed Designer V7.0 creates SharePoint Solution Packages (WSPs) for easy application deployment. Using the Deployment Wizard, a single application WSP is created and can be deployed to your SharePoint server.

    “Iron Speed Designer is the first product on the market that allows easy and painless deployment of database-driven .NET web applications inside the SharePoint environment.” – Bryan Patrick, Developer, Pseudo Consulting

    SharePoint master pages and themes
    In V7.0, generated applications use SharePoint master pages and contain the same content as other SharePoint pages. Generated applications use the current SharePoint color scheme and display standard SharePoint navigation controls on each page.

    “Iron Speed Designer preserves the look and feel of the SharePoint environment in deployed database applications without additional hand-coding.” – Kirill Dmitriev, Software Developer, Iron Speed, Inc.

    Iron Speed Designer Version 7.0 System Requirements
    Iron Speed Designer Version 7.0 runs on Microsoft Windows 7, Windows Vista, Windows XP, and Windows Server 2003 and 2008. It generates .NET Web applications for Microsoft SQL Server, Oracle, Microsoft Access and MySQL. These applications may be deployed on any machine running the .NET Framework. Iron Speed Designer supports Microsoft SharePoint 2007 and Windows SharePoint Services (WSS3). Find complete information about Iron Speed Designer Version 7.0 at www.ironspeed.com.

    About Iron Speed, Inc.
    Iron Speed is the leader in enterprise-class application generation. Our software development tools generate database and reporting applications in significantly less time and at lower cost than hand-coding. Our flagship product, Iron Speed Designer, is the fastest way to deliver applications for the Microsoft .NET and software-as-a-service cloud computing environments. With products built on decades of experience in enterprise application development and large-scale e-commerce systems, Iron Speed products eliminate the need for developers to choose between "full featured" and "on schedule." Founded in 1999, Iron Speed is well funded with a capital base of over $20M and strategic investors that include Arrow Electronics and Avnet, as well as executives from AMD, Excelan, Onsale, and Oracle. The company is based in San Jose, Calif., and is located online at www.ironspeed.com.

    Read the article

  • Brainless Backups

    - by Jesse
    I’m a software developer by trade, which means to my friends and family I’m just a “computer guy”. It’s assumed that I know everything about every facet of computing, from removing spyware to replacing hardware. I can also do all of this blindly over the phone, or after hearing a five-to-ten-word description of the problem over dinner ;-)

    In my position as CIO to my friends and family I’ve been in the unfortunate position of trying to recover music, pictures, or documents off of failed hard drives on more than one occasion. It’s not a great situation for anyone, and it’s always at these times that the importance of backups becomes so clear.

    Several months back a friend of mine found himself in this situation. The hard drive on his 8-year-old laptop failed and took a good number of his digital photos with it. I think most folks can deal with losing some of their music and even some of their documents, but it really stings to lose pictures of past events and loved ones. After ordering a new laptop, my friend went out and bought an external hard drive so that he could start keeping a backup of his data. As fate would have it, several months later the drive in his new laptop failed and he learned the hard way that simply buying the external hard drive isn’t enough… you actually have to copy your stuff over every once in a while!

    The importance of backup and recovery plans is (hopefully) well known in IT organizations. Well-executed backup plans are in place, and hopefully the backup and recovery process is tested regularly. When you’re talking about users at home, however, the need for these backups is often understood far too late. Most typical users can’t be expected to remember to back up their data regularly, and they also don’t always have the know-how to set up automated backups. For my friends and family members in this situation I recommend tools like Dropbox, Carbonite, and Mozy. Here’s why I like them:

    - They’re affordable: Dropbox and Mozy both have free offerings, though most people with lots of music and/or photos to back up will probably exceed the storage limitations of those free plans pretty quickly. Still, all three offer pretty affordable monthly or yearly plans. In my opinion, Carbonite’s unlimited storage plan for $50-$60 per year is the best value around.
    - They’re easy to set up: Both Dropbox and Carbonite are very easy to get set up and start using. I’ve never used Mozy, but I imagine it’s similarly painless to get up and running.
    - Backups are automatically “off-site”: A backup that is sitting on an external hard drive right next to your computer is great, but might not protect against flood damage, a power surge, or other disasters in that single location. These services exist “in the cloud”, so to speak, helping mitigate those concerns. Granted, this kind of backup scheme requires some trust in the 3rd party to protect your data from both malicious people and disastrous events. This truly is a bit of a double-edged sword, but I sleep well at night knowing that my data is being backed up and secured by a company made up of engineers that focus on the business of doing backups right.
    - Backups are “brainless”: What I like most about services like these is that they work “automagically” in the background, watching for files to be updated and automatically backing up those changes. There’s no need to remember to plug in that external drive and copy your data over.

    Since starting to recommend these services to my friends and family I find myself wearing my “data recovery” hat far less often. The only way backups are effective for your standard computer user is if they’re completely automatic. Backups need to be brainless, or they just won’t work.

    Read the article

  • Release 17 is here!

    - by Cheryl
    Our training development team has been busy updating courses to keep pace with the new release of CRM On Demand. Release 17 is here! And I heard recently that it's one of our biggest releases ever. A lot of new features and functionality for you to take advantage of - too much for me to cover in this blog post. But I thought I'd tell you about a few of my favorites - be sure to take a look at the What's New in Release 17 recording to see the full list, though... because I'm only going to touch on a few.

    Create your own look - okay, I'm starting with the fun stuff. There is a new customizable themes feature so that you can change the look of the application: colors, logo, the shape of the tabs. And it's really easy. There's also a whole new library of ready-made themes for you to pick from if you just want to go with one of those. Use this new feature to match the look of your company logo and color scheme. Or blaze new trails. You can create the look for the whole company, or a different look for each CRM On Demand role. This might especially come in handy if you're using the Partner Relationship Management (PRM) capabilities of CRM On Demand - you can create themes for your partner-facing roles to provide branded partner portals.

    Speaking of PRM - there are enhancements in this release to help companies better manage their partner relationships. A new Deal Registration object, which is separate from the Opportunity record, and better Special Pricing Request and Marketing Development Fund Request processes give a lot more flexibility in how companies can build and manage their relationships with partners.

    There are some new options for Forecasts in Release 17, too. You can now have more than one type of forecast generated each forecast period. For example, you might need to see a forecast of the total opportunity revenue for your sales team, as well as one that breaks down revenue by product. The forecast definition now lets you do that. Other options allow you to make submitting forecasts easier, split opportunity revenue across the team, and forecast that split appropriately. And - look for the new Forecast subject area in Answers, for building custom forecast reports.

    Ever wish you could use Workflow Rules to automatically reassign leads if they haven't been followed up on... or to email a manager if the status of a service request isn't changed after a specified period of time? Then check out the new Wait action for workflows. I think you'll be happy.

    Okay, enough for today. There is a lot to Release 17 that I didn't mention - a lot has been added for our Life Science industry edition, some new data visibility options, a new Data Loader tool, and more. Stay tuned for more blog posts about these and other Release 17 features in the coming weeks. In the meantime, don't forget about all of the resources we have for you to learn more (see my Learning About Release 17 blog post for details).

    Read the article

  • SPARC T4-2 Produces World Record Oracle Essbase Aggregate Storage Benchmark Result

    - by Brian
    Significance of Results

    Oracle's SPARC T4-2 server configured with a Sun Storage F5100 Flash Array and running Oracle Solaris 10 with Oracle Database 11g has achieved exceptional performance for the Oracle Essbase Aggregate Storage Option benchmark. The benchmark has upwards of 1 billion records, 15 dimensions and millions of members. Oracle Essbase is a multi-dimensional online analytical processing (OLAP) server and is well suited to SPARC T4 servers.

    - The SPARC T4-2 server (2 cpus) running Oracle Essbase 11.1.2.2.100 outperformed the previously published results on Oracle's SPARC Enterprise M5000 server (4 cpus) with Oracle Essbase 11.1.1.3 on Oracle Solaris 10 by 80%, 32% and 2x on Data Loading, Default Aggregation and Usage Based Aggregation, respectively.
    - The SPARC T4-2 server with Sun Storage F5100 Flash Array and Oracle Essbase running on Oracle Solaris 10 achieves sub-second query response times for 20,000 users in a 15 dimension database.
    - The SPARC T4-2 server configured with Oracle Essbase was able to aggregate and store values in the database for a 15 dimension cube in 398 minutes with 16 threads and in 484 minutes with 8 threads.
    - The Sun Storage F5100 Flash Array provides more than a 20% improvement out-of-the-box compared to a mid-size fiber channel disk array for default aggregation and user-based aggregation.
    - The Sun Storage F5100 Flash Array with Oracle Essbase provides the best combination for large Oracle Essbase databases, leveraging Oracle Solaris ZFS and taking advantage of high bandwidth for faster load and aggregation.

    Oracle Fusion Middleware provides a family of complete, integrated, hot-pluggable and best-of-breed products known for enabling enterprise customers to create and run agile and intelligent business applications. Oracle Essbase's performance demonstrates why so many customers rely on Oracle Fusion Middleware as their foundation for innovation.

    Performance Landscape

    System                              | Data Size (millions of items) | Database Load (min) | Default Aggregation (min) | Usage Based Aggregation (min)
    SPARC T4-2, 2 x SPARC T4 2.85 GHz   | 1000                          | 149                 | 398*                      | 55
    Sun M5000, 4 x SPARC64 VII 2.53 GHz | 1000                          | 269                 | 526                       | 115
    Sun M5000, 4 x SPARC64 VII 2.4 GHz  | 400                           | 120                 | 448                       | 18

    * – 398 minutes with CALCPARALLEL set to 16; 484 minutes with CALCPARALLEL set to 8.

    Configuration Summary

    Hardware Configuration:
    - 1 x SPARC T4-2
    - 2 x 2.85 GHz SPARC T4 processors
    - 128 GB memory
    - 2 x 300 GB 10000 RPM SAS internal disks

    Storage Configuration:
    - 1 x Sun Storage F5100 Flash Array
    - 40 x 24 GB flash modules
    - SAS HBA with 2 SAS channels
    - Data storage scheme: striped (RAID 0), Oracle Solaris ZFS

    Software Configuration:
    - Oracle Solaris 10 8/11
    - Installer V 11.1.2.2.100
    - Oracle Essbase Client v 11.1.2.2.100
    - Oracle Essbase v 11.1.2.2.100
    - Oracle Essbase Administration Services 64-bit
    - Oracle Database 11g Release 2 (11.2.0.3)
    - HP's Mercury Interactive QuickTest Professional 9.5.0

    Benchmark Description

    The objective of the Oracle Essbase Aggregate Storage Option benchmark is to showcase the ability of Oracle Essbase to scale in terms of user population and data volume for large enterprise deployments. Typical administrative and end-user operations for OLAP applications were simulated to produce benchmark results. The benchmark test results include:
    - Database Load: time elapsed to build a database, including outline and data load.
    - Default Aggregation: time elapsed to build aggregation.
    - Usage Based Aggregation: time elapsed to build the aggregate views proposed as a result of tracked retrieval queries.

    Summary of the data used for this benchmark:
    - 40 flat files, each 1.2 GB in size, 49.4 GB in total
    - 10 million rows per file, 1 billion rows total
    - 28 columns of data per row
    - Database outline has 15 dimensions (five of them are attribute dimensions)
    - Customer dimension has 13.3 million members
    - 3 rule files

    Key Points and Best Practices

    - The Sun Storage F5100 Flash Array was used to accelerate the application performance.
    - Setting data load threads (DLTHREADSPREPARE) to 64 and the load buffer to 6 improved data loading by about 9%.
    - Factors influencing aggregation materialization performance are the aggregate storage cache and the number of threads (CALCPARALLEL) for parallel view materialization. The optimal values for this workload on the SPARC T4-2 server were: Aggregate Storage Cache: 32 GB, CALCPARALLEL: 16.

    See Also
    - Oracle Essbase Aggregate Storage Option Benchmark on Oracle's SPARC T4-2 Server (oracle.com)
    - Oracle Essbase (oracle.com, OTN)
    - SPARC T4-2 Server (oracle.com, OTN)
    - Oracle Solaris (oracle.com, OTN)
    - Oracle Database 11g Release 2 Enterprise Edition (oracle.com, OTN)

    Disclosure Statement
    Copyright 2012, Oracle and/or its affiliates. All rights reserved. Oracle and Java are registered trademarks of Oracle and/or its affiliates. Other names may be trademarks of their respective owners. Results as of 28 August 2012.

    Read the article

  • Webcast Q&A: Demystifying External Authorization

    - by B Shashikumar
    Thanks to everyone who joined us on our webcast with the SANS Institute on "Demystifying External Authorization". Also a special thanks to Tanya Baccam from SANS for sharing her experiences reviewing Oracle Entitlements Server. If you missed the webcast, you can catch a replay of it here. Here is a compilation of the slides that were used in today's webcast: SANS Institute Product Review: Oracle Entitlements Server.

    We have captured the Q&A from the webcast for those who couldn't attend.

    Q: Is Oracle ADF integrated with Oracle Entitlements Server (OES)?
    A: In Oracle Fusion Middleware 11g and later, Oracle ADF, Oracle WebCenter, Oracle SOA Suite and other middleware products are all built on Oracle Platform Security Services (OPSS). OPSS provides many security functions like authentication, audit, credential stores, token validation, etc. OES is the authorization solution underlying OPSS. And OES 11g unifies different authorization mechanisms including Java2/ABAC/RBAC.

    Q: Which portal frameworks support the use of OES policies for portal entitlement decisions?
    A: Many portals, including Oracle WebCenter 11g, run natively on top of OES. The authorization engine in WebCenter is OES. Besides, OES offers out-of-the-box integration with Microsoft SharePoint, so SharePoint sites, sub-sites, web parts, navigation items and document access control can all be secured with OES. Several other portals have also been secured with OES, e.g. IBM WebSphere Portal.

    Q: How do we enforce Separation of Duties (SoD) rules using OES, and how does that integrate with a product like OIA?
    A: A product like OIM or OIA can be used to set up and govern SoD policies. OES enforces these policies at run time. Role mapping policies in OES can assign roles dynamically to users under certain conditions, so this makes it simple to enforce SoD policies inside an application at runtime.

    Q: Our web application has objects like buttons, text fields, drop-down lists, etc. Is there any "autodiscovery" capability that allows me to use/see those web page objects so I can start building policies over those objects? How does it work?
    A: There are a few different options with OES. When you build an app and make authorization calls with the app in the test environment, you can put OES in discovery mode and have OES register those authorization calls and decisions. Instead of doing this after the fact, an application like Oracle iFlex has built-in UI controls where, when the app is running, a script can intercept authorization calls and migrate those over to OES. And in Oracle ADF, a lot of resources are protected, so pages, task flows and other resources can be registered without OES knowing about them.

    Q: Does the current Oracle Fusion application use OES? The documentation does not seem to indicate it.
    A: The current version of Fusion Apps is using a preview version of OES. Soon it will be replaced with OES 11g.

    Q: Can OES secure mobile apps?
    A: Absolutely. Nowadays users are bringing their own devices, such as a smartphone or tablet, to work. With the Oracle IDM platform, we can tie identity context into the access management stack. With OES we can make use of context to enforce authorization for users accessing apps from mobile devices. For example, we can take into account different elements like authentication scheme, location, device type, etc. and tie all that information into an authorization decision.

    Q: Does Oracle Entitlements Server (OES) have an ESAPI implementation?
    A: OES is an authorization solution. ESAPI/OWASP is something we include in our platform security solution for all Oracle products, not specifically in OES.

    Q: ESAPI has an authorization API. Can I use that API to access OES?
    A: If the API supports an interface/SPI model that can be configured to invoke an external authorization system through some mechanism, then yes.

    Read the article

  • Tyrus 1.8

    - by Pavel Bucek
    Another version of Tyrus, the reference implementation of JSR 356 – Java API for WebSocket, is out! The complete list of fixes and features is below, but let me describe some of the new features in more detail. All information presented here is also available in the Tyrus documentation.

    What's new?

    First to mention is that the JSR 356 Maintenance Review Ballot is over and the change proposed for the 1.1 release was accepted. More details about changes in the API can be found in this article. The important part is that Tyrus 1.8 implements this API, meaning you can use lambda expressions and some features of Nashorn without the need for any workarounds.

    Almost all other features are related to client-side support, which was significantly improved in this release.

    Firstly – I have to admit that the Tyrus client contained a security issue: SSL hostname verification was not performed when connecting to "wss" endpoints. This was fixed as part of TYRUS-339 and resulted in some changes in the client configuration API. Now you can control whether hostname verification should be performed (SslEngineConfigurator#setHostnameVerificationEnabled(boolean)) or even set your own HostnameVerifier (please use carefully): #setHostnameVerifier(…). A detailed description can be found in the Host verification chapter.

    Another related enhancement is support for the HTTP Basic and Digest authentication schemes. The Tyrus client now enables users to provide credentials, and the underlying implementation will take care of everything else. Our implementation is strictly non-preemptive, so the login information is always sent as a response to a 401 HTTP status code. If Basic and Digest are not good enough and there is a need to use some custom scheme, or something which is not yet supported in Tyrus, a custom Authenticator can be registered and the authentication part of the handshake process will be handled by it. Please see the Client HTTP Authentication chapter in the user guide for more details.

    There are other features, like fine-grained threadpool configuration for the JDK client container, built-in HTTP redirect support, and some reshuffling related to unifying the location of client configuration classes and property definitions – every property should now be part of the ClientProperties class. All new features are described in the user guide, in the chapter Tyrus proprietary configuration.

    Update – Tyrus 1.8.1

    There was another, slightly late-reported issue related to running in environments with SecurityManager enabled, so this version fixes that. Other noteworthy fixes are TYRUS-355 and TYRUS-361; the first one is about an incorrect thread factory used for the shared container timeout, which resulted in the JVM waiting for that thread and not exiting as it should. The other issue enables relative URIs in the Location header when using the redirect feature.
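
    Putting the client-side pieces above together, a minimal connection sketch might look like the following (my illustration, not from the release notes; the endpoint class, URI and credentials are placeholders, and the exact names are worth checking against the Tyrus 1.8 javadoc):

    import java.net.URI;
    import org.glassfish.tyrus.client.ClientManager;
    import org.glassfish.tyrus.client.ClientProperties;
    import org.glassfish.tyrus.client.SslContextConfigurator;
    import org.glassfish.tyrus.client.SslEngineConfigurator;
    import org.glassfish.tyrus.client.auth.Credentials;

    public class SecureClientSketch {

        // minimal annotated endpoint so the sketch is self-contained
        @javax.websocket.ClientEndpoint
        public static class MyClientEndpoint {
            @javax.websocket.OnOpen
            public void onOpen(javax.websocket.Session session) {
                System.out.println("Connected: " + session.getId());
            }
        }

        public static void main(String[] args) throws Exception {
            ClientManager client = ClientManager.createClient();

            // SSL configuration; hostname verification is on by default since TYRUS-339
            SslEngineConfigurator ssl =
                    new SslEngineConfigurator(new SslContextConfigurator());
            ssl.setHostnameVerificationEnabled(true); // made explicit for clarity
            client.getProperties().put(ClientProperties.SSL_ENGINE_CONFIGURATOR, ssl);

            // HTTP Basic/Digest credentials, sent only in response to a 401
            client.getProperties().put(ClientProperties.CREDENTIALS,
                    new Credentials("user", "password")); // placeholder credentials

            client.connectToServer(MyClientEndpoint.class,
                    URI.create("wss://example.com/endpoint")); // placeholder URI
        }
    }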

    Links: Tyrus homepage, mailing list, JIRA

    Complete list of changes:

    Bug
    - [TYRUS-333] – Multiple endpoints on one client
    - [TYRUS-334] – When connection is closed by a peer, periodic heartbeat pong is not stopped
    - [TYRUS-336] – ReaderBuffer.getNextChars() keeps blocking a server thread after client has closed the session
    - [TYRUS-338] – JDK client SSL filter needs better synchronization during handshake phase
    - [TYRUS-339] – SSL hostname verification is missing
    - [TYRUS-340] – Test PathParamTest are not stable with JDK client
    - [TYRUS-341] – A control frame inside a stream of continuation frames is treated as the part of the stream
    - [TYRUS-343] – ControlFrameInDataStreamTest does not pass on GF
    - [TYRUS-345] – NPE is thrown, when shared container timeout property in JDK client is not set
    - [TYRUS-346] – IllegalStateException is thrown, when using proxy in JDK client
    - [TYRUS-347] – Introduce better synchronization in JDK client thread pool
    - [TYRUS-348] – When a client and server close connection simultaneously, JDK client throws NPE
    - [TYRUS-356] – Tyrus cannot determine the connection port for a wss URL
    - [TYRUS-357] – Exception thrown in MessageHandler#OnMessage is not caught in @OnError method
    - [TYRUS-359] – Client based on Java 7 Asynchronous IO makes application unexitable

    Improvement
    - [TYRUS-328] – JDK 1.7 AIO Client container – threads – (setting threadpool, limits, …)
    - [TYRUS-332] – Consolidate shared client properties into one file.
    - [TYRUS-337] – Create an SSL version of Basic Servlet test

    New Feature
    - [TYRUS-228] – Add client support for HTTP Basic/Digest

    Task
    - [TYRUS-330] – create/run tests/servlet/basic via wss
    - [TYRUS-335] – [clustering] – introduce RemoteSession and expose them via separate method (not include remote sessions in the getOpenSessions())
    - [TYRUS-344] – Introduce Client support for HTTP Redirect

    Read the article

  • Moving monarchs and dragons: migrating the JDK bugs to JIRA

    - by darcy
    Among insects, monarch butterflies and dragonflies have the longest migrations; migrating JDK bugs involves a long journey as well! As previously announced by Mark back in March, we've been working according to a revised plan to transition the JDK bug management from Sun's legacy system to an initially Oracle-internal JIRA instance, which will afterward be made visible and usable externally. I've been busily working on this project for the last few months and the team has made good progress on many aspects of the effort.

    JDK bugs will be imported into JIRA regardless of age; bugs will also be imported regardless of state, including closed bugs. Consequently, the JDK bug project will start pre-populated with over 100,000 existing bugs, some dating all the way back to 1994. This will allow a continuity of information and allow new issues to be linked to old ones.

    Using a custom import process, the Sun bug numbers will be preserved in JIRA. For example, the Sun bug with bug number 4040458 will become "JDK-4040458" in JIRA. In JIRA the project name, "JDK" in our case, is part of the bug's identifier. Bugs created after the JIRA migration will be numbered starting at 8000000; bugs imported from the legacy system have numbers ranging between 1000000 and 7999999.

    We're working with the bugs.sun.com team to try to maintain continuity of the ability to both read JDK bug information as well as to file new incidents. At least for now, the overall architecture of bugs.sun.com will be the same as it is today: it will be a gateway bridging to an Oracle-internal system, but the internal system will change to JIRA from the legacy database. Generally we are aiming to preserve the visibility of bugs currently viewable on bugs.sun.com; however, bugs in areas not related to the JDK will not be visible after the transition to JIRA. New incoming incidents will be sent to a separate JIRA project for initial triage before possibly being moved into the JDK project.

    JDK bug management leans heavily on being able to track the state of bugs in multiple releases, especially to coordinate delivering synchronized security releases (known as CPUs, critical patch updates, in Oracle parlance). For a security release, it is common for half a dozen or more release trains to be affected (for example, JDK 5, JDK 6 update, OpenJDK 6, JDK 7 update, JDK 8, virtual releases for HotSpot express, etc.). We've determined we need to track at least the tuple of (release, responsible engineer/assignee for the release, status in the release) for the release trains a fix is going into. To do this in JIRA, we are creating a separate port/backport issue type along with a custom link type to allow the multiple release information to be easily grouped and presented together.

    The Sun legacy system had a three-level classification scheme: product, category, and subcategory. Out of the box, JIRA only has a one-level classification, component. We've implemented a custom second-level classification, subcomponent. As part of the bug migration we've taken the opportunity to think about how bugs should be grouped under a two-level system, and the new system will be simpler and more regular. The main top-level components of the JDK product will include:

    - core-libs
    - client-libs
    - deploy
    - install
    - security-libs
    - other-libs
    - tools
    - hotspot

    For the libs areas, the primary name of the subcomponent will be the package of the API in question. In the core-libs component, there will be subcomponents like:

    - java.lang
    - java.lang.class_loading
    - java.math
    - java.util
    - java.util:i18n

    In the tools component, subcomponents will primarily correspond to command names in $JDK/bin like jar, javac, and javap.

    The first several bulk imports of the JDK bugs into JIRA have gone well and we're continuing to refine the import to have greater fidelity to the current data, including by reconstructing information not brought over in a structured fashion during the previous large JDK bug system migration back in 2004. We don't currently have a firm timeline of when the new system will be usable externally, but as it becomes available, I'll share further information in follow-up blog posts.

    Read the article

  • Don't Forget To Enjoy Life

    - by Justin
    I have a pretty clear stance on posting personal information in my blogs. I tend to avoid it almost instinctively. Part of that is because I am a somewhat private person. And the other is because I know how easy it is for personal information to be gathered and collected from sources such as blogs. So, this has remained a tech-only blog for me. I've only posted topics mostly related to issues I have encountered at work. In a way this blog is a 'bookmark' for me. If I post something here and run into the issue again, it allows me to refer back to a convenient place where the 'fix' is documented in a way that I understand.

    But today, I am posting something that speaks to everyone. Something PERSONAL. Honestly, I expect this entry to receive zero views. But if nothing else, I can come back to this blog one day when I'm having a bad day or something and run across this post. And I will be reminded... DON'T FORGET TO ENJOY LIFE. Say this to yourself out loud, right now.

    People, we can get caught up in some rather mundane details as we trek through life. It's so easy to lose track of what really matters that it should be no surprise to find yourself reading something like this and thinking to yourself, 'Yeah. You are right, man. Some of this crap I'm clinging on to right now is so small in the grand scheme of things.' I have no reservation, no shame, in saying that I am more often than not caught up in the ever-evolving world of 'shit that does not matter'. When you work in technology, you are surrounded by deadlines, upgrades, new versions, support 'end of life', etc. And by the time you get done with your 8 hours, you go home and put in a few more because you are STILL CAUGHT UP in the things you dealt with at work all day.

    DO YOURSELF A FAVOR. DO YOUR FAMILY AND FRIENDS A FAVOR. When you are done for the day, and you drive home, get those work-related things out of your head before you pull into the driveway. If you are still thinking on them when you park the car, leave the engine running, close your eyes and take a deep breath. If you believe in God, pray. If you don't, then meditate for a second with the INTENTION of letting go of the day and becoming the 'real you'. You may have forgotten who the real you is, so I'll remind you... THE REAL YOU IS THAT GUY OR GAL THAT LAUGHS, LOVES, AND LIVES. Be the real you as often as possible. If you can't do it during your 9 - 5, do it at home. YOUR RELATIONSHIPS AND YOUR PERSONAL HAPPINESS DEPEND ON IT.

    I am going to make you a promise right now. If you do what I've just said, your days will be longer and your joy will be exponential. I can't explain why I know this to be true. But I do know it. And if you are there reading this right now, you know it is true too. We both know it is true because it COMES FROM WITHIN EVERY MAN, WOMAN and CHILD. We are born into love and happiness. Let's not fade away into the darkness so easily found in this world. Let's keep the flame burning. The flame of passion. Passion for LIFE. Peace be with you.

    Read the article

  • Package Version Numbers, why are they so important

    - by Chris W Beal
    One of the design goals of IPS has been to allow people to easily move forward to a supported "surface" of components. That is to say, when you `# pkg update` your system, you get the latest set of components which all work together, based on the packages you already have installed. During development, this has simply meant you update to the latest "build" of the components. (During development, we build everything and publish everything every two weeks.)

    Now that we've released Solaris 11 using the IPS technologies, things are a bit more complicated. We need to be able to reflect all the types of Solaris release we are doing. For example, Solaris development builds, Solaris update builds and "Support Repository Updates" (the replacement for patches) all need a place in the version scheme. So simply saying "151" as the build number isn't sufficient to articulate what you are running, or indeed what is available to update to.

    In my previous blog post I talked about creating your own package, and gave an example FMRI of pkg://tools/[email protected],0.5.11-0.0.0. But it's probably more instructive to look at the FMRI of a Solaris package. The package "core-os" contains all the common utilities and daemons you need to use Solaris.

    $ pkg info core-os
    Name: system/core-os
    Summary: Core Solaris
    Description: Operating system core utilities, daemons, and configuration files.
    Category: System/Core
    State: Installed
    Publisher: solaris
    Version: 0.5.11
    Build Release: 5.11
    Branch: 0.175.0.0.0.2.1
    Packaging Date: Wed Oct 19 07:04:57 2011
    Size: 25.14 MB
    FMRI: pkg://solaris/system/core-os@0.5.11,5.11-0.175.0.0.0.2.1:20111019T070457Z

    The FMRI is what we will concentrate on here. In this package "solaris" is the publisher. You can use the pkg publisher command to see where the solaris publisher gets its bits from:

    $ pkg publisher
    PUBLISHER  TYPE    STATUS  URI
    solaris    origin  online  http://pkg.oracle.com/solaris/release/

    So we can see we get solaris packages from pkg.oracle.com. The package name is system/core-os. Names can be of arbitrary length, just to allow you to group similar packages together. Now on to the interesting bit, the versions: everything after the @ is part of the version, and IPS will only upgrade to a "higher" version.

    core-os@0.5.11,5.11-0.175.0.0.0.2.1:20111019T070457Z
    - core-os = package name
    - 0.5.11 = component version - in this case we're saying it's a SunOS 5.11 package
    - , = separator
    - 5.11 = built-on version - to indicate what OS version you built the package on
    - - = another separator
    - 0.175.0.0.0.2.1 = branch version
    - : = yet another separator
    - 20111019T070457Z = time stamp of when the package was published

    So from that we can see the branch version seems rather complex. It is necessarily so, to allow us to describe the hierarchy of releases we do. In this example we see the following:

    - 0.175: known as the trunkid, and incremented with each new release of Solaris. During Solaris 11 this should not change.
    - 0: the update release for Solaris. 0 for FCS, 1 for update 1, etc.
    - 0: the SRU for Solaris. 0 for FCS, 1 for SRU 1, etc.
    - 0: reserved for future use.
    - 2: build number of the SRU.
    - 1: nightly ID - only important for Solaris developers.

    Take a hypothetical example: core-os@0.5.11,5.11-0.175.1.5.0.4.1:<something>. This would be build 4 of SRU 5 of Update 1 of Solaris 11. This is actually documented in MOS article 1378134.1, which you can read if you have a support contract.
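
    To see those pieces in code form, here is a small illustrative sketch (mine, not from the original post) that pulls the core-os FMRI apart at the separators described above:

    public class FmriParser {
        public static void main(String[] args) {
            String fmri = "pkg://solaris/system/core-os@0.5.11,5.11-0.175.0.0.0.2.1:20111019T070457Z";

            // pkg://<publisher>/<package-name>@<version>
            String body = fmri.substring("pkg://".length());
            String publisher = body.substring(0, body.indexOf('/'));
            String name = body.substring(body.indexOf('/') + 1, body.indexOf('@'));
            String version = body.substring(body.indexOf('@') + 1);

            // <component>,<built-on>-<branch>:<timestamp>
            String component = version.substring(0, version.indexOf(','));
            String builtOn = version.substring(version.indexOf(',') + 1, version.indexOf('-'));
            String branch = version.substring(version.indexOf('-') + 1, version.indexOf(':'));
            String timestamp = version.substring(version.indexOf(':') + 1);

            // branch = trunkid (two dotted fields) . update . sru . reserved . build . nightly
            String[] f = branch.split("\\.");
            System.out.println("publisher = " + publisher);         // solaris
            System.out.println("name      = " + name);              // system/core-os
            System.out.println("component = " + component);         // 0.5.11
            System.out.println("built on  = " + builtOn);           // 5.11
            System.out.println("trunkid   = " + f[0] + "." + f[1]); // 0.175
            System.out.println("update    = " + f[2] + ", SRU = " + f[3]
                    + ", build = " + f[5] + ", nightly = " + f[6]); // 0, 0, 2, 1
            System.out.println("published = " + timestamp);         // 20111019T070457Z
        }
    }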

    Read the article

  • Fun with Python

    - by dotneteer
    I have been taking a class on Coursera recently. My formal education is in physics. Although I have been working as a developer for over 18 years and have learnt a lot of programming on the job, I still would like to gain some systematic knowledge in computer science. Coursera courses taught by Stanford professors provided me a wonderful chance. The three languages recommended for assignments are Java, C and Python. I am fluent in Java and have done some projects using C++/MFC/ATL in the past, but I would like to try something different this time.

    I first started with pure C. Soon I discovered that I had to write a lot of code outside the question I was trying to solve because of the very limited C standard library. For example, to read a list of values from a file, I have to read character by character until I hit a delimiter. If I need a list that can grow, I have to create a data structure myself, something that I have taken for granted in .Net or Java. Out of frustration, I switched to Python. I was pleasantly surprised to find that Python is very easy to learn. The tutorial on the official Python site has exactly the right pace for me, someone with experience in another programming language. After a couple of hours on the tutorial and a few more minutes of toying with IDLE, I was in business. I like the "batteries included" philosophy that gives me everything I need out of the box. For someone from a C# or Java background, curly braces are replaced by a colon (:) and indentation. Although I tend to miss the colon from time to time, I found that the idea of indentation is actually very nice once I got used to it. I also like the features of multiple assignment and multiple return values. When I need to return a by-product, I just add it to the list of returns.

    When would I use Python? I would use Python if I need to compute anything quickly. The language is very easy to use. Python has a good collection of libraries (packages). The REPL of the interpreter allows me to test ideas quickly before committing them to a script. Lots of computer science work has been ported from Lisp to Python. Some universities are even teaching SICP in Python.

    When wouldn't I use Python? I mostly would not use it in a managed environment, such as IronPython or Jython. Both .Net and Java already have a rich library, so one has to choose which library to use. If we use the managed runtime library, the code will be tied to the particular runtime and thus not portable. If we use the Python library, then we will face the relatively long start-up time. For this reason, I would not recommend using IronPython for WP7 development. The only situation where I see merit in managed Python is in a server application where I can preload Python so that the start-up time is not a concern. Using Python as a managed glue language is overkill most of the time. A managed Scheme could be a better glue language, as it is small enough to start up very fast.

    Read the article

  • Oh no! My padding's invalid!

    - by Simon Cooper
    Recently, I've been doing some work involving cryptography, and encountered the standard .NET CryptographicException: 'Padding is invalid and cannot be removed.' Searching on StackOverflow produces 57 questions concerning this exception; it's a very commonly encountered problem. So I decided to have a closer look.

    To test this, I created a simple project that encrypts and decrypts a byte array:

    // create some random data
    byte[] data = new byte[100];
    new Random().NextBytes(data);

    // use the Rijndael symmetric algorithm
    RijndaelManaged rij = new RijndaelManaged();
    byte[] encrypted;

    // encrypt the data using a CryptoStream
    using (var encryptor = rij.CreateEncryptor())
    using (MemoryStream encryptedStream = new MemoryStream())
    using (CryptoStream crypto = new CryptoStream(
        encryptedStream, encryptor, CryptoStreamMode.Write))
    {
        crypto.Write(data, 0, data.Length);
        encrypted = encryptedStream.ToArray();
    }

    // and decrypt it again
    using (var decryptor = rij.CreateDecryptor())
    using (CryptoStream crypto = new CryptoStream(
        new MemoryStream(encrypted), decryptor, CryptoStreamMode.Read))
    {
        byte[] decrypted = new byte[data.Length];
        crypto.Read(decrypted, 0, decrypted.Length);
    }

    Sure enough, I got exactly the same CryptographicException when trying to decrypt the data, even in this simple example. Well, I'm obviously missing something, if I can't even get this single method right! What does the exception message actually mean? What am I missing?

    Well, after playing around a bit, I discovered the problem was fixed by changing the encryption step to this:

    // encrypt the data using a CryptoStream
    using (var encryptor = rij.CreateEncryptor())
    using (MemoryStream encryptedStream = new MemoryStream())
    {
        using (CryptoStream crypto = new CryptoStream(
            encryptedStream, encryptor, CryptoStreamMode.Write))
        {
            crypto.Write(data, 0, data.Length);
        }
        encrypted = encryptedStream.ToArray();
    }

    Aaaah, so that's what the problem was. The CryptoStream wasn't flushing all its data to the MemoryStream before it was being read, and closing the stream causes it to flush everything to the backing stream. But why does this cause an error in padding?

    Cryptographic padding

    All symmetric encryption algorithms (of which Rijndael is one) operate on fixed block sizes. For Rijndael, the default block size is 16 bytes. This means the input needs to be a multiple of 16 bytes long. If it isn't, then the input is padded to 16 bytes using one of the padding modes. This is only done to the final block of data to be encrypted.

    CryptoStream has a special method to flush this final block of data: FlushFinalBlock. Calling Stream.Flush() does not flush the final block, as you might expect. Only by closing the stream or explicitly calling FlushFinalBlock is the final block, with any padding, encrypted and written to the backing stream. Without this call, the encrypted data is 16 bytes shorter than it should be.

    If this final block wasn't written, then the decryption gets to the final 16 bytes of the encrypted data and tries to decrypt it as the final block with padding. The end bytes don't match the padding scheme it's been told to use, therefore it throws an exception stating what is wrong: what the decryptor expects to be padding actually isn't, and so can't be removed from the stream.

    So, as well as closing the stream before reading the result, an alternative fix to my encryption code is the following:

    // encrypt the data using a CryptoStream
    using (var encryptor = rij.CreateEncryptor())
    using (MemoryStream encryptedStream = new MemoryStream())
    using (CryptoStream crypto = new CryptoStream(
        encryptedStream, encryptor, CryptoStreamMode.Write))
    {
        crypto.Write(data, 0, data.Length);
        // explicitly flush the final block of data
        crypto.FlushFinalBlock();
        encrypted = encryptedStream.ToArray();
    }

    Conclusion

    So, if your padding is invalid, make sure that you close or call FlushFinalBlock on any CryptoStream performing encryption before you access the encrypted data. Flush isn't enough. Only then will the final block be present in the encrypted data, allowing it to be decrypted successfully.

    Read the article

  • methods DSA_do_verify and SHA1 (OpenSSL library for Windows)

    - by Rei
    I am working on a program to authenticate an ENC signature file by using OpenSSL for Windows, specifically the DSA_do_verify(...) method and the SHA1(...) hash algorithm, but I am having problems: the result from DSA_do_verify is always 0 (invalid). I am using the signature file of test set 4B from the IHO S-63 Data Protection Scheme, and also the SA public key (downloadable from IHO) for verification. Below is my program. Can anyone help to see where I have gone wrong? I have tried many ways but failed to get the verification to be valid. Thanks.

    The signature file from test set 4B:

    // Signature part R: 3F14 52CD AEC5 05B6 241A 02C7 614A D149 E7D6 C408.
    // Signature part S: 44BB A3DB 8C46 8D11 B6DB 23BE 1A79 55E6 B083 7429.
    // Signature part R: 93F5 EF86 1FF6 BA6F 1C2B B9BB 7F36 0C80 2F9B 2414.
    // Signature part S: 4877 8130 12B4 50D8 3688 B52C 7A84 8E26 D442 8B6E.
    // BIG p C16C BAD3 4D47 5EC5 3966 95D6 94BC 8BC4 7E59 8E23 B5A9 D7C5 CEC8 2D65 B682 7D44 E953 7848 4730 C0BF F1F4 CB56 F47C 6E51 054B E892 00F3 0D43 DC4F EF96 24D4 665B.
    // BIG q B7B8 10B5 8C09 34F6 4287 8F36 0B96 D7CC 26B5 3E4D.
    // BIG g 4C53 C726 BDBF BBA6 549D 7E73 1939 C6C9 3A86 9A27 C5DB 17BA 3CAC 589D 7B3E 003F A735 F290 CFD0 7A3E F10F 3515 5F1A 2EF7 0335 AF7B 6A52 11A1 1035 18FB A44E 9718.
    // BIG y 15F8 A502 11C2 34BB DF19 B3CD 25D1 4413 F03D CF38 6FFC 7357 BCEE 59E4 EBFD B641 6726 5E5F 0682 47D4 B50B 3B86 7A85 FB4D 6E01 8329 A993 C36C FD9A BFB6 ED6D 29E0.

    dataServer_pkeyfile.txt (extracted from above):

    // BIG p C16C BAD3 4D47 5EC5 3966 95D6 94BC 8BC4 7E59 8E23 B5A9 D7C5 CEC8 2D65 B682 7D44 E953 7848 4730 C0BF F1F4 CB56 F47C 6E51 054B E892 00F3 0D43 DC4F EF96 24D4 665B.
    // BIG q B7B8 10B5 8C09 34F6 4287 8F36 0B96 D7CC 26B5 3E4D.
    // BIG g 4C53 C726 BDBF BBA6 549D 7E73 1939 C6C9 3A86 9A27 C5DB 17BA 3CAC 589D 7B3E 003F A735 F290 CFD0 7A3E F10F 3515 5F1A 2EF7 0335 AF7B 6A52 11A1 1035 18FB A44E 9718.
    // BIG y 15F8 A502 11C2 34BB DF19 B3CD 25D1 4413 F03D CF38 6FFC 7357 BCEE 59E4 EBFD B641 6726 5E5F 0682 47D4 B50B 3B86 7A85 FB4D 6E01 8329 A993 C36C FD9A BFB6 ED6D 29E0.

    Program abstract:

    QByteArray pk_data;
    QFile pk_file("./dataServer_pkeyfile.txt");
    if (pk_file.open(QIODevice::Text | QIODevice::ReadOnly)) {
        pk_data.append(pk_file.readAll());
    }
    pk_file.close();

    unsigned char ptr_sha_hashed[20];
    unsigned char *ptr_pk_data = (unsigned char *)pk_data.data();
    // openssl SHA1 hashing algorithm
    SHA1(ptr_pk_data, pk_data.length(), ptr_sha_hashed);

    DSA_SIG *dsasig = DSA_SIG_new();
    char ptr_r[] = "93F5EF861FF6BA6F1C2BB9BB7F360C802F9B2414"; // from test set 4B
    char ptr_s[] = "4877813012B450D83688B52C7A848E26D4428B6E"; // from test set 4B
    if (BN_hex2bn(&dsasig->r, ptr_r) == 0) return 0;
    if (BN_hex2bn(&dsasig->s, ptr_s) == 0) return 0;

    DSA *dsakeys = DSA_new();
    // the following values are from the SA public key
    char ptr_p[] = "FCA682CE8E12CABA26EFCCF7110E526DB078B05EDECBCD1EB4A208F3AE1617AE01F35B91A47E6DF63413C5E12ED0899BCD132ACD50D99151BDC43EE737592E17";
    char ptr_q[] = "962EDDCC369CBA8EBB260EE6B6A126D9346E38C5";
    char ptr_g[] = "678471B27A9CF44EE91A49C5147DB1A9AAF244F05A434D6486931D2D14271B9E35030B71FD73DA179069B32E2935630E1C2062354D0DA20A6C416E50BE794CA4";
    char ptr_y[] = "963F14E32BA5372928F24F15B0730C49D31B28E5C7641002564DB95995B15CF8800ED54E354867B82BB9597B158269E079F0C4F4926B17761CC89EB77C9B7EF8";
    if (BN_hex2bn(&dsakeys->p, ptr_p) == 0) return 0;
    if (BN_hex2bn(&dsakeys->q, ptr_q) == 0) return 0;
    if (BN_hex2bn(&dsakeys->g, ptr_g) == 0) return 0;
    if (BN_hex2bn(&dsakeys->pub_key, ptr_y) == 0) return 0;

    int result; // valid = 1, invalid = 0, error = -1
    result = DSA_do_verify(ptr_sha_hashed, 20, dsasig, dsakeys); // result is 0 (invalid)

    Read the article

  • Basic collision direction detection on 2d objects

    - by Osso Buko
    I am trying to develop a platform game for Android by using the Android GL Engine (ANGLE), and I am having trouble with collision detection. I have two rectangular objects, with no change in rotation. Here is a scheme of the attributes of the objects.

    What I am trying to do is: when objects collide, they block each other's movement in that direction. Every object has 4 booleans (bTop, bBottom, bRight, bLeft). For example, when bBottom is true the object can't advance in that direction. I came up with a solution, but it seems it only works in one dimension at a time: bottom and top, or right and left.

    public void collisionPlatform(MyObject a, MyObject b) {
        // first obj is player and second is a wall or a platform
        Vector p1 = a.mPosition;   // p1 = middle point of first object
        Vector d1 = a.mPosition2;  // width (mX) and height of first object
        Vector mSpeed1 = a.mSpeed; // speed vector of first object
        Vector p2 = b.mPosition;   // p2 = middle point of second object
        Vector d2 = b.mPosition2;  // width (mX) and height of second object
        Vector mSpeed2 = b.mSpeed; // speed vector of second object
        float xDist, yDist;  // distance between the middles of the two objects
        float width, height; // averages of the two objects' measurements, e.g. width = (width1 + width2) / 2
        xDist = (p1.mX - p2.mX); // calculate distance; if positive, first object is at the right
        yDist = (p1.mY - p2.mY); // if positive, first object is below
        width = d1.mX + d2.mX;   // calculate average measurements
        height = d1.mY + d2.mY;
        width /= 2;
        height /= 2;
        if (Math.abs(xDist) < width && Math.abs(yDist) < height) {
            // the two objects have collided
            if (p1.mY > p2.mY) { // first object is below second one
                a.bTop = true;
                if (a.mSpeed.mY < 0) a.mSpeed.mY = 0;
                b.bBottom = true;
                if (b.mSpeed.mY > 0) b.mSpeed.mY = 0;
            } else {
                a.bBottom = true;
                if (a.mSpeed.mY > 0) a.mSpeed.mY = 0;
                b.bTop = true;
                if (b.mSpeed.mY < 0) b.mSpeed.mY = 0;
            }
        }
    }

    As seen in my code, it simply will not work: when an object comes from the right or left it doesn't work. I tried a couple of ways other than this one, but none worked. I am guessing the right method will involve the mSpeed vector, but I have no idea how to do it. I'd really appreciate it if you could help. Sorry for my bad English.

    Read the article

  • Where to place web.xml outside WAR file for secure redirect?

    - by Silverhalide
    I am running Tomcat 7 and am deploying a bunch of applications delivered to me by a third party as WAR files. I'd like to force some of those apps to always use SSL. (All the "SSL" apps are in one service; other apps outside this discussion are in another service.)

    I've figured out how to use conf\web.xml to redirect apps from HTTP to HTTPS, but that applies to all applications hosted by Tomcat. I've also figured out how to put web.xml in an unpacked app's WEB-INF directory; that does the trick for that specific app, but runs the risk of being overwritten if our vendor gives us a new WAR file to deploy. I've also tried placing the web.xml file in various places under conf\service\host, or under appBase, but none seem to work.

    Is it possible to redirect some apps to SSL without forcing all apps to redirect, or to put the web.xml file inside the extracted WAR file?

    Here's my server.xml:

    <Service name="secure">
      <Connector port="80" connectionTimeout="20000" redirectPort="443"
                 URIEncoding="UTF-8" enableLookups="false" compression="on"
                 protocol="org.apache.coyote.http11.Http11Protocol"
                 compressableMimeType="text/html,text/xml,text/plain,text/javascript,application/json,text/css"/>
      <Connector port="443" URIEncoding="UTF-8" enableLookups="false" compression="on"
                 protocol="org.apache.coyote.http11.Http11Protocol"
                 compressableMimeType="text/html,text/xml,text/plain,text/javascript,application/json,text/css"
                 scheme="https" secure="true" SSLEnabled="true" sslProtocol="TLS"
                 keystoreFile="..." keystorePass="..." keystoreType="PKCS12"
                 truststoreFile="..." truststorePass="..." truststoreType="JKS"
                 clientAuth="false"
                 ciphers="SSL_RSA_WITH_RC4_128_MD5,SSL_RSA_WITH_RC4_128_SHA,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_DHE_RSA_WITH_AES_128_CBC_SHA,TLS_DHE_DSS_WITH_AES_128_CBC_SHA,SSL_RSA_WITH_AES_128_CBC_SHA"/>
      <Engine name="secure" defaultHost="localhost">
        <Realm className="org.apache.catalina.realm.UserDatabaseRealm" resourceName="UserDatabase"/>
        <Host name="localhost" appBase="webapps" unpackWARs="false" autoDeploy="true"
              xmlValidation="false" xmlNamespaceAware="false">
        </Host>
      </Engine>
    </Service>
    <Service name="mutual-secure">
      ...
    </Service>

    The content of the web.xml files I'm playing with is:

    <web-app xmlns="http://java.sun.com/xml/ns/javaee"
             xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
             xsi:schemaLocation="http://java.sun.com/xml/ns/javaee http://java.sun.com/xml/ns/javaee/web-app_3_0.xsd"
             version="3.0" metadata-complete="true">
      <security-constraint>
        <web-resource-collection>
          <web-resource-name>All applications</web-resource-name>
          <url-pattern>/*</url-pattern>
        </web-resource-collection>
        <user-data-constraint>
          <description>Redirect all requests to HTTPS</description>
          <transport-guarantee>CONFIDENTIAL</transport-guarantee>
        </user-data-constraint>
      </security-constraint>
    </web-app>

    (For conf\web.xml the security-constraint is added just before the end of the existing file, rather than creating a new file.)

    My webapps directory (currently) contains only the WAR files.

    Read the article

  • I thought the new AUTO_SAMPLE_SIZE in Oracle Database 11g looked at all the rows in a table so why do I see a very small sample size on some tables?

    - by Maria Colgan
    I recently got asked this question and thought it was worth a quick blog post to explain in a little more detail what is going on with the new AUTO_SAMPLE_SIZE in Oracle Database 11g and what you should expect to see in the dictionary views.

    Let's take the SH.CUSTOMERS table as an example. There are 55,500 rows in the SH.CUSTOMERS table. If we gather statistics on SH.CUSTOMERS using the new AUTO_SAMPLE_SIZE, but without collecting histograms, we can check what sample size was used by looking in the USER_TABLES and USER_TAB_COL_STATISTICS dictionary views. The sample size shown in USER_TABLES is 55,500 rows, or the entire table, as expected. In USER_TAB_COL_STATISTICS most columns show 55,500 rows as the sample size, except for four columns (CUST_SRC_ID, CUST_EFF_TO, CUST_MARITAL_STATUS, CUST_INCOME_LEVEL).

    The CUST_SRC_ID and CUST_EFF_TO columns have no sample size listed because there are only NULL values in these columns, and the statistics gathering procedure skips NULL values. The CUST_MARITAL_STATUS (38,072) and the CUST_INCOME_LEVEL (55,459) columns show fewer than 55,500 rows as their sample size because of the presence of NULL values in these columns. In the SH.CUSTOMERS table 17,428 rows have NULL as the value for the CUST_MARITAL_STATUS column (17,428 + 38,072 = 55,500), while 41 rows have a NULL value for the CUST_INCOME_LEVEL column (41 + 55,459 = 55,500). So we can confirm that the new AUTO_SAMPLE_SIZE algorithm will use all non-NULL values when gathering basic table and column level statistics.

    Now that we have a clear understanding of what sample size to expect, let's include histogram creation as part of the statistics gathering. Again we can look in the USER_TABLES and USER_TAB_COL_STATISTICS dictionary views to find the sample size used. The sample size seen in USER_TABLES is 55,500 rows, but if we look at the column statistics we see that it is the same as in the previous case, except for the columns CUST_POSTAL_CODE and CUST_CITY_ID. You will also notice that these columns now have histograms created on them. The sample size shown for these columns is not the sample size used to gather the basic column statistics. AUTO_SAMPLE_SIZE still uses all the rows in the table, minus the NULL rows, to gather the basic column statistics (55,500 rows in this case). The size shown is the sample size used to create the histogram on the column.

    When we create a histogram, we try to build it on a sample that has approximately 5,500 non-null values for the column. Typically all of the histograms required for a table are built from the same sample. In our example the histograms created on CUST_POSTAL_CODE and CUST_CITY_ID were built on a single sample of ~5,500 (5,450) rows, as these columns contained only non-null values. However, if one or more of the columns that require a histogram have null values, then the sample size may be increased in order to achieve a sample of 5,500 non-null values for those columns. In addition, if the difference between the number of nulls in the columns varies greatly, we may create multiple samples: one for the columns that have a low number of null values and one for the columns with a high number of null values. This scheme enables us to get close to 5,500 non-null values for each column.

    +Maria Colgan

    Read the article

  • HintPath vs ReferencePath in Visual Studio

    - by toasteroven
    What exactly is the difference between the HintPath in a .csproj file and the ReferencePath in a .csproj.user file? We're trying to commit to a convention where dependency DLLs are in a "releases" svn repo and all projects point to a particular release. Since different developers have different folder structures, relative references won't work, so we came up with a scheme to use an environment variable pointing to the particular developer's releases folder to create an absolute reference. So after a reference is added, we manually edit the project file to change the reference to an absolute path using the environment variable. I've noticed that this can be done with both the HintPath and the ReferencePath, but the only difference I could find between them is that HintPath is resolved at build-time and ReferencePath when the project is loaded into the IDE. I'm not really sure what the ramifications of that are though. I have noticed that VS sometimes rewrites the .csproj.user and I have to rewrite the ReferencePath, but I'm not sure what triggers that. I've heard that it's best not to check in the .csproj.user file since it's user-specific, so I'd like to aim for that, but I've also heard that the HintPath-specified DLL isn't "guaranteed" to be loaded if the same DLL is e.g. located in the project's output directory. Any thoughts on this?
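
    For reference, a sketch of the environment-variable approach expressed in the shared .csproj itself: MSBuild exposes environment variables as properties, so a HintPath can be made machine-independent without involving the .csproj.user file at all. RELEASES_DIR is a hypothetical variable each developer would point at their local checkout of the releases repo:

        <!-- Sketch: absolute reference via an environment variable (RELEASES_DIR is hypothetical) -->
        <Reference Include="ThirdParty.Lib">
          <HintPath>$(RELEASES_DIR)\v1.2\ThirdParty.Lib.dll</HintPath>
          <Private>True</Private>
        </Reference>

    Because the HintPath lives in the committed project file and is resolved at build time, this sidesteps the question of whether to check in the user-specific ReferencePath.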

    Read the article

  • Is this a good implementation of DefaultHttpClient and ThreadSafeClientConnManager in Android?

    - by johnrock
    In my Android app I am sharing one HttpClient for all activities/threads. All requests are made by calling getHttpClient().execute(httpget) or getHttpClient().execute(httppost). Is this implementation complete/correct and safe for multiple threads? Is there anything else missing, i.e. do I have to worry about releasing connections at all?

        private static HttpClient httpclient;

        public static HttpClient getHttpClient() {
            if (httpclient == null) {
                return getHttpClientNew();
            } else {
                return httpclient;
            }
        }

        public static synchronized HttpClient getHttpClientNew() {
            HttpParams params = new BasicHttpParams();
            ConnManagerParams.setMaxTotalConnections(params, 100);
            HttpProtocolParams.setVersion(params, HttpVersion.HTTP_1_1);
            HttpProtocolParams.setContentCharset(params, "UTF-8");
            HttpProtocolParams.setUseExpectContinue(params, false);
            HttpConnectionParams.setConnectionTimeout(params, 10000);
            HttpConnectionParams.setSoTimeout(params, 10000);
            SchemeRegistry schemeRegistry = new SchemeRegistry();
            schemeRegistry.register(new Scheme("http", PlainSocketFactory.getSocketFactory(), 80));
            ClientConnectionManager cm = new ThreadSafeClientConnManager(params, schemeRegistry);
            httpclient = new DefaultHttpClient(cm, params);
            return httpclient;
        }

    This is an example of how the httpclient is used:

        private void update() {
            HttpGet httpget = new HttpGet(URL);
            httpget.setHeader(USER_AGENT, userAgent);
            httpget.setHeader(CONTENT_TYPE, MGUtils.APP_XML);
            HttpResponse response;
            try {
                response = getHttpClient().execute(httpget);
                HttpEntity entity = response.getEntity();
                if (entity != null) {
                    // parse stuff
                }
            } catch (Exception e) {
            }
        }
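
    Two things stand out in this code. First, the null check in getHttpClient() happens outside the synchronized method, so two threads racing on first use can each end up constructing a client. Second, with a shared ThreadSafeClientConnManager each response entity should be fully consumed so its connection goes back to the pool; otherwise the pool can run dry under load. A hedged sketch of both fixes, reusing the same classes and field names as the code above (createHttpClient() stands for the construction logic currently in getHttpClientNew()):

        // Sketch: do the null check inside the lock so concurrent first
        // callers cannot build two clients.
        public static synchronized HttpClient getHttpClient() {
            if (httpclient == null) {
                httpclient = createHttpClient();
            }
            return httpclient;
        }

        // Sketch: always release the pooled connection by consuming the entity.
        private void update() {
            HttpGet httpget = new HttpGet(URL);
            try {
                HttpResponse response = getHttpClient().execute(httpget);
                HttpEntity entity = response.getEntity();
                if (entity != null) {
                    try {
                        // parse stuff from entity.getContent()
                    } finally {
                        entity.consumeContent(); // returns the connection to the manager
                    }
                }
            } catch (Exception e) {
                httpget.abort(); // on failure, make sure the connection is not leaked
            }
        }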

    Read the article

  • App cannot start at all in Android 2.2 (Froyo)

    - by Roland Lim
    Dear fellow Android developers & Google Engineers, my app has been running okay until the recent Froyo update. After installing the Android 2.2 SDK, I can compile my code without any errors. However, when I run it, it just force closes. Here's the log:

        05-23 10:15:13.463: DEBUG/AndroidRuntime(423): >>>>>>>>>>>>>> AndroidRuntime START <<<<<<<<<<<<<<
        05-23 10:15:13.463: DEBUG/AndroidRuntime(423): CheckJNI is ON
        05-23 10:15:14.193: DEBUG/AndroidRuntime(423): --- registering native functions ---
        05-23 10:15:15.293: DEBUG/AndroidRuntime(423): Shutting down VM
        05-23 10:15:15.303: DEBUG/dalvikvm(423): Debugger has detached; object registry had 1 entries
        05-23 10:15:15.333: INFO/AndroidRuntime(423): NOTE: attach of thread 'Binder Thread #3' failed
        05-23 10:15:16.003: DEBUG/AndroidRuntime(431): >>>>>>>>>>>>>> AndroidRuntime START <<<<<<<<<<<<<<
        05-23 10:15:16.013: DEBUG/AndroidRuntime(431): CheckJNI is ON
        05-23 10:15:16.273: DEBUG/AndroidRuntime(431): --- registering native functions ---
        05-23 10:15:17.392: INFO/ActivityManager(59): Starting activity: Intent { act=android.intent.action.MAIN cat=[android.intent.category.LAUNCHER] flg=0x10000000 cmp=com.handyapps.easymoney/.EasyMoney }
        05-23 10:15:17.602: DEBUG/AndroidRuntime(431): Shutting down VM
        05-23 10:15:17.662: DEBUG/dalvikvm(431): Debugger has detached; object registry had 1 entries
        05-23 10:15:17.742: INFO/AndroidRuntime(431): NOTE: attach of thread 'Binder Thread #3' failed
        05-23 10:15:17.912: INFO/ActivityManager(59): Start proc com.handyapps.easymoney for activity com.handyapps.easymoney/.EasyMoney: pid=438 uid=10035 gids={1006, 1015}
        05-23 10:15:19.032: DEBUG/AndroidRuntime(438): Shutting down VM
        05-23 10:15:19.032: WARN/dalvikvm(438): threadid=1: thread exiting with uncaught exception (group=0x4001d800)
        05-23 10:15:19.062: ERROR/AndroidRuntime(438): FATAL EXCEPTION: main
        05-23 10:15:19.062: ERROR/AndroidRuntime(438): java.lang.RuntimeException: Unable to instantiate application com.handyapps.easymoney.EasyMoney: java.lang.ClassCastException: com.handyapps.easymoney.EasyMoney
        05-23 10:15:19.062: ERROR/AndroidRuntime(438):     at android.app.ActivityThread$PackageInfo.makeApplication(ActivityThread.java:649)
        05-23 10:15:19.062: ERROR/AndroidRuntime(438):     at android.app.ActivityThread.handleBindApplication(ActivityThread.java:4232)
        05-23 10:15:19.062: ERROR/AndroidRuntime(438):     at android.app.ActivityThread.access$3000(ActivityThread.java:125)
        05-23 10:15:19.062: ERROR/AndroidRuntime(438):     at android.app.ActivityThread$H.handleMessage(ActivityThread.java:2071)
        05-23 10:15:19.062: ERROR/AndroidRuntime(438):     at android.os.Handler.dispatchMessage(Handler.java:99)
        05-23 10:15:19.062: ERROR/AndroidRuntime(438):     at android.os.Looper.loop(Looper.java:123)
        05-23 10:15:19.062: ERROR/AndroidRuntime(438):     at android.app.ActivityThread.main(ActivityThread.java:4627)
        05-23 10:15:19.062: ERROR/AndroidRuntime(438):     at java.lang.reflect.Method.invokeNative(Native Method)
        05-23 10:15:19.062: ERROR/AndroidRuntime(438):     at java.lang.reflect.Method.invoke(Method.java:521)
        05-23 10:15:19.062: ERROR/AndroidRuntime(438):     at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:868)
        05-23 10:15:19.062: ERROR/AndroidRuntime(438):     at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:626)
        05-23 10:15:19.062: ERROR/AndroidRuntime(438):     at dalvik.system.NativeStart.main(Native Method)
        05-23 10:15:19.062: ERROR/AndroidRuntime(438): Caused by: java.lang.ClassCastException: com.handyapps.easymoney.EasyMoney
        05-23 10:15:19.062: ERROR/AndroidRuntime(438):     at android.app.Instrumentation.newApplication(Instrumentation.java:957)
        05-23 10:15:19.062: ERROR/AndroidRuntime(438):     at android.app.Instrumentation.newApplication(Instrumentation.java:942)
        05-23 10:15:19.062: ERROR/AndroidRuntime(438):     at android.app.ActivityThread$PackageInfo.makeApplication(ActivityThread.java:644)
        05-23 10:15:19.062: ERROR/AndroidRuntime(438):     ... 11 more
        05-23 10:15:19.082: WARN/ActivityManager(59): Force finishing activity com.handyapps.easymoney/.EasyMoney
        05-23 10:15:19.592: WARN/ActivityManager(59): Activity pause timeout for HistoryRecord{450018f0 com.handyapps.easymoney/.EasyMoney}

    The Android manifest file:

        <uses-permission android:name="android.permission.READ_PHONE_STATE"/>
        <uses-permission android:name="android.permission.CAMERA"/>
        <uses-permission android:name="android.permission.WRITE_EXTERNAL_STORAGE" />
        <uses-feature android:name="android.hardware.camera" />
        <uses-sdk android:minSdkVersion="3" android:targetSdkVersion="4" />
        <application android:icon="@drawable/icon" android:name="@string/app_name" android:label="@string/app_name" android:debuggable="false">
            <activity android:name=".EasyMoney" android:label="@string/app_name" android:theme="@android:style/Theme.NoTitleBar" android:launchMode="singleTask" android:clearTaskOnLaunch="true">
                <intent-filter>
                    <action android:name="android.intent.action.MAIN" />
                    <category android:name="android.intent.category.LAUNCHER" />
                </intent-filter>
            </activity>
            <activity android:name=".TranList" android:label="@string/app_name" android:theme="@android:style/Theme.Light.NoTitleBar"/>
            <activity android:name=".TranEdit" android:theme="@android:style/Theme.Light.NoTitleBar" android:windowSoftInputMode="stateAlwaysHidden"/>
            <activity android:name=".BillReminderEdit" android:theme="@android:style/Theme.Light.NoTitleBar" android:windowSoftInputMode="stateAlwaysHidden"/>
            <activity android:name=".BillReminderList" android:launchMode="singleTop" android:theme="@android:style/Theme.Light.NoTitleBar"/>
            <activity android:name=".BudgetList" android:theme="@android:style/Theme.Light.NoTitleBar"/>
            <activity android:name=".BudgetEdit" android:theme="@android:style/Theme.Light.NoTitleBar" android:windowSoftInputMode="stateAlwaysHidden"/>
            <activity android:name=".Search" android:theme="@style/CustomDialogTheme" android:windowSoftInputMode="stateAlwaysHidden"/>
            <activity android:name=".PasscodeEntry" android:theme="@style/CustomDialogTheme" android:windowSoftInputMode="stateAlwaysHidden" android:screenOrientation="portrait"/>
            <activity android:name=".AccountList" android:theme="@android:style/Theme.Light.NoTitleBar"/>
            <activity android:name=".AccountEdit" android:theme="@android:style/Theme.Light.NoTitleBar" android:windowSoftInputMode="stateAlwaysHidden"/>
            <activity android:name=".UserSettingsEdit" android:theme="@android:style/Theme.Light.NoTitleBar" android:windowSoftInputMode="stateAlwaysHidden"/>
            <activity android:name=".CurrencySettingsEdit" android:theme="@android:style/Theme.Light.NoTitleBar" android:windowSoftInputMode="stateAlwaysHidden"/>
            <activity android:name=".DisplaySettingsEdit" android:theme="@android:style/Theme.Light.NoTitleBar" android:windowSoftInputMode="stateAlwaysHidden"/>
            <activity android:name=".BackupSettingsEdit" android:theme="@android:style/Theme.Light.NoTitleBar" android:windowSoftInputMode="stateAlwaysHidden"/>
            <activity android:name=".CategoryList" android:theme="@android:style/Theme.Light.NoTitleBar" />
            <activity android:name=".CategoryEdit" android:theme="@android:style/Theme.Light.NoTitleBar" android:windowSoftInputMode="stateAlwaysHidden"/>
            <activity android:name=".ExpenseByCategory" android:theme="@android:style/Theme.Light.NoTitleBar"/>
            <activity android:name=".BalanceReport" android:theme="@android:style/Theme.Light.NoTitleBar"/>
            <activity android:name=".MonthlyExpenseReport" android:theme="@android:style/Theme.Light.NoTitleBar"/>
            <activity android:name=".MonthlyIncomeReport" android:theme="@android:style/Theme.Light.NoTitleBar"/>
            <activity android:name=".MonthlyCashflowReport" android:theme="@android:style/Theme.Light.NoTitleBar"/>
            <activity android:name=".PhotoList" android:theme="@android:style/Theme.Light.NoTitleBar" />
            <activity android:name=".ExpenseByPayee" android:theme="@android:style/Theme.Light.NoTitleBar"/>
            <activity android:name=".ExpenseBySubCategory" android:theme="@android:style/Theme.Light.NoTitleBar"/>
            <service android:name="StartAlarm_Service">
                <intent-filter>
                    <action android:name="com.handyapps.easymoney.StartAlarm_Service" />
                </intent-filter>
            </service>
            <service android:name=".AlarmService_Service" android:process=":remote" />
            <receiver android:name="StartupIntentReceiver">
                <intent-filter>
                    <action android:name="android.intent.action.BOOT_COMPLETED" />
                    <category android:name="android.intent.category.HOME" />
                </intent-filter>
            </receiver>
            <receiver android:name=".WidgetProvider" android:label="@string/widget_name">
                <intent-filter>
                    <action android:name="android.appwidget.action.APPWIDGET_UPDATE" />
                </intent-filter>
                <meta-data android:name="android.appwidget.provider" android:resource="@xml/widget" />
            </receiver>
            <receiver android:name=".WidgetProvider" android:label="@string/widget_name">
                <intent-filter>
                    <action android:name="android.appwidget.action.APPWIDGET_UPDATE" />
                    <data android:scheme="easymoney_widget" />
                </intent-filter>
                <meta-data android:name="android.appwidget.provider" android:resource="@xml/widget" />
            </receiver>
            <receiver android:name=".WidgetProvider">
                <intent-filter>
                    <action android:name="com.handyapps.easymoney.WIDGET_CONTROL" />
                    <data android:scheme="easymoney_widget" />
                </intent-filter>
            </receiver>
        </application>

    The main startup class is com.handyapps.easymoney.EasyMoney. I placed a breakpoint at the start of the onCreate() method but I discovered it didn't even reach there. Somehow, the application just couldn't be loaded in Android 2.2... but it works perfectly fine for all the previous Android versions. Been trying to find the cause for the past 2 days but am totally stumped!! Any help will be greatly appreciated!!!! Thanks!! Roland

    Read the article

  • How should I distribute a pre-built perl module, and what version of perl do I build for?

    - by Mike Ellery
    This is probably a multi-part question. Background: we have a native (C++) library that is part of our application and we have managed to use SWIG to generate a Perl wrapper for this library. We'd now like to distribute this Perl module as part of our application. My first question: how should I distribute this module? Is there a standard way to package pre-built Perl modules? I know there is PPM for the ActiveState distro, but I also need to distribute this for Linux systems. I'm not even sure what files are required to distribute, but I'm guessing it's the .pm and .so files, at a minimum. My next question: it looks like I might need to build my module project for each version of Perl that I want to support. How do I know which Perl versions I should build for? Are there any standard guidelines... or better yet, a way to build a package that will work with multiple versions of Perl? Sorry if my questions make no sense - I'm fairly new to the compiled-module aspects of Perl. CLARIFICATION: the underlying compiled source is proprietary (closed source), so I can't just ship source code and the appropriate make artifacts for the package. Wish I could, but it's not going to happen in this case. Thus, I need a sane scheme for packaging prebuilt binary files for my module.
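
    Because the compiled (.so) layer is tied to the target perl's architecture and version, the usual trick is to key the install layout on perl's own identifiers. A minimal Perl sketch of reading them at install time (the Config module is core, so this runs on any target perl; the releases/ directory convention is just a hypothetical layout):

        #!/usr/bin/perl
        use strict;
        use warnings;
        use Config;

        # The values that determine binary compatibility of a compiled (XS) module:
        #   archname - OS/CPU/threading flavour, e.g. "x86_64-linux-thread-multi"
        #   version  - the perl version the module was built against, e.g. "5.10.1"
        my $slot = "$Config{archname}/$Config{version}";
        print "install prebuilt module from releases/$slot\n";

    An installer script can use these two values to pick the matching prebuilt .pm/.so bundle, which also answers "which perls to build for": one build per archname/version combination you intend to support.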

    Read the article

  • Gzip http compression problem on iis7

    - by wpfwannabe
    My web hosting provider is running IIS7 and I am having loads of trouble getting gzip compression to work properly. Host admins say compression is installed. I can confirm compression using some online checking services but not with others. The PageSpeed Firefox add-on also says the site is uncompressed. I am personally sitting behind a Squid proxy, but the web.config settings should take care of the proxy issue. Below is the relevant web.config snippet. Most of it is borrowed from various sites. Any thoughts?

        <urlCompression doDynamicCompression="true"
                        dynamicCompressionBeforeCache="true"
                        doStaticCompression="true" />
        <httpCompression cacheControlHeader="max-age=86400"
                         noCompressionForHttp10="False"
                         noCompressionForProxies="False"
                         sendCacheHeaders="True"
                         dynamicCompressionEnableCpuUsage="89"
                         dynamicCompressionDisableCpuUsage="90"
                         minFileSizeForComp="1"
                         noCompressionForRange="False">
          <scheme name="gzip" dll="%Windir%\system32\inetsrv\gzip.dll" />
          <dynamicTypes>
            <add mimeType="text/*" enabled="true" />
            <add mimeType="message/*" enabled="true" />
            <add mimeType="application/javascript" enabled="true" />
            <add mimeType="*/*" enabled="false" />
          </dynamicTypes>
          <staticTypes>
            <add mimeType="text/*" enabled="true" />
            <add mimeType="message/*" enabled="true" />
            <add mimeType="application/javascript" enabled="true" />
            <add mimeType="*/*" enabled="false" />
          </staticTypes>
        </httpCompression>
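
    One IIS7 quirk worth ruling out when different checkers disagree: static compression is only applied once a URL has been requested "frequently" (by default a couple of hits within a ten-second window), so a single probe can legitimately see an uncompressed response. A hedged sketch of forcing compression on the first hit; serverRuntime lives under system.webServer, and on shared hosting this section may be locked at the server level, so it could need the host admins to apply it:

        <system.webServer>
          <!-- Sketch: compress on the first request instead of waiting for the
               frequent-hit heuristic (default threshold is 2 hits per 10 seconds). -->
          <serverRuntime frequentHitThreshold="1" frequentHitTimePeriod="00:00:10" />
        </system.webServer>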

    Read the article

  • UISearchBar and UINavigationItem

    - by tbehunin
    I can't seem to get a UISearchBar to position itself from the far left to the far right in the navigation bar. In the -(void)viewDidLoad method, I have the following code:

        UISearchBar *sb = [[UISearchBar alloc] initWithFrame:self.tableView.tableHeaderView.frame];
        sb.delegate = self;
        self.navigationItem.titleView = sb;
        [sb sizeToFit];
        [sb release];

    When you build and run, it looks just fine at first glance. However, looking more closely, you can tell there is a margin/space on the left. This wouldn't be a big deal in the grand scheme of things, but when I tap the search bar to start a search, I animate the cancel button into view. Because the search bar is positioned slightly to the right, the animation is jerky and the cancel button falls off the end. It seems as if the UINavigationItem is like a table with three cells, where there is padding on the first and last which I can't remove - nor does there seem to be a way to 'merge' it all together and then place the search bar there. I know this look is possible, because the App Store search has a search bar in the navigation bar and it goes all the way to the edges. Anyone know how to get the search bar to go all the way to the edges so my slide-in cancel button animation will work properly?

    Read the article

  • WCF, Metadata and BIGIP - Can I force the correct url for the WSDL items?

    - by Yossi Dahan
    We have a WCF service hosted on ServerA, which is a server with no direct Internet access and a non-Internet-routable IP address. The service is fronted by BIGIP, which handles SSL encryption and decryption and forwards the unencrypted request to ServerA on a specific port (at the moment it does NOT actually do any load balancing, but that is likely to be added in the future). What that means is that our clients call the service through https://www.OurDomain.com/ServiceUrl and reach our service on http://ServerA:85/ServiceUrl through the BIGIP device. When we browse to the WSDL published on https://www.OurDomain.com/ServiceUrl, all the addresses contained in the WSDL are based on the http://ServerA:85/ServiceUrl base address. We figured out that we could use the host headers setting to set the domain, but our problem is that while this would sort out the domain, we would still be using the wrong scheme: it would use http://www.OurDomain.com/ServiceUrl while we need it to be https. Also, as we have other services (asmx based) hosted on that server, we had some issues setting the host headers, and so we thought we could get away with creating another site on the server (using, say, port 82) and setting the host header on that; now, on top of the http/https problem, we have an issue as the WSDL contains the port number in all the URLs, whereas BIGIP works on port 443 (for the SSL). Is there a more flexible solution than implementing host headers? Ideally we need to retain flexibility and ease of supportability. Thanks for any help…
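
    For what it's worth, later WCF versions (.NET 4.5 and up) grew a behavior aimed at exactly this front-door-proxy situation: useRequestHeadersForMetadataAddress rewrites the host and port in the generated WSDL from the incoming request's Host header. A hedged sketch of the configuration; note that it fixes host and port, and whether it also corrects the http-vs-https scheme in an SSL-offload setup like this one would need verifying:

        <!-- Sketch: rewrite metadata (WSDL) addresses from the request's Host header.
             defaultPorts maps each scheme to the port the public address should use. -->
        <behaviors>
          <serviceBehaviors>
            <behavior>
              <useRequestHeadersForMetadataAddress>
                <defaultPorts>
                  <add scheme="http" port="80" />
                  <add scheme="https" port="443" />
                </defaultPorts>
              </useRequestHeadersForMetadataAddress>
            </behavior>
          </serviceBehaviors>
        </behaviors>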

    Read the article

  • Using .NET's HttpWebRequest to download a multitude of files in a row

    - by Cornelius
    I have an application that needs to download several files in a row in succession (sometimes a few thousand). However, what ends up happening when several files need to be downloaded is that I get an exception with an inner exception of type SocketException and the error code 10048 (WSAEADDRINUSE). I did some digging and basically it's because the machine has run out of ephemeral ports (the closed sockets sit in TIME_WAIT for 240s or so before they become available again) - not coincidentally it starts happening around the 1024-file range. I would expect that the HttpWebRequest/ServicePointManager would be reusing my connection, but apparently it is not (and the files are https, so that may be part of it). I never saw this problem in the C++ code that this was ported from (but that doesn't mean it didn't ever happen - I'd be surprised if it did, though). I am properly closing the WebRequest object, and the HttpWebRequest object has KeepAlive set to true by default. Next my intent is to fiddle around with ServicePointManager.SetTcpKeepAlive(). However, I can't see how more people haven't run into this problem. Has anyone else run into the problem, and if so, what did you do to get around it? Currently I have a retry scheme that detects this error and waits it out, but that doesn't seem like the right thing to do. Here's some basic code to verify what I'm doing (just in case I'm missing closing something):

        WebRequest webRequest = WebRequest.Create(uri);
        webRequest.Method = "GET";
        webRequest.Credentials = new NetworkCredential(username, password);
        WebResponse webResponse = webRequest.GetResponse();
        try
        {
            using (Stream stream = webResponse.GetResponseStream())
            {
                // read the stream
            }
        }
        finally
        {
            webResponse.Close();
        }
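
    A hedged C# sketch of the usual mitigations for this kind of ephemeral-port exhaustion: raise connection reuse rather than the OS port limit, and drain each response fully so its connection is eligible for reuse. The properties used are standard System.Net members; the URL is a placeholder, and whether this fully cures it against an HTTPS endpoint needs testing, since SSL connections are pooled per ServicePoint too:

        using System;
        using System.IO;
        using System.Net;

        class Downloader
        {
            static void Main()
            {
                // Allow more keep-alive connections per host than the default of 2;
                // reuse is what avoids burning one local port per file.
                ServicePointManager.DefaultConnectionLimit = 16;

                // Keep idle pooled connections alive between downloads in a burst.
                ServicePointManager.MaxServicePointIdleTime = 90 * 1000; // milliseconds

                var request = (HttpWebRequest)WebRequest.Create("https://example.com/file1");
                request.KeepAlive = true; // the default, stated here for clarity

                using (WebResponse response = request.GetResponse())
                using (Stream stream = response.GetResponseStream())
                {
                    // Drain the stream completely; an unread tail can prevent the
                    // underlying connection from being returned for reuse.
                    byte[] buffer = new byte[8192];
                    while (stream.Read(buffer, 0, buffer.Length) > 0) { }
                }
            }
        }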

    Read the article

  • posting nutch data into a BASIC auth secured Solr instance

    - by mlathe
    Hi. I've secured a Solr instance using BASIC auth, kind of how it is shown here: http://blog.comtaste.com/2009/02/securing_your_solr_server_on_t.html Now I'm trying to update my batch processes to push data into the authenticated instance. The ones using curl are easy, but I also have a Nutch crawl that uses the "solrindex" command to push data into Solr. When I do that I get this error:

        2010-02-22 12:09:28,226 INFO  auth.AuthChallengeProcessor - basic authentication scheme selected
        2010-02-22 12:09:28,229 INFO  httpclient.HttpMethodDirector - No credentials available for BASIC 'Tomcat Manager Application'@ninja:5500
        2010-02-22 12:09:28,236 WARN  mapred.LocalJobRunner - job_local_0001
        org.apache.solr.common.SolrException: Unauthorized
        Unauthorized request: http://ninja:5500/solr/foo/update?wt=javabin&version=2.2
            at org.apache.solr.client.solrj.impl.CommonsHttpSolrServer.request(CommonsHttpSolrServer.java:343)
            at org.apache.solr.client.solrj.impl.CommonsHttpSolrServer.request(CommonsHttpSolrServer.java:183)
            at org.apache.solr.client.solrj.request.UpdateRequest.process(UpdateRequest.java:217)
            at org.apache.solr.client.solrj.SolrServer.add(SolrServer.java:48)
            at org.apache.nutch.indexer.solr.SolrWriter.close(SolrWriter.java:69)
            at org.apache.nutch.indexer.IndexerOutputFormat$1.close(IndexerOutputFormat.java:48)
            at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:447)
            at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:170)
        2010-02-22 12:09:29,134 FATAL solr.SolrIndexer - SolrIndexer: java.io.IOException: Job failed!
            at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:1232)
            at org.apache.nutch.indexer.solr.SolrIndexer.indexSolr(SolrIndexer.java:73)
            at org.apache.nutch.indexer.solr.SolrIndexer.run(SolrIndexer.java:95)
            at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:65)
            at org.apache.nutch.indexer.solr.SolrIndexer.main(SolrIndexer.java:104)

    Apparently Nutch uses SolrJ to push the content, and after going through the SolrJ code, it's clear that it uses commons-httpclient without providing a way to set the credentials. Here are my questions:

    - Is this possible to do, i.e. push from Nutch into a BASIC-auth-secured Solr instance?
    - Is it possible to tell commons-httpclient about a credential without explicitly doing an _httpclient.getState().setCredentials(...)?
    - Any other ideas?

    One idea I had was to use an IP-filtering Valve for just the "update" Solr web services. That would mean you could only make an update call from certain nodes. Thanks
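
    On the second question: SolrJ's CommonsHttpSolrServer does expose its underlying commons-httpclient instance, so one hedged approach is a small patch to Nutch's SolrWriter (the class in the stack trace above) that sets credentials right after the server is constructed. A sketch only - SolrWriter's internals vary by Nutch version, and the user/password values are hypothetical placeholders:

        import java.net.MalformedURLException;
        import org.apache.commons.httpclient.HttpClient;
        import org.apache.commons.httpclient.UsernamePasswordCredentials;
        import org.apache.commons.httpclient.auth.AuthScope;
        import org.apache.solr.client.solrj.impl.CommonsHttpSolrServer;

        public class AuthenticatedSolr {
            // Builds a CommonsHttpSolrServer whose HttpClient carries BASIC credentials.
            public static CommonsHttpSolrServer create(String url, String user, String pass)
                    throws MalformedURLException {
                CommonsHttpSolrServer server = new CommonsHttpSolrServer(url);
                HttpClient client = server.getHttpClient();
                client.getState().setCredentials(
                    new AuthScope(AuthScope.ANY_HOST, AuthScope.ANY_PORT, AuthScope.ANY_REALM),
                    new UsernamePasswordCredentials(user, pass));
                // Send the Authorization header up front, instead of waiting for
                // the 401 challenge ("No credentials available" in the log above).
                client.getParams().setAuthenticationPreemptive(true);
                return server;
            }
        }

    In SolrWriter.open(...) the call creating the server would then become something like AuthenticatedSolr.create("http://ninja:5500/solr/foo", user, pass), with the credentials read from the Nutch configuration.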

    Read the article

< Previous Page | 46 47 48 49 50 51 52 53 54 55 56 57  | Next Page >