Search Results

Search found 15798 results on 632 pages for 'authentication required'.


  • SpaceX’s Falcon 9 Launch Success And Reusable Rockets Test Partially Successful

    - by Gopinath
    Elon Musk’s SpaceX is closing in on the dream of developing reusable rockets, and in a year or two space launch rockets will likely be reusable just like aircraft, ships and cars. Today SpaceX launched an upgraded Falcon 9 rocket into space to deliver satellites and to test its reusable rocket technology. All the satellites on board were released into orbit, and the first stage of the rocket partially succeeded in returning to Earth. This is a huge leap in space technology. A couple of years ago reusable rockets were considered impossible. NASA, the Russian space agency, China, India, or for that matter any other space agency, never even attempted to build reusable rockets. But SpaceX’s revolutionary technology partially succeeded in doing the impossible! Elon Musk founded SpaceX with the goal of building reusable rockets and transporting humans to and from other planets like Mars. He says: "If one can figure out how to effectively reuse rockets just like airplanes, the cost of access to space will be reduced by as much as a factor of a hundred. A fully reusable vehicle has never been done before. That really is the fundamental breakthrough needed to revolutionize access to space." Normally the first stage of a rocket falls back to Earth after burning out and is destroyed. But today SpaceX reignited the first stage after its separation and attempted to bring it down smoothly onto the ocean’s surface. Though it did not fully succeed, the test was partially successful and SpaceX was able to recover portions of the first stage. Rocket booster relit twice (supersonic retro & landing), but spun up due to aero torque, so fuel centrifuged & we flamed out — Elon Musk (@elonmusk) September 29, 2013 With the partial success of recovering the first stage, SpaceX gathered a huge amount of information and experience it can use to improve Falcon 9 and build a fully reusable rocket. In the post-launch press conference Musk said that if things go "super well", SpaceX could refly a Falcon 9 first stage by the end of next year.
    Falcon 9 Launch Video
    Next reusable first stage tests delayed by at least two launches
    SpaceX has a busy schedule for the next several months, with more than 50 missions scheduled using the new Falcon 9 rocket. Ten of those missions are to fly cargo to the International Space Station for NASA. SpaceX announced that it will not attempt to recover the first stage of Falcon 9 on the next two missions. The next test will be conducted on the fourth mission of Falcon 9, which is planned to carry cargo to the International Space Station sometime next year. This will give SpaceX the time required to analyze the information gathered from today’s mission and improve the first stage reentry systems.
    More reading: Here are a few interesting sources to read more about today’s SpaceX launch:
    SpaceX post mission press conference details and discussion on Reddit
    Giant Leaps for Space Firms Orbital, SpaceX
    Hacker News community discussion on SpaceX launch
    SpaceX Launches Next-Generation Private Falcon 9 Rocket on Big Test Flight

    Read the article

  • Delegate performance of Roslyn Sept 2012 CTP is impressive

    - by dotneteer
    I wanted to dynamically compile some delegates using Roslyn. I came across this article by Piotr Sowa. The article shows that the delegate compiled with the Roslyn CTP was not very fast. Since that article was written using the Roslyn June 2012 CTP, I decided to give the Sept 2012 CTP a try. There are significant changes in the Roslyn Sept 2012 CTP, in both the C# syntax supported and the API. I found Anoop Madhisidanan’s article that has an example of the new API. With that, I was able to put together a comparison. In my test, the Roslyn-compiled delegate is as fast as a C# (VS 2012) compiled delegate. See the source code below and give it a try.

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Text;
        using System.Diagnostics;
        using Roslyn.Compilers;
        using Roslyn.Scripting.CSharp;
        using Roslyn.Scripting;

        namespace RoslynTest
        {
            class Program
            {
                public Func<int, int, int> del;

                static void Main(string[] args)
                {
                    Stopwatch stopWatch = new Stopwatch();
                    Program p = new Program();
                    p.SetupDel(); //Comment out this line and uncomment the next line to compare
                    //p.SetupScript();
                    stopWatch.Start();
                    int result = DoWork(p.del);
                    stopWatch.Stop();
                    Console.WriteLine(result);
                    Console.WriteLine("Time elapsed {0}", stopWatch.ElapsedMilliseconds);
                    Console.Read();
                }

                private void SetupDel()
                {
                    del = (s, i) => ++s;
                }

                private void SetupScript()
                {
                    //Create the script engine
                    //Script engine constructor parameters got changed
                    var engine = new ScriptEngine();

                    //Let us use engine's AddReference for adding the required assemblies
                    new[]
                    {
                        typeof(Console).Assembly,
                        typeof(Program).Assembly,
                        typeof(IEnumerable<>).Assembly,
                        typeof(IQueryable).Assembly
                    }.ToList().ForEach(asm => engine.AddReference(asm));

                    new[] { "System", "System.Linq", "System.Collections", "System.Collections.Generic" }
                        .ToList().ForEach(ns => engine.ImportNamespace(ns));

                    //Now, you need to create a session using engine's CreateSession method,
                    //which can be seeded with a host object
                    var session = engine.CreateSession();
                    var submission = session.CompileSubmission<Func<int, int, int>>("new Func<int, int, int>((s, i) => ++s)");
                    del = submission.Execute();
                    //- See more at: http://www.amazedsaint.com/2012/09/roslyn-september-ctp-2012-overview-api.html#sthash.1VutrWiW.dpuf
                }

                private static int DoWork(Func<int, int, int> del)
                {
                    int result = Enumerable.Range(1, 1000000).Aggregate(del);
                    return result;
                }
            }
        }

    Since the Roslyn Sept 2012 CTP is already over a year old, I cannot wait to see a new version coming out.

    Read the article

  • Site Search Engine for 1,000 page website

    - by Ian
    I manage a website with about 1,000 articles that need to be searchable by my members. The site search engines I've tried all had their own problems: Fluid Dynamics Search Engine Since it's written in perl, it was a bit hacky to integrate with my PHP-based CMS. I basically had to file_get_contents the search results page. However, FDSE had the best search results. Google CSE Ugh, the search results SUCK. It can't find documents even using unique strings. I'm so surprised that a Google search product is this bad. Nor can I get any answers on their 'help' forums, and I am a paying user. Boo, Google. Boo. Sphider Again, bad search results. Unable to locate some phrases used in link text. Better results than Google CSE though. Shame on Google that a free PHP script has better search results than their paid application. IndexTank This one looked really promising. I got all set up with their PHP API client. But it would only randomly add articles that I submitted. Out of 700+ articles I pushed to the index through their API, only 8 made it in. Unable to find any help on this subject. Update for IndexTank -- Got the above issue fixed, so this looks most promising so far. The site itself runs on php/mysql and FreeBSD, though this shouldn't matter for a web crawling indexer. I've looked at Lucene, but I don't know anything about Java or installing Java programs on my web server. I also do not have root access on my web server, if this would be required for installation. I really don't need a lot of fancy features. It just needs to be able to crawl my web site and return great (even decent!) search results. I don't need any crazy search operators. It doesn't need to index off my primary domain. It just needs to work! Thanks, Hive Mind!

    Read the article

  • SQL SERVER – Various Leap Year Logics

    - by pinaldave
    Earlier I wrote an article on Leap Year and created a video about Leap Year. My point of view was to demonstrate how we can use SQL Server 2012 features to identify a leap year. However, the post generated some really good conversation, so here are updates for those who have missed reading the excellent comments on the blog. Incorrect Logic Many people still think a leap year is an event which happens consistently every four years, and that the way to find it is to divide the year by 4: if the remainder is 0, that year is a leap year. Well, that is not correct. Comment by David Bridge Check out this excerpt from the wikipedia page http://en.wikipedia.org/wiki/Leap_year “most years that are evenly divisible by 4 are leap years…” “…Some exceptions to this rule are required since the duration of a solar year is slightly less than 365.25 days. Years that are evenly divisible by 100 are not leap years, unless they are also evenly divisible by 400, in which case they are leap years. For example, 1600 and 2000 were leap years, but 1700, 1800 and 1900 were not. Similarly, 2100, 2200, 2300, 2500, 2600, 2700, 2900 and 3000 will not be leap years, but 2400 and 2800 will be.” If you use the divide-by-4, remainder-is-0 logic to find a leap year, you may end up with an inaccurate result. The correct way to identify a leap year is to figure out the number of days in February: if the count is 29, the year is for sure a leap year. Valid Alternate Solutions Comment by sainswor99insworth IIF((@Year%4=0 AND @Year%100 != 0) OR @Year%400=0, 1, 0) Comment by Madhivanan Madhivanan wrote a blog post about a year ago where he listed multiple ways to find a leap year. Comment by Jayan DECLARE @year INT SET @year = 2012 IF (((@year % 4 = 0) AND (@year % 100 != 0)) OR (@year % 400 = 0)) PRINT '1' ELSE PRINT '0' Comment by David DECLARE @Year INT = 2012 SELECT ISDATE('2/29/' + CAST(@Year AS CHAR(4))) Comment by David Bridge Incidentally, another approach would be to take one day off March 1st and see if it is the 29th. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL DateTime, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
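    For comparison outside T-SQL, here is a minimal C# sketch (illustrative only, not from the original post) that applies the same divisible-by-4/100/400 rule alongside the count-the-days-in-February check discussed above; the built-in DateTime.IsLeapYear is printed as a reference.

        using System;

        class LeapYearChecks
        {
            // Full Gregorian rule: divisible by 4, except century years
            // that are not also divisible by 400.
            static bool IsLeapByRule(int year)
            {
                return (year % 4 == 0 && year % 100 != 0) || (year % 400 == 0);
            }

            // The "count the days in February" approach from the article.
            static bool IsLeapByFebruary(int year)
            {
                return DateTime.DaysInMonth(year, 2) == 29;
            }

            static void Main()
            {
                int[] years = { 1900, 2000, 2012, 2100, 2400 };
                foreach (int year in years)
                {
                    Console.WriteLine("{0}: rule={1}, February={2}, BCL={3}",
                        year, IsLeapByRule(year), IsLeapByFebruary(year),
                        DateTime.IsLeapYear(year));
                }
            }
        }

    All three checks agree: 1900 and 2100 are not leap years, while 2000, 2012 and 2400 are.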

    Read the article

  • Innovation and the Role of Social Media

    - by Brian Dirking
    A very interesting post by Andy Mulholland of CAP Gemini this week – “The CIO is trapped between the CEO wanting innovation and the CFO needing compliance” – had many interesting points: “A successful move in one area won’t be recognized and rapidly implemented in other areas to multiply the benefits, or worse unsuccessful ideas will get repeated adding to the cost and time wasted. That’s where the need to really address the combination of social networking, collaboration, knowledge management and business information is required.” Without communicating what works and what doesn’t, the innovations of our organization may be lost, and the failures repeated. That makes sense. If you liked Andy Mulholland’s blog post, you need to hear Howard Beader’s presentation at Enterprise 2.0 Conference on innovation and the role of social media. (Howard will be speaking in the Market Leaders Session at 1 PM on Wednesday June 22nd). Some of the thoughts Howard will share include: • Innovation is more than just ideas, it’s getting ideas to market, and removing the obstacles that stand in the way • Innovation is about parallel processing – you can’t remove the obstacles one by one because you will get to market too late • Innovation can be about product innovation, but it can also be about process innovation This brings us to Andy’s second issue he raises: "..the need for integration with, and visibility of, processes to understand exactly how the enterprise functions and delivers on its policies…" Andy goes on to talk about this from the perspective of compliance and the CFO’s concerns. And it’s true: innovation can come both in product innovation, but also internal process innovation. And process innovation can have as much impact as product innovation.  New supply chain models can disrupt an industry overnight. Many people ignore process innovation as a benefit of social business, because it is perceived as a bottom line rather than top line impact. But it can actually impact your top line by changing your entire business model. Oracle WebCenter sits at this crossroads between product innovation and process innovation, enabling you to drive go-to-market innovations through internal social media tools, removing obstacles in parallel, and also providing you deep insight into your processes so you can identify bottlenecks and realize whole new ways of doing business. Learn more about how at the Enterprise 2.0 Conference, where Oracle will be in booth #213 showing Oracle WebCenter and Oracle Fusion Applications.

    Read the article

  • How can I troubleshoot flash player/hardware conflict?

    - by sparthikas
    OBJECTIVE: Have a web browser on my Ubuntu install that can play youtube and hulu videos. Also would like to understand problem so that I can fix it again if I change software. Workarounds welcome, technical understanding and solution preferable. SYMPTOMS: Flash does not run - cannot make the right-click menu appear, an empty box is where video should be, changes to black box when hovering over other links. The "The Adobe Flash plugin has crashed" message does not appear with its sad lego face. cannot activate proprietary graphics driver - causes system to reboot to a prompt. SOLUTIONS TRIED: Replaced OS (tried slackware 13.37, fedora 17, linuxmint 13 maya, gentoo, lubuntu, and even winXP. lubuntu confirmed to work, don't remember how much tweaking, if any, this required. Slack, fedora, mint, and gentoo all failed to run flash just like ubuntu) many reinstalls of flash player via different methods, including cleaning up old installs first, also tried gnash and lightspark. replaced graphics card (replaced HIS IceQ Radeon HD 4670 AGP with older GeForce 5700 LE no change in problem) flash does successfully work on winXP installation with Catalyst AGP hotfix driver applied, however I consider windows wholly unacceptable for web browsing due to lack of security. Lubuntu install also works, however I do not want to be tied down to just using Lubuntu on this computer. SYSINFO: Have latest versions of Ubuntu, Firefox, and Flash on fresh Ubuntu install. Using Gigabyte 7s748 motherboard with Athlon XP 2800+ and 3 GB of RAM with Radeon HD 4670 AGP card, also a Dell soundblaster live series sound card (due to malfunction of onboard sound on motherboard) Wired internet connection, Maxtor 6Y120L3 HDD, Sony DVD RW AW-Q170A, Dell M993s monitor. NOTES: I do not know if the graphics driver issue and the flash troubles are linked. Substitution of older graphics card having same flash troubles seems to suggest they aren't. My troubleshooting method is rather reductionist, consisting mainly of "replace things until you find out what was causing the error by process of elimination" only it seems that this must be a conflict which arises when software decides how to configure itself on my hardware. That is, I know the hardware can run Flash, and I know that on other systems the same software can too, but somehow the combination fails. Consequently I feel out of my depth. I will keep trying things off and on, but I have spent probably about 30 man-hours in the last 4 months working on this problem with no joy other than the lubuntu workaround. Any help will be appreciated, I will be checking back and posting updates. Any pertinent questions regarding me or my computer will be answered, outputs from config files can be accessed and posted (IDK which ones or what parts to post).

    Read the article

  • mysql completely removing

    - by Dmitry Teplyakov
    I broke my mysql and now I want to completely reinstall it. I tried: $ sudo apt-get install --reinstall mysql-server $ sudo apt-get remove --purge mysql-client mysql-server But always I see popup with proposal to change root password, I change it and got an error that I can change it.. $ sudo apt-get remove --purge mysql-client mysql-server Reading package lists... Done Building dependency tree Reading state information... Done Package mysql-client is not installed, so not removed Package mysql-server is not installed, so not removed The following packages were automatically installed and are no longer required: libmygpo-qt1 libqtscript4-network libqtscript4-gui libtag-extras1 libqtscript4-sql libqtscript4-xml amarok-utils amarok-common libqtscript4-uitools liblastfm0 libloudmouth1-0 libqtscript4-core Use 'apt-get autoremove' to remove them. 0 upgraded, 0 newly installed, 0 to remove and 0 not upgraded. 1 not fully installed or removed. After this operation, 0 B of additional disk space will be used. Setting up mysql-server-5.5 (5.5.28-0ubuntu0.12.04.2) ... 121114 19:04:03 [Note] Plugin 'FEDERATED' is disabled. 121114 19:04:03 InnoDB: The InnoDB memory heap is disabled 121114 19:04:03 InnoDB: Mutexes and rw_locks use GCC atomic builtins 121114 19:04:03 InnoDB: Compressed tables use zlib 1.2.3.4 121114 19:04:03 InnoDB: Initializing buffer pool, size = 128.0M 121114 19:04:03 InnoDB: Completed initialization of buffer pool InnoDB: Error: auto-extending data file ./ibdata1 is of a different size InnoDB: 0 pages (rounded down to MB) than specified in the .cnf file: InnoDB: initial 640 pages, max 0 (relevant if non-zero) pages! 121114 19:04:03 InnoDB: Could not open or create data files. 121114 19:04:03 InnoDB: If you tried to add new data files, and it failed here, 121114 19:04:03 InnoDB: you should now edit innodb_data_file_path in my.cnf back 121114 19:04:03 InnoDB: to what it was, and remove the new ibdata files InnoDB created 121114 19:04:03 InnoDB: in this failed attempt. InnoDB only wrote those files full of 121114 19:04:03 InnoDB: zeros, but did not yet use them in any way. But be careful: do not 121114 19:04:03 InnoDB: remove old data files which contain your precious data! 121114 19:04:03 [ERROR] Plugin 'InnoDB' init function returned error. 121114 19:04:03 [ERROR] Plugin 'InnoDB' registration as a STORAGE ENGINE failed. 121114 19:04:03 [ERROR] Unknown/unsupported storage engine: InnoDB 121114 19:04:03 [ERROR] Aborting 121114 19:04:03 [Note] /usr/sbin/mysqld: Shutdown complete start: Job failed to start invoke-rc.d: initscript mysql, action "start" failed. dpkg: error processing mysql-server-5.5 (--configure): subprocess installed post-installation script returned error exit status 1 Errors were encountered while processing: mysql-server-5.5 E: Sub-process /usr/bin/dpkg returned an error code (1) It is good for me that I have not any important databases, but..

    Read the article

  • Windows Azure Use Case: High-Performance Computing (HPC)

    - by BuckWoody
    This is one in a series of posts on when and where to use a distributed architecture design in your organization's computing needs. You can find the main post here: http://blogs.msdn.com/b/buckwoody/archive/2011/01/18/windows-azure-and-sql-azure-use-cases.aspx  Description: High-Performance Computing (also called Technical Computing) at its most simplistic is a layout of computer workloads where a “head node” accepts work requests, and parses them out to “worker nodes'”. This is useful in cases such as scientific simulations, drug research, MatLab work and where other large compute loads are required. It’s not the immediate-result type computing many are used to; instead, a “job” or group of work requests is sent to a cluster of computers and the worker nodes work on individual parts of the calculations and return the work to the scheduler or head node for the requestor in a batch-request fashion. This is typical to the way that many mainframe computing use-cases work. You can use commodity-based computers to create an HPC Cluster, such as the Linux application called Beowulf, and Microsoft has a server product for HPC using standard computers, called the Windows Compute Cluster that you can read more about here. The issue with HPC (from any vendor) that some organization have is the amount of compute nodes they need. Having too many results in excess infrastructure, including computers, buildings, storage, heat and so on. Having too few means that the work is slower, and takes longer to return a result to the calling application. Unless there is a consistent level of work requested, predicting the number of nodes is problematic. Implementation: Recently, Microsoft announced an internal partnership between the HPC group (Now called the Technical Computing Group) and Windows Azure. You now have two options for implementing an HPC environment using Windows. You can extend the current infrastructure you have for HPC by adding in Compute Nodes in Windows Azure, using a “Broker Node”.  You can then purchase time for adding machines, and then stop paying for them when the work is completed. This is a common pattern in groups that have a constant need for HPC, but need to “burst” that load count under certain conditions. The second option is to install only a Head Node and a Broker Node onsite, and host all Compute Nodes in Windows Azure. This is often the pattern for organizations that need HPC on a scheduled and periodic basis, such as financial analysis or actuarial table calculations. References: Blog entry on Hybrid HPC with Windows Azure: http://blogs.msdn.com/b/ignitionshowcase/archive/2010/12/13/high-performance-computing-on-premise-and-in-the-windows-azure-cloud.aspx  Links for further research on HPC, includes Windows Azure information: http://blogs.msdn.com/b/ncdevguy/archive/2011/02/16/handy-links-for-hpc-and-azure.aspx 
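    As a purely illustrative sketch (not the Windows HPC Pack or Windows Azure API), the head-node/worker-node pattern described above can be mimicked in a few lines of C#: one coordinator splits a job into independent pieces, farms them out to workers, and aggregates the batch of results once all workers report back.

        using System;
        using System.Collections.Generic;
        using System.Linq;
        using System.Threading.Tasks;

        class HeadNodeSketch
        {
            // A hypothetical work item: compute a partial result over one slice of the data.
            static double Worker(IEnumerable<double> slice)
            {
                return slice.Sum(x => Math.Sqrt(x));
            }

            static void Main()
            {
                // The "head node" splits one large job into independent pieces...
                double[] data = Enumerable.Range(1, 1000000).Select(i => (double)i).ToArray();
                int workers = 4;
                int chunk = data.Length / workers;

                // ...dispatches them to "worker nodes" (stand-ins: local tasks)...
                Task<double>[] tasks = Enumerable.Range(0, workers)
                    .Select(w => Task.Run(() => Worker(data.Skip(w * chunk).Take(chunk))))
                    .ToArray();

                // ...and returns the aggregated batch of results to the requestor.
                Task.WaitAll(tasks);
                Console.WriteLine("Total: {0}", tasks.Sum(t => t.Result));
            }
        }

    In a real cluster the dispatch step goes through the scheduler on the head node (or a broker node, in the hybrid Azure setup described above), and each piece runs on a separate machine rather than a local task.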

    Read the article

  • Three Ways to Get Started with MySQL Training

    - by Antoinette O'Sullivan
    Here is your chance to learn how this powerful relational database management system can make your life easier and more fun! This class covers all the basics and will get you on your way with a solid foundation. This instructor-led, hands-on class covers the fundamentals of SQL and relational databases, using MySQL[tm] as a teaching tool. You can take this 4-day instructor-led class in any of the following three ways: Training-On-Demand: See what Ben Krug, MySQL Support Engineer, has to say about his experience taking the MySQL for Beginners TOD. With this streaming video delivery, you can get started on the MySQL for Beginners course within 24 hrs of purchase and follow the course at your own pace. Live-Virtual-Class: Take this class from your own desk - no travel required. There is a wide range of events on the schedule with delivery in English and German. In-Class: Travel to an education center to follow this class. Below is a sample of events on the schedule:

    Location                  | Date             | Delivery Language
    Mechelen, Belgium         | 14 January 2013  | English
    London, England           | 3 December 2012  | English
    Hamburg, Germany          | 3 December 2012  | German
    Budapest, Hungary         | 5 February 2013  | Hungarian
    Riga, Latvia              | 18 February 2013 | Latvian
    Amsterdam, Netherlands    | 10 December 2012 | Dutch
    Nieuwegein, Netherlands   | 18 February 2013 | Dutch
    Warsaw, Poland            | 26 November 2012 | Polish
    Lisbon, Portugal          | 25 March 2013    | European Portuguese
    Porto, Portugal           | 25 March 2013    | European Portuguese
    Barcelona, Spain          | 11 February 2013 | Spanish
    Madrid, Spain             | 8 January 2013   | Spanish
    Nairobi, Kenya            | 14 January 2013  | English
    Cape Town, South Africa   | 22 July 2013     | English
    Pretoria, South Africa    | 22 April 2013    | English
    Ottawa, Canada            | 17 December 2012 | English
    Toronto, Canada           | 17 December 2012 | English
    Montreal, Canada          | 17 December 2012 | English

    For more information on the Authentic MySQL Curriculum or to register your interest in an additional event, go to http://oracle.com/education/mysql. Note, many organizations deploy both Oracle Database and MySQL side by side to serve different needs, and as a database professional you can find training courses on both topics at Oracle University! Check out the upcoming Oracle Database training courses and MySQL training courses. Even if you're only managing Oracle Databases at this point in time, getting familiar with MySQL will broaden your career path with growing job demand.

    Read the article

  • Open World Day 1 Continued

    - by Antony Reynolds
    A Day in the Life of an Oracle OpenWorld Attendee Part II A couple of things I forgot to mention about yesterdays OpenWorld. First I attended a presentation on SOA Suite and Virtualization which explained how Oracle Virtual Assembly Builder (OVAB) can be used to accelerate the deployment of an Enterprise Deployment Guide (EDG) compliant SOA Suite infrastructure.  OVAB provides the ability to introspect a deployed software component such as WebLogic Server, SOA Suite or other components and extract the configuration and package it up for rapid deployment into an Oracle Virtual Machine.  OVAB allows multiple machines to be configured and connections made between the machines and outside resources such as databases.  That by itself is pretty cool and has been available for a while in OVAB.  What is new is that Oracle has done this for an EDG compliant installations and made it available as an OVAB assembly for customers to use, significantly accelerating the deployment of an EDG deployment.  A real help for customers standing up EDG environments, particularly in test, dev and QA environments. The other thing I forgot to mention was the most memorable demo I saw at OpenWorld.  This was done by my co-author Matt Wright who was showcasing the products of his company Rubicon Red.  They showed a really cool application called OneSpot which puts all the information about a single users business processes in one spot!  Apparently a customer suggested the name.  It allows business flows to be defined that map onto events.  As events occur the status of the business flow is updated to reflect the change.  The interface is strongly reminiscent of social media sites and provides a graphical view of business flows.  So how does this differ from BPEL and BPM process flows?  The OneSpot process flow is more like a BAM process flow, it is based on events arriving from multiple sources, and is focused on the clients view of the process, not the actual business process.  This is important because it allows an end user to get a view of where his current business flow is and what actions, if any, are required of him.  This by itself is great, but better still is that OneSpot has a real time updating view of events that have occurred (BAM style no need to refresh the browser).  This means that as new events occur the end user can see them and jump to the business flow or take other appropriate actions.  Under the covers OneSpot makes use of Oracle Human Workflow to provide a forms interface, but this is not the HWF GUI you know!  The HWF GUI screens are much prettier and have more of a social media feel about them due to their use of images and pulling in relevant related information.  If you are at OOW I strongly recommend you visit Matt or John at the Rubicon Red stand and ask, no demand a demo of OneSpot!

    Read the article

  • Antenna Aligner Part 10: Updates and emails…

    - by Chris George
    Since my last post back in July, I’ve not done huge amounts of work on my app for two reasons. Firstly, no time! Secondly, I wanted to leave it out in the wild for a while and see what happened. Well, what happened?  over 1,300 users, that’s what’s happened!  This uptake is beyond my wildest expectations, and apart from a couple of issues that I’ll mention in a minute, most of the feedback has been very positive indeed! I’ve had several emails giving me feedback and reporting issues, all of which I have made a point of replying to immediately. This act alone has met with favourable replies! One of the main issues was with iPad. So it turns out that my app is only accurate in portrait mode. Turning it into landscape will offset the direction by +-90degrees! Whoops! I think I’ve fixed this by disabling the orientation switching, but I have not yet had an iPad to test this on. I had several emails from iPod Touch users claiming the app did not work for them. Specifically, the compass view did not work. On investigation, it turns out that the iPod Touch does not have the compass hardware required to do this. Unfortunately there is no way to exclude iPod Touch’s from the list of supported devices, so I’ve just had to make it very clear in the itunes description that the device is not fully supported.  You can still get the list of transmitters, but you then have to use a real compass to get the bearing. But that’s not the end of the world. Several customers have requested the aerial polarisation to be displayed in the app. I was already working on this, and the data was already there, it was just a case of displaying this in the UI. I have a solution now, and this will be in the next release. Of course, with the Digital switchover in full swing across the UK, there have been one set of data updates (in 1.0.3), and another is due shortly. This reflects the transmitters as they switch over the digital fully and their power output increased. So all in all I’m very pleased with the feedback I’ve had, and I’m looking to get the next release out there by early December (allowing for the 2-3 week Apple approval lag!)  

    Read the article

  • October 2012 Critical Patch Update and Critical Patch Update for Java SE Released

    - by Eric P. Maurice
    Hi, this is Eric Maurice. Oracle has just released the October 2012 Critical Patch Update and the October 2012 Critical Patch Update for Java SE.  As a reminder, the release of security patches for Java SE continues to be on a different schedule than for other Oracle products due to commitments made to customers prior to the Oracle acquisition of Sun Microsystems.  We do however expect to ultimately bring Java SE in line with the regular Critical Patch Update schedule, thus increasing the frequency of scheduled security releases for Java SE to 4 times a year (as opposed to the current 3 yearly releases).  The schedules for the “normal” Critical Patch Update and the Critical Patch Update for Java SE are posted online on the Critical Patch Updates and Security Alerts page. The October 2012 Critical Patch Update provides a total of 109 new security fixes across a number of product families including: Oracle Database Server, Oracle Fusion Middleware, Oracle E-Business Suite, Supply Chain Products Suite, Oracle PeopleSoft Enterprise, Oracle Customer Relationship Management (CRM), Oracle Industry Applications, Oracle FLEXCUBE, Oracle Sun products suite, Oracle Linux and Virtualization, and Oracle MySQL. Out of these 109 new vulnerabilities, 5 affect Oracle Database Server.  The most severe of these Database vulnerabilities has received a CVSS Base Score of 10.0 on Windows platforms and 7.5 on Linux and Unix platforms.  This vulnerability (CVE-2012-3137) is related to the “Cryptographic flaws in Oracle Database authentication protocol” disclosed at the Ekoparty Conference.  Because of timing considerations (proximity to the release date of the October 2012 Critical Patch Update) and the need to extensively test the fixes for this vulnerability to ensure compatibility across the products stack, the fixes for this vulnerability were not released through a Security Alert, but instead mitigation instructions were provided prior to the release of the fixes in this Critical Patch Update in My Oracle Support Note 1492721.1.  Because of the severity of these vulnerabilities, Oracle recommends that this Critical Patch Update be installed as soon as possible. Another 26 vulnerabilities fixed in this Critical Patch Update affect Oracle Fusion Middleware.  The most severe of these Fusion Middleware vulnerabilities has received a CVSS Base Score of 10.0; it affects Oracle JRockit and is related to Java vulnerabilities fixed in the Critical Patch Update for Java SE.  The Oracle Sun products suite gets 18 new security fixes with this Critical Patch Update.  Note also that Oracle MySQL has received 14 new security fixes; the most severe of these MySQL vulnerabilities has received a CVSS Base Score of 9.0. Today’s Critical Patch Update for Java SE provides 30 new security fixes.  The most severe CVSS Base Score for these Java SE vulnerabilities is 10.0 and this score affects 10 vulnerabilities.  As usual, Oracle reports the most severe CVSS Base Score, and these CVSS 10.0s assume that the user running a Java Applet or Java Web Start application has administrator privileges (as is typical on Windows XP). However, when the user does not run with administrator privileges (as is typical on Solaris and Linux), the corresponding CVSS impact scores for Confidentiality, Integrity, and Availability are "Partial" instead of "Complete", typically lowering the CVSS Base Score to 7.5 denoting that the compromise does not extend to the underlying Operating System.  
Also, as is typical in the Critical Patch Update for Java SE, most of the vulnerabilities affect Java and Java FX client deployments only.  Only 2 of the Java SE vulnerabilities fixed in this Critical Patch Update affect client and server deployments of Java SE, and only one affects server deployments of JSSE.  This reflects the fact that Java running on servers operate in a more secure and controlled environment.  As discussed during a number of sessions at JavaOne, Oracle is considering security enhancements for Java in desktop and browser environments.  Finally, note that the Critical Patch Update for Java SE is cumulative, in other words it includes all previously released security fixes, including the fix provided through Security Alert CVE-2012-4681, which was released on August 30, 2012. For More Information: The October 2012 Critical Patch Update advisory is located at http://www.oracle.com/technetwork/topics/security/cpuoct2012-1515893.html The October 2012 Critical Patch Update for Java SE advisory is located at http://www.oracle.com/technetwork/topics/security/javacpuoct2012-1515924.html.  An online video about the importance of keeping up with Java releases and the use of the Java auto update is located at http://medianetwork.oracle.com/video/player/1218969104001 More information about Oracle Software Security Assurance is located at http://www.oracle.com/us/support/assurance/index.html  

    Read the article

  • Change default language settings in Visual Studio 2012

    - by sreejukg
    The first thing you need to do after installing Visual Studio 2012 is to choose the IDE preferences. Once you select your preferred collection of settings, the IDE will always choose dialogs and other options according to your selection. Nowadays developers need to work with different programming environments, and because of this they might need to reset the default settings. In this article, I am going to demonstrate how you can change the default settings in Visual Studio 2012. For the purpose of this demonstration, I have installed Visual Studio 2012 and selected C++ as my default environment settings. So now when I go to File -> New Project, it gives me C++ templates by default, as follows. If you want to select another language, you need to expand the Other Languages section and select C# or VB. Now I am going to change these default settings; I am going to change the default language preference to C#. In Visual Studio 2012, go to the Tools menu and select Import and Export Settings. Here you have 3 options: export the current settings so that they are saved for future use, import previously saved settings, or reset the settings to the defaults. It is a good idea to export your settings so you can import them as needed at a later stage. To reset the settings to default, select the Reset option and click Next. Now Visual Studio will ask whether you would like to save the current settings, which can be used later to restore them. Select either option and click Next; for the purpose of this demo, I have chosen not to save the settings. Click the Next button to continue. Now Visual Studio will bring up the same dialog that appears just after installation, where you select your IDE settings. Select the required settings from the available list and click the Finish button. If everything is OK, you will see the success message as below. Now go to File -> New Project and you will see the selected language appear by default. I selected C# in the previous step, and the New Project dialog appears as follows. Changing IDE settings in Visual Studio 2012 is very easy and straightforward.

    Read the article

  • Bind a Wijmo Grid to Salesforce.com Through the Salesforce OData Connector

    - by dataintegration
    This article will explain how to connect to any RSSBus OData Connector with Wijmo's data grid using JSONP. While the example will use the Salesforce Connector, the same process can be followed for any of the RSSBus OData Connectors. Step 1: Download and install both the Salesforce Connector from RSSBus and the Wijmo javascript library. Step 2: Next you will want to configure the Salesforce Connector to connect with your Salesforce account. If you browse to the Help tab in the Salesforce Connector application, there is a link to the Getting Started Guide which will walk you through setting up the Salesforce Connector. Step 3: Once you have successfully configured the Salesforce Connector application, you will want to open a Wijmo sample grid file to edit. This example will use the overview.html grid found in the Samples folder. Step 4: First, we will wrap the jQuery document ready function in a callback function for the JSONP service. In this example, we will wrap this in function called fnCallback which will take a single object args. <script id="scriptInit" type="text/javascript"> function fnCallback(args) { $(document).ready(function () { $("#demo").wijgrid({ ... }); }); }; </script> Step 5: Next, we need to format the columns object in a format that Wijmo's data grid expects. This is done by adding the headerText: element for each column. <script id="scriptInit" type="text/javascript"> function fnCallback(args) { var columns = []; for (var i = 0; i < args.columnnames.length; i++){ var col = { headerText: args.columnnames[i]}; columns.push(col); } $(document).ready(function () { $("#demo").wijgrid({ ... }); }); }; </script> Step 6: Now the wijgrid parameters are ready to be set. In this example, we will set the data input parameter to the args.data object and the columns input parameter to our newly created columns object. The resulting javascript function should look like this: <script id="scriptInit" type="text/javascript"> function fnCallback(args) { var columns = []; for (var i = 0; i < args.columnnames.length; i++){ var col = { headerText: args.columnnames[i]}; columns.push(col); } $(document).ready(function () { $("#demo").wijgrid({ allowSorting: true, allowPaging: true, pageSize: 10, data: args.data, columns: columns }); }); }; </script> Step 7: Finally, we need to add the JSONP reference to our Salesforce Connector's data table. You can find this by clicking on the Settings tab of the Salesforce Connector. Once you have found the JSONP URL, you will need to supply a valid table name that you want to connect with Wijmo. In this example, we will connect to the Lead table. You will also need to add authentication options in this step. In the example we will append the authtoken of the user who has access to the Salesforce Connector using the @authtoken query string parameter. IMPORTANT: This is not secure and will expose the authtoken of the user whose authtoken you supply in this step. There are other ways to secure the user's authtoken, but this example uses a query string parameter for simplicity. <script src="http://localhost:8181/sfconnector/data/conn/Lead.rsd?@jsonp=fnCallback&sql:query=SELECT%20*%20FROM%20Lead&@authtoken=<myAuthToken>" type="text/javascript"></script> Step 8: Now, we are done. If you point your browser to the URL of the sample, you should see your Salesforce.com leads in a Wijmo data grid.

    Read the article

  • Unable to either locate any wireless networks nor even connect to wifi

    - by Leo Chan
    I'm new to Linux. I currently have installed ubuntu 12.10. I had a previous problem with my wireless card (see url to see previous problem : How to enable wireless in a Fujitsu LH532?). It now shows Connect to hidden network and create new wireless network but now unfortunately it simply cannot find any wireless connections. I did have a very thorough look around about this problem such as wait a little longer since sometimes it cannot load all the wireless connections available that quickly. My wifi is a hidden network and I have used the connect to hidden network feature but it keeps asking for my wep key which has been checked 4 times (I counted) and it still seems to not work; It keeps asking for the WEP key. I did try both WEP 40/128-bit key and WPA & WPA2 since previously on my windows it worked; My family later decided to use WEP. I only have a quick fix using a usb wireless stick and I wish to have a more solid fix. Thanks Results from sudo iwlist wlan0 scan wlan0 Scan completed : Cell 01 - Address: 00:1E:73:C8:62:BD Channel:6 Frequency:2.437 GHz (Channel 6) Quality=25/70 Signal level=-85 dBm Encryption key:on ESSID:"EnigmaHome" Bit Rates:1 Mb/s; 2 Mb/s; 5.5 Mb/s; 11 Mb/s Bit Rates:6 Mb/s; 9 Mb/s; 12 Mb/s; 18 Mb/s; 24 Mb/s 36 Mb/s; 48 Mb/s; 54 Mb/s Mode:Master Extra:tsf=000000cb3bb10a5c Extra: Last beacon: 696ms ago IE: Unknown: 000A456E69676D61486F6D65 IE: Unknown: 010482848B96 IE: Unknown: 030106 IE: Unknown: 0706484B20010B1E IE: Unknown: 2A0107 IE: Unknown: 32080C1218243048606C IE: Unknown: DD180050F2020101000003A4000027A4000042435E0062322F00 Cell 02 - Address: C8:3A:35:34:C1:60 Channel:6 Frequency:2.437 GHz (Channel 6) Quality=22/70 Signal level=-88 dBm Encryption key:on ESSID:"Tenda" Bit Rates:1 Mb/s; 2 Mb/s; 5.5 Mb/s; 11 Mb/s; 9 Mb/s 18 Mb/s; 36 Mb/s; 54 Mb/s Bit Rates:6 Mb/s; 12 Mb/s; 24 Mb/s; 48 Mb/s Mode:Master Extra:tsf=000001336e70ffdd Extra: Last beacon: 716ms ago IE: Unknown: 000554656E6461 IE: Unknown: 010882848B961224486C IE: Unknown: 030106 IE: Unknown: 32040C183060 IE: Unknown: 0706434E20010D10 IE: Unknown: 33082001020304050607 IE: Unknown: 33082105060708090A0B IE: Unknown: DD270050F204104A0001101044000101104700102880288028801880A880C83A3534C160103C000101 IE: Unknown: 050400010000 IE: Unknown: 2A0106 IE: Unknown: 2D1AEC0117FFFF0000000000000000000000000000000C0000000000 IE: Unknown: 3D1606000500000000000000000000000000000000000000 IE: Unknown: 7F0101 IE: IEEE 802.11i/WPA2 Version 1 Group Cipher : CCMP Pairwise Ciphers (1) : CCMP Authentication Suites (1) : PSK Preauthentication Supported IE: Unknown: DD180050F2020101000003A4000027A4000042435E0062322F00 IE: Unknown: 0B05010089127A IE: Unknown: DD1E00904C33EC0117FFFF0000000000000000000000000000000C0000000000 IE: Unknown: DD1A00904C3406000500000000000000000000000000000000000000 IE: Unknown: DD07000C4304000000 Cell 03 - Address: 00:1E:73:C8:62:BF Channel:6 Frequency:2.437 GHz (Channel 6) Quality=47/70 Signal level=-63 dBm Encryption key:on ESSID:"\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00\x00" Bit Rates:1 Mb/s; 2 Mb/s; 5.5 Mb/s; 11 Mb/s Bit Rates:6 Mb/s; 9 Mb/s; 12 Mb/s; 18 Mb/s; 24 Mb/s 36 Mb/s; 48 Mb/s; 54 Mb/s Mode:Master Extra:tsf=000000cb3bac614e Extra: Last beacon: 1064ms ago IE: Unknown: 00110000000000000000000000000000000000 IE: Unknown: 010482848B96 IE: Unknown: 030106 IE: Unknown: 050C010200000000000000000000 IE: Unknown: 0706484B20010B1E IE: Unknown: 2A0107 IE: Unknown: 32080C1218243048606C IE: Unknown: DD070050F202000100

    Read the article

  • ASP.NET 4.0- Menu control enhancement.

    - by Jalpesh P. Vadgama
    Up to ASP.NET 3.5 the ASP.NET menu control was rendered as a table, and we all know that it is very hard to apply CSS to a table. For a professional look on our websites, CSS is a must. In ASP.NET 4.0, however, the Menu control is table-less: it is rendered with UL and LI tags, which are much easier to manage through CSS. Another problem with tables is that they produce a lot of HTML, which increases your page size in KB and decreases performance, while the UL and LI markup is much shorter, so your page size goes down as well. Let's take a simple example and create a menu control in ASP.NET with four menu items like the following.

    <asp:Menu ID="myCustomMenu" runat="server">
        <Items>
            <asp:MenuItem Text="Menu1" Value="Menu1"></asp:MenuItem>
            <asp:MenuItem Text="Menu2" Value="Menu2"></asp:MenuItem>
            <asp:MenuItem Text="Menu3" Value="Menu3"></asp:MenuItem>
            <asp:MenuItem Text="Menu4" Value="Menu4"></asp:MenuItem>
        </Items>
    </asp:Menu>

    It will render the menu in the browser like the following. If we render this menu control with tables, the HTML you can see via View Page Source looks like the following. In ASP.NET 4.0 it is rendered with UL and LI tags instead, and if you now view the page source there is much less HTML than before. So isn't that a great performance enhancement? It's very cool. If you still like the old way of doing it with tables, ASP.NET 4.0 provides a property called 'RenderingMode': set RenderingMode="Table" and the menu control will be rendered with a table; otherwise it is rendered with UL and LI tags (see the code-behind sketch below). That's it. Stay tuned for more. Happy programming. Technorati Tags: Menu,Asp.NET 4.0
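    To make the rendering choice explicit, here is a small code-behind sketch. It assumes the asp:Menu markup shown above (ID myCustomMenu, declared in the .aspx page, so the control field comes from the designer file); the same switch can be made declaratively with RenderingMode="Table" on the control itself.

        using System;
        using System.Web.UI;
        using System.Web.UI.WebControls;

        public partial class MenuDemo : Page
        {
            protected void Page_Load(object sender, EventArgs e)
            {
                // ASP.NET 4.0 renders the menu as a ul/li list by default.
                // Switching RenderingMode back to Table restores the pre-4.0
                // markup, e.g. for legacy CSS that targets the table layout.
                myCustomMenu.RenderingMode = MenuRenderingMode.Table;

                // MenuRenderingMode.List keeps the new, lighter ul/li output.
                // myCustomMenu.RenderingMode = MenuRenderingMode.List;
            }
        }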

    Read the article

  • JCP EC Nominations and Meet the Candidates Call

    - by heathervc
    The Nominations period for the 2012 JCP EC Elections closes tomorrow, 11 October at midnight pacific time.  Eligible JCP Members (all current JSPA 2 signers) may nominate themselves.  You will need your Elections credentials to complete the nomination, which were sent to the primary contacts of all eligible JCP Members via email last week. This year all ratified (there are 4 proposed ratified candidates) and elected (there are 7 candidates so far) will appear on one ballot; the top 2 candidates will win elected seats. This year, the selected EC Members will serve a single year term.  Following the 2012 Elections, there will be one merged EC (approved through JSR 355), and a new JCP version, JCP 2.9 will be in effect.  In 2013, all EC members will stand for election to complete the merge process described in the JCP 2.9 process document. All of the candidates' nominations materials are now available. The ratified candidates are:  Cinterion, Credit Suisse, Fujitsu and HP.The elected candidates are:  Cisco Systems, CloudBees, Giuseppe Dell'Abate, London Java Community, MoroccoJUG, Software AG, and Zero Turnaround. Next week, 18 October, we will hold an open teleconference for the Java Community to meet the candidates and ask questions regarding their nomination.  We hope you will be able to participate in the call.  Should the time be inconvenient, a recording will be made available for download, and candidate questions may be posted on this blog entry or sent to [email protected]. Topic: Meet the EC Candidates Date: Thursday, October 18, 2012 Time: 9:30 am, Pacific Daylight Time (San Francisco, GMT-07:00) Meeting Number: 807 818 225 Meeting Password: MeetEC ------------------------------------------------------- To join the online meeting (Now from mobile devices) ------------------------------------------------------- 1. Go to https://jcp.webex.com/jcp/j.php?ED=186721592&UID=0&PW=NMmUzNjY5ZTMw&RT=MiM0 2. If requested, enter your name and email address. 3. If a password is required, enter the meeting password: MeetEC 4. Click "Join". To view in other time zones or languages, please click the link: https://jcp.webex.com/jcp/j.php?ED=186721592&UID=0&PW=NMmUzNjY5ZTMw&ORT=MiM0 ------------------------------------------------------- To join the audio conference only -------------------------------------------------------     +1 (866) 682-4770     Outside the US: global access numbers  https://www.intercallonline.com/portlets/scheduling/viewNumbers/listNumbersByCode.do?confCode=6279803 or +1 (408) 774-4073     Conference code: 9454597     Security code: JCPEC (52732)------------------------------------------------------- For assistance ------------------------------------------------------- 1. Go to https://jcp.webex.com/jcp/mc 2. On the left navigation bar, click "Support".

    Read the article

  • Windows Azure Use Case: New Development

    - by BuckWoody
    This is one in a series of posts on when and where to use a distributed architecture design in your organization's computing needs. You can find the main post here: http://blogs.msdn.com/b/buckwoody/archive/2011/01/18/windows-azure-and-sql-azure-use-cases.aspx Description: Computing platforms evolve over time. Originally computers were directed by hardware wiring - that is, the “code” was the path of the wiring that directed an electrical signal from one component to another, or in some cases a physical switch controlled the path. From there software was developed, first in a very low-level machine language; then, when compilers were created, computer languages could more closely mimic written statements. These language statements can be compiled into the lower-level machine language still used by computers today. Microprocessors replaced logic circuits, sometimes with fewer instructions (Reduced Instruction Set Computing, RISC) and sometimes with more instructions (Complex Instruction Set Computing, CISC). The reason this history is important is that with each technology advancement, computer code has adapted. Writing software for a RISC architecture is significantly different from developing for a CISC architecture. And moving to a Distributed Architecture like Windows Azure also has specific implementation details that our code must follow. But why make a change? As I’ve described, we need to make the change to our code to follow advances in technology. There’s no point in change for its own sake, but as a new paradigm offers benefits to our users, it’s important for us to leverage those benefits where it makes sense. That’s most often done in new development projects. It’s a far simpler task to take a new project and adapt it to Windows Azure than to try and retrofit older code designed in a previous computing environment. We can still use the same coding languages (.NET, Java, C++) to write code for Windows Azure, but we need to think about the architecture of that code on a new project so that it runs in the most efficient, cost-effective way in a Distributed Architecture. As we receive new requests from the organization for new projects, a distributed architecture paradigm belongs in the decision matrix for the platform target. Implementation: When you are designing new applications for Windows Azure (or any distributed architecture) there are many important details to consider. But at the risk of over-simplification, there are three main concepts to learn and architect within the new code: Stateless Programming - Stateless programming is a prime concept within distributed architectures. Rather than each server owning the complete processing cycle, the information from an operation that needs to be retained (the “state”) should be persisted to another location (like storage) common to all machines involved in the process. An interesting learning path for Stateless Programming (although not unique to this style) is to learn Functional Programming. Server-Side Processing - Along with developing using a Stateless Design, the closer you can locate the code processing to the data, the less expensive and faster the code will run. When you control the network layer, this is less important, since you can send vast amounts of data between the server and client, allowing the client to perform processing. In a distributed architecture, you don’t always own the network, so its performance is unpredictable.
    Also, you may not be able to control the platform the user is on (such as a smartphone, PC or tablet), so it’s imperative to deliver only results and graphical elements where possible. Token-Based Authentication - Also called “Claims-Based Authorization”, this code practice means that instead of allowing a user to log on once and then running code in that context, a more granular level of security is used. A “token” or “claim”, often represented as a Certificate, is sent along for a series of requests or even a single request. In other words, every call to the code is authenticated against the token, rather than allowing a user free rein within the code call. While this is more work initially, it can bring a greater level of security, and it is far more resilient to disconnections. Resources: See the references of “Nondistributed Deployment” and “Distributed Deployment” at the top of this article for more information with graphics: http://msdn.microsoft.com/en-us/library/ee658120.aspx Stack Overflow has a good thread on functional programming: http://stackoverflow.com/questions/844536/advantages-of-stateless-programming Another good discussion on Stack Overflow on server-side processing is here: http://stackoverflow.com/questions/3064018/client-side-or-server-side-processing Claims Based Authorization is described here: http://msdn.microsoft.com/en-us/magazine/ee335707.aspx
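    A tiny C# sketch of the stateless idea follows. The IStateStore interface and its in-memory implementation are hypothetical stand-ins for a durable shared store (such as Azure Table or Blob storage), not SDK types; the point is that the handler persists its working state after every call instead of holding it in instance fields, so any node behind the load balancer can service the next request.

        using System.Collections.Concurrent;

        // Hypothetical abstraction over shared, durable storage.
        interface IStateStore
        {
            void Save(string key, int value);
            int Load(string key);
        }

        // In-memory stand-in used only to make the sketch runnable;
        // in Windows Azure this would be backed by Table or Blob storage.
        class InMemoryStateStore : IStateStore
        {
            private readonly ConcurrentDictionary<string, int> _data =
                new ConcurrentDictionary<string, int>();

            public void Save(string key, int value) { _data[key] = value; }

            public int Load(string key)
            {
                int value;
                _data.TryGetValue(key, out value);
                return value; // zero if nothing stored yet
            }
        }

        class OrderCounter
        {
            private readonly IStateStore _store;
            public OrderCounter(IStateStore store) { _store = store; }

            // Stateless handler: nothing is kept on this instance between calls,
            // so any node can pick up the next request for the same customer.
            public int RecordOrder(string customerId)
            {
                int count = _store.Load(customerId) + 1;
                _store.Save(customerId, count);
                return count;
            }
        }

        class Demo
        {
            static void Main()
            {
                var counter = new OrderCounter(new InMemoryStateStore());
                System.Console.WriteLine(counter.RecordOrder("customer-42")); // 1
                System.Console.WriteLine(counter.RecordOrder("customer-42")); // 2
            }
        }

    The same shape applies to token-based calls: because each request carries its own claim and finds its state in shared storage, a dropped connection or a different server handling the retry does not lose anything.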

    Read the article

  • Integration Patterns with Azure Service Bus Relay, Part 3.5: Node.js relay

    - by Elton Stoneman
    This is an extension to Part 3 in the IPASBR series, see also: Integration Patterns with Azure Service Bus Relay, Part 1: Exposing the on-premise service Integration Patterns with Azure Service Bus Relay, Part 2: Anonymous full-trust .NET consumer Integration Patterns with Azure Service Bus Relay, Part 3: Anonymous partial-trust consumer In Part 3 I said “there isn't actually a .NET requirement here”, and this post just follows up on that statement. In Part 3 we had an ASP.NET MVC Website making a REST call to an Azure Service Bus service; to show that the REST stuff is really interoperable, in this version we use Node.js to make the secure service call. The code is on GitHub here: IPASBR Part 3.5. The sample code is simpler than Part 3 - rather than code up a UI in Node.js, the sample just relays the REST service call out to Azure. The steps are the same as Part 3: REST call to ACS with the service identity credentials, which returns an SWT; REST call to Azure Service Bus Relay, presenting the SWT; request gets relayed to the on-premise service. In Node.js the authentication step looks like this: var options = { host: acs.namespace() + '-sb.accesscontrol.windows.net', path: '/WRAPv0.9/', method: 'POST' }; var values = { wrap_name: acs.issuerName(), wrap_password: acs.issuerSecret(), wrap_scope: 'http://' + acs.namespace() + '.servicebus.windows.net/' }; var req = https.request(options, function (res) { console.log("statusCode: ", res.statusCode); console.log("headers: ", res.headers); res.on('data', function (d) { var token = qs.parse(d.toString('utf8')); callback(token.wrap_access_token); }); }); req.write(qs.stringify(values)); req.end(); Once we have the token, we can wrap it up into an Authorization header and pass it to the Service Bus call: token = 'WRAP access_token=\"' + swt + '\"'; //... var reqHeaders = { Authorization: token }; var options = { host: acs.namespace() + '.servicebus.windows.net', path: '/rest/reverse?string=' + requestUrl.query.string, headers: reqHeaders }; var req = https.request(options, function (res) { console.log("statusCode: ", res.statusCode); console.log("headers: ", res.headers); response.writeHead(res.statusCode, res.headers); res.on('data', function (d) { var reversed = d.toString('utf8') console.log('svc returned: ' + d.toString('utf8')); response.end(reversed); }); }); req.end(); Running the sample Usual routine to add your own Azure details into Solution Items\AzureConnectionDetails.xml and “Run Custom Tool” on the .tt files. Build and you should be able to navigate to the on-premise service at http://localhost/Sixeyed.Ipasbr.Services/FormatService.svc/rest/reverse?string=abc123 and get a string response, going to the service direct. Install Node.js (v0.8.14 at time of writing), run FormatServiceRelay.cmd, navigate to http://localhost:8013/reverse?string=abc123, and you should get exactly the same response but through Node.js, via Azure Service Bus Relay to your on-premise service. The console logs the WRAP token returned from ACS and the response from Azure Service Bus Relay which it forwards:

    Read the article

  • System.Threading.ThreadAbortException executing WCF service

    - by SURESH GIRIRAJAN
    In one of our prod servers we recently ran into an issue when we updated the web.config and tried to browse the service: the service stopped responding and we started getting the following warning in the application log. Our service is a WCF service, a BizTalk orchestration exposed as a service. We have other prod servers where we never ran into this issue, so what's different about this server? After going through a lot of forums I came upon a Microsoft service pack and hotfix related to FCN, but I didn't want to apply a patch on just this server, because then we would need to do it on all the other servers too. The solution turned out to be simple: I dropped the existing website, created a new site with a different name and the updated web.config, and browsed the service. Then I dropped that site, recreated the original website, and it worked fine without any issue.
    Event Viewer:
    Event Type: Warning
    Event Source: ASP.NET 2.0.50727.0
    Event Category: Web Event
    Event ID: 1309
    Date: 6/6/2011
    Time: 5:41:42 PM
    User: N/A
    Computer: PRODP02
    Description:
    Event code: 3005
    Event message: An unhandled exception has occurred.
    Event time: 6/6/2011 5:41:42 PM
    Event time (UTC): 6/6/2011 9:41:42 PM
    Event ID: a71769f42b304355a58c482bfec267f2
    Event sequence: 3
    Event occurrence: 1
    Event detail code: 0
    Application information:
        Application domain: /LM/W3SVC/518296899/ROOT/PortArrivals-2-129518698821558995
        Trust level: Full
        Application Virtual Path: /TESTSVC
        Application Path: D:\inetpub\wwwroot\RFID\TESTSVC\
        Machine name: PRODP02
    Process information:
        Process ID: 8752
        Process name: w3wp.exe
        Account name: domain\BizTalk_Svc_Hostlso
    Exception information:
        Exception type: ThreadAbortException
        Exception message: Thread was being aborted.
    Request information:
        Request URL: http://localhost:81/TESTSVC/TESTSVCS.svc
        Request path: /TESTSVC/TESTSVCS.svc
        User host address: 127.0.0.1
        User:
        Is authenticated: False
        Authentication Type:
        Thread account name: domain\BizTalk_Svc_Hostlso

    Thread information:
        Thread ID: 22
        Thread account name: domain\BizTalk_Svc_Hostlso
        Is impersonating: False
        Stack trace:
           at System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously)
           at System.Web.HttpApplication.ApplicationStepManager.ResumeSteps(Exception error)
           at System.Web.HttpApplication.System.Web.IHttpAsyncHandler.BeginProcessRequest(HttpContext context, AsyncCallback cb, Object extraData)
           at System.Web.HttpRuntime.ProcessRequestInternal(HttpWorkerRequest wr)

    <Description>Handling an exception.</Description>
    <AppDomain>/LM/W3SVC/518296899/ROOT/TESTSVC-6-129518741899334691</AppDomain>
    <Exception>
      <ExceptionType>System.Threading.ThreadAbortException, mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089</ExceptionType>
      <Message>Thread was being aborted.</Message>
      <StackTrace>
        at System.Threading.Monitor.Enter(Object obj)
        at System.ServiceModel.ServiceHostingEnvironment.HostingManager.EnsureServiceAvailable(String normalizedVirtualPath)
        at System.ServiceModel.ServiceHostingEnvironment.EnsureServiceAvailableFast(String relativeVirtualPath)
        at System.ServiceModel.Activation.HostedHttpRequestAsyncResult.HandleRequest()
        at System.ServiceModel.Activation.HostedHttpRequestAsyncResult.BeginRequest()
        at System.ServiceModel.Activation.HostedHttpRequestAsyncResult.OnBeginRequest(Object state)
        at System.ServiceModel.Channels.IOThreadScheduler.CriticalHelper.WorkItem.Invoke2()
        at System.ServiceModel.Channels.IOThreadScheduler.CriticalHelper.WorkItem.Invoke()
        at System.ServiceModel.Channels.IOThreadScheduler.CriticalHelper.ProcessCallbacks()
        at System.ServiceModel.Channels.IOThreadScheduler.CriticalHelper.CompletionCallback(Object state)
        at System.ServiceModel.Channels.IOThreadScheduler.CriticalHelper.ScheduledOverlapped.IOCallback(UInt32 errorCode, UInt32 numBytes, NativeOverlapped* nativeOverlapped)
        at System.ServiceModel.Diagnostics.Utility.IOCompletionThunk.UnhandledExceptionFrame(UInt32 error, UInt32 bytesRead, NativeOverlapped* nativeOverlapped)
      </StackTrace>
      <ExceptionString>System.Threading.ThreadAbortException: Thread was being aborted.
        at System.Threading.Monitor.Enter(Object obj)
        at System.ServiceModel.ServiceHostingEnvironment.HostingManager.EnsureServiceAvailable(String normalizedVirtualPath)
        at System.ServiceModel.ServiceHostingEnvironment.EnsureServiceAvailableFast(String relativeVirtualPath)
        at System.ServiceModel.Activation.HostedHttpRequestAsyncResult.HandleRequest()
        at System.ServiceModel.Activation.HostedHttpRequestAsyncResult.BeginRequest()
        at System.ServiceModel.Activation.HostedHttpRequestAsyncResult.OnBeginRequest(Object state)
        at System.ServiceModel.Channels.IOThreadScheduler.CriticalHelper.WorkItem.Invoke2()
        at System.ServiceModel.Channels.IOThreadScheduler.CriticalHelper.WorkItem.Invoke()
        at System.ServiceModel.Channels.IOThreadScheduler.CriticalHelper.ProcessCallbacks()
        at System.ServiceModel.Channels.IOThreadScheduler.CriticalHelper.CompletionCallback(Object state)
        at System.ServiceModel.Channels.IOThreadScheduler.CriticalHelper.ScheduledOverlapped.IOCallback(UInt32 errorCode, UInt32 numBytes, NativeOverlapped* nativeOverlapped)
        at System.ServiceModel.Diagnostics.Utility.IOCompletionThunk.UnhandledExceptionFrame(UInt32 error, UInt32 bytesRead, NativeOverlapped* nativeOverlapped)</ExceptionString>

    Read the article

  • Inside Red Gate - Ricky Leeks

    - by Simon Cooper
    So, one of our profilers has a problem. Red Gate produces two .NET profilers - ANTS Performance Profiler (APP) and ANTS Memory Profiler (AMP). Both products help .NET developers solve problems they are virtually guaranteed to encounter at some point in their careers - slow code, and high memory usage, respectively.

    Everyone understands slow code - the symptoms are very obvious (an operation takes 2 hours when it should take 10 seconds), you know when you've solved it (the same operation now takes 15 seconds), and everyone understands how you can use a profiler like APP to help solve your particular problem. High memory usage is a much more subtle and misunderstood concept. How can .NET have memory leaks? The garbage collector, and how the CLR uses and frees memory, is one of the most misunderstood concepts in .NET. There are hundreds of blog posts out there covering various aspects of the GC and .NET memory - some of them helpful, some of them confusing, and some of them just plain wrong. There are a lot of misconceptions out there. And if you have got an application that uses far too much memory, it can be hard to wade through all the contradictory information available to even get an idea as to what's going on, let alone trying to solve it. That's where a memory profiler, like AMP, comes into play.

    Unfortunately, that's not the end of the issue. .NET memory management is a large, complicated, and misunderstood problem. Even armed with a profiler, you need to understand what .NET is doing with your objects, how it processes them, and how it frees them, to be able to use the profiler effectively to solve your particular problem. And that's what's wrong with AMP - even with all the thought, designs, UX sessions, and research we've put into AMP itself, some users simply don't have the knowledge required to understand what AMP is telling them about how their application uses memory, and so they have problems understanding and solving their memory problem.

    Ricky Leeks

    This is where Ricky Leeks comes in. Created by one of the many...colourful...people in Red Gate, he headlines and promotes several tutorials, pages, and articles, all with information on how .NET memory management actually works, with the goal of educating developers on .NET memory management. And educating us all on how far you can push various vegetable-based puns. This, in turn, not only helps them understand and solve any memory issues they may be having, but helps them proactively code against such memory issues in their existing code.

    Ricky's latest outing is an interview on .NET Rocks, providing information on the Top 5 .NET Memory Management Gotchas, along with information on a free ebook on .NET Memory Management. Don't worry, there's loads more vegetable-based jokes where those came from...
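    By way of illustration - this is my own example of the sort of gotcha a memory profiler surfaces, not one taken from the article or from Ricky's Top 5 - a long-lived static event is a classic way for .NET objects to stay reachable long after the code that created them has finished with them:

    using System;

    static class Ticker
    {
        // A long-lived static event: every subscriber's delegate (and therefore the
        // subscriber itself) stays reachable until it unsubscribes.
        public static event EventHandler Tick = delegate { };
        public static void Raise() { Tick(null, EventArgs.Empty); }
    }

    class View
    {
        // Stand-in for something sizeable - a bitmap, a cache, a data set.
        private readonly byte[] data = new byte[1024 * 1024];

        public View()
        {
            // Subscribing stores a delegate that references this View in the
            // static event's invocation list.
            Ticker.Tick += OnTick;
        }

        private void OnTick(object sender, EventArgs e)
        {
            // Pretend to refresh the UI using this.data.
            Console.WriteLine("tick: " + data.Length);
        }

        // Without a matching Ticker.Tick -= OnTick, this View can never be
        // collected, even when nothing else refers to it.
    }

    class Program
    {
        static void Main()
        {
            for (int i = 0; i < 100; i++)
            {
                new View();   // each View is "forgotten" here...
            }
            GC.Collect();
            // ...but all 100 are still reachable via Ticker.Tick, so roughly
            // 100 MB of "garbage" stays live.
            Console.WriteLine(GC.GetTotalMemory(true) / (1024 * 1024) + " MB still reachable");
        }
    }

    Nothing here is a bug in the GC - the objects really are reachable - which is exactly why a profiler, plus an understanding of what it is showing you, is needed to spot it.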

    Read the article

  • Package system broken - E: Sub-process /usr/bin/dpkg returned an error code (1)

    - by delha
    After installing some packages and libraries I have an error on Package Manager, I can't run any update because it says:

    "The package system is broken
    If you are using third party repositories then disable them, since they are a common source of problems.
    Now run the following command in a terminal: apt-get install -f"

    I've tried to do what it says and it returns me:

    jara@jara-Aspire-5738:~$ sudo apt-get install -f
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    Correcting dependencies... Done
    The following packages were automatically installed and are no longer required:
      libcaca-dev libopencv2.3-bin nite-dev python-bluez ps-engine libslang2-dev python-sphinx ros-electric-geometry-tutorials ros-electric-geometry-visualization python-matplotlib libzzip-dev ros-electric-orocos-kinematics-dynamics ros-electric-physics-ode libbluetooth-dev libaudiofile-dev libassimp2 libnetpbm10-dev ros-electric-laser-pipeline python-epydoc ros-electric-geometry-experimental libasound2-dev evtest python-matplotlib-data libyaml-dev ros-electric-bullet ros-electric-executive-smach ros-electric-documentation libgl2ps0 libncurses5-dev ros-electric-robot-model texlive-fonts-recommended python-lxml libwxgtk2.8-dev daemontools libxxf86vm-dev libqhull-dev libavahi-client-dev ros-electric-geometry libgl2ps-dev libcurl4-openssl-dev assimp-dev libusb-1.0-0-dev libopencv2.3 ros-electric-diagnostics-monitors libsdl1.2-dev libjs-underscore libsdl-image1.2 tipa libusb-dev libtinfo-dev python-tz python-sip libfltk1.1 libesd0 libfreeimage-dev ros-electric-visualization x11proto-xf86vidmode-dev python-docutils libvtk5.6 ros-electric-assimp x11proto-scrnsaver-dev libnetcdf-dev libidn11-dev libeigen3-dev joystick libhdf5-serial-1.8.4 ros-electric-joystick-drivers texlive-fonts-recommended-doc esound-common libesd0-dev tcl8.5-dev ros-electric-multimaster-experimental ros-electric-rx libaudio-dev ros-electric-ros-tutorials libwxbase2.8-dev ros-electric-visualization-common python-sip-dev ros-electric-visualization-tutorials libfltk1.1-dev libpulse-dev libnetpbm10 python-markupsafe openni-dev tk8.5-dev wx2.8-headers freeglut3-dev libavahi-common-dev python-roman python-jinja2 ros-electric-robot-model-visualization libxss-dev libqhull5 libaa1-dev ros-electric-eigen freeglut3 ros-electric-executive-smach-visualization ros-electric-common-tutorials ros-electric-robot-model-tutorials libnetcdf6 libjs-sphinxdoc python-pyparsing libaudiofile0
    Use 'apt-get autoremove' to remove them.
    The following extra packages will be installed:
      libcv-dev
    The following NEW packages will be installed
      libcv-dev
    0 upgraded, 1 newly installed, 0 to remove and 4 not upgraded.
    2 not fully installed or removed.
    Need to get 0 B/3,114 kB of archives.
    After this operation, 11.1 MB of additional disk space will be used.
    Do you want to continue [Y/n]? y
    (Reading database ... 261801 files and directories currently installed.)
    Unpacking libcv-dev (from .../libcv-dev_2.1.0-7build1_amd64.deb) ...
    dpkg: error processing /var/cache/apt/archives/libcv-dev_2.1.0-7build1_amd64.deb (--unpack):
     trying to overwrite '/usr/bin/opencv_haartraining', which is also in package libopencv2.3-bin 2.3.1+svn6514+branch23-12~oneiric
    dpkg-deb: error: subprocess paste was killed by signal (Broken pipe)
    Errors were encountered while processing:
     /var/cache/apt/archives/libcv-dev_2.1.0-7build1_amd64.deb
    E: Sub-process /usr/bin/dpkg returned an error code (1)

    I've tried everything people recommend on internet like:

    sudo apt-get clean
    sudo apt-get autoremove
    sudo apt-get update
    sudo apt-get upgrade
    sudo apt-get -f install

    Also I've tried to install the synaptic manager but it doesn't let me install anything. As you can see nothing works so I'm desperate! I'm using ubuntu 11.10, 64 bits. Thanks!!

    Read the article

  • Presentations & Training material OFM Summer Camps & Impressions & Feedback

    - by JuergenKress
    Thanks to all attendees who invested their time and made the most of the opportunity to attend the Summer Camps! Due to the high demand for most of the trainings, we had a long waiting list of partners keen to attend. We would like to give our special thanks to all the trainers who delivered excellent workshops!

    Most of the presentations and course material have been posted on our SOA Community Workspace and WebLogic Community Workspace. You can access the content only if you are a registered community member. To register for the SOA Community please click here; you can register for the WebLogic Community here.

    For first impressions of the event, please visit our Facebook pages: www.facebook.com/WebLogicCommunity & www.facebook.com/soacommunity, or the Picasa album. Thanks for the excellent blog posts from the AMIS Technology Blog & Middleware by Link Consulting. Let us know if you published a blog post or tweet on @soacommunity & @wlscommunity - we will be pleased to publish it in our newsletters.

    WebLogic Course Quotes

    "Oracle trainings are the best" - Pedro Neto Novobas
    "Excellent training, well organized" - Pedro Antunh, Capgemini
    "This course dives you into Oracle WebLogic, giving you a quick start on benefiting from Fusion Apps" - Leonardo Fernandes, Outsystems

    Additional Quotes

    "Thanks a lot again for organizing such a great and informative Summer Camp. Both training and networking were organized very professionally. I have gained tons of very useful info, which will definitely help to increase the quality of our future projects." - Daniel Fasko, fss-group.com
    "I didn't get the chance yesterday to thank you for a most enjoyable and thoroughly educational time I had in Munich over the last few days." - Jeroen Bakker, Ordina
    "Just to congratulate you on a great event, not only today but also in the previous days of training. As we know, a very good organization and, as a native Portuguese who knows Lisbon very well, a nice choice of places to visit. Looking forward to coming again next year." - Pedro Miguel Neto, Novobase

    WebLogic Partner Community

    For regular information become a member of the WebLogic Partner Community; please visit: http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account please contact the Oracle Partner Business Center.

    Blog Twitter LinkedIn Mix Forum Wiki

    Technorati Tags: OFM Summer Camps,education,training,WebLogic,WebLogic Community,Oracle,OPN,Jürgen Kress

    Read the article

  • Oracle VM Templates for EBS 12.1.3 for Exalogic Now Available

    - by Elke Phelps (Oracle Development)
    Oracle VM Templates for Oracle E-Business Suite 12.1.3 for the x86 Exalogic Platform (64 bit) are now available on the Oracle Software Delivery Cloud. The templates contain all the required elements to create an Oracle E-Business Suite R12 demonstration system on an Exalogic server. You can use these templates to quickly build an EBS 12.1.3 demonstration environment, bypassing the operating system and software install (via the EBS Rapid Install).

    The Oracle E-Business Suite Release 12.1.3 (64 bit) template for the Exalogic platform is an Oracle Virtual Server Guest template that contains a complete Oracle E-Business Suite Release 12.1.3 Database Tier and Application Tier installation. For additional details, please refer to the following My Oracle Support Note: Oracle E-Business Suite Release 12.1.3 Database Tier and Application Tier Template for Oracle Exalogic Platform (Note 1499132.1).

    The Oracle E-Business Suite system is installed on top of Oracle Linux Version 5 Update 6. The templates have been optimized for performance, including OS kernel settings and E-Business Suite configuration settings tuned specifically for the Exalogic platform. The configuration delivered with this template for a mid-tier running on Exalogic will support hundreds of concurrent users. Please refer to Section 2: Performance Analysis in My Oracle Support Note 1499132.1 for additional details.

    Additional Information

    The Oracle E-Business Suite VM templates for the Exalogic platform contain the following software versions:
    Operating System: Oracle Linux Version 5 Update 6
    Oracle E-Business Suite 12.1.3 (Database Tier)
    Oracle E-Business Suite 12.1.3 (Application Tier)

    The following considerations were made when the Oracle E-Business Suite VM templates for the Exalogic platform were designed:
    Templates use the hardware-virtualized architecture, supporting hardware with the virtualization feature.
    The Database Tier template is configured to use the following configuration:
      16 GB RAM
      4 VCPUs
      250 GB of disk space for the application installation
    The Application Tier template is configured to use the following configuration:
      16 GB RAM
      4 VCPUs
      50 GB of disk space for the application installation

    References

    Oracle E-Business Suite Release 12.1.3 Database Tier and Application Tier Template for Oracle Exalogic Platform (Note 1499132.1)

    Related Articles

    Part 1: E-Business Suite 12.1.1 Templates for Oracle VM Now Available
    Part 2: Using Oracle VM with Oracle E-Business Suite Virtualization Kit
    Part 3: On Clouds and Virtualization in EBS Environments (OpenWorld 2009 Recap)
    Part 4: Deploying E-Business Suite on Amazon Web Services Elastic Compute Cloud
    Part 5: Live Migration of EBS Services Using Oracle VM
    Support Policies for Virtualization Technologies and Oracle E-Business Suite
    Virtualization and the E-Business Suite, Redux
    Virtualization and E-Business Suite

    Read the article

  • Is there a better term than "smoothness" or "granularity" to describe this language feature?

    - by Chris Stevens
    One of the best things about programming is the abundance of different languages. There are general-purpose languages like C++ and Java, as well as little languages like XSLT and AWK. When comparing languages, people often use things like speed, power, expressiveness, and portability as the important distinguishing features. There is one characteristic of languages I consider to be important that, so far, I haven't heard [or been able to come up with] a good term for: how well a language scales from writing tiny programs to writing huge programs.

    Some languages make it easy and painless to write programs that only require a few lines of code, e.g. task automation. But those languages often don't have enough power to solve large problems, e.g. GUI programming. Conversely, languages that are powerful enough for big problems often require far too much overhead for small problems. This characteristic is important because problems that look small at first frequently grow in scope in unexpected ways. If a programmer chooses a language appropriate only for small tasks, scope changes can require rewriting code from scratch in a new language. And if the programmer chooses a language with lots of overhead and friction to solve a problem that stays small, it will be harder for other people to use and understand than necessary. Rewriting code that works fine is the single most wasteful thing a programmer can do with their time, but using a bazooka to kill a mosquito instead of a flyswatter isn't good either.

    Here are some of the ways this characteristic presents itself:

    Can be used interactively - there is some environment where programmers can enter commands one by one
    Requires no more than one file - neither project files nor makefiles are required for running in batch mode
    Can easily split code across multiple files - files can reference each other, or there is some support for modules
    Has good support for data structures - supports structures like arrays, lists, and especially classes
    Supports a wide variety of features - features like networking, serialization, XML, and database connectivity are supported by standard libraries

    Here's my take on how C#, Python, and shell scripting measure up. Python scores highest.

    Feature           C#         Python     shell scripting
    ---------------   --------   --------   ---------------
    Interactive       poor       strong     strong
    One file          poor       strong     strong
    Multiple files    strong     strong     moderate
    Data structures   strong     strong     poor
    Features          strong     strong     strong

    Is there a term that captures this idea? If not, what term should I use? Here are some candidates:

    Scalability - already used to describe language performance, so it's not a good idea to overload it in the context of language syntax
    Granularity - expresses the idea of being good just for big tasks versus being good for big and small tasks, but doesn't express anything about data structures
    Smoothness - expresses the idea of low friction, but doesn't express anything about strength of data structures or features

    Note: Some of these properties are more correctly described as belonging to a compiler or IDE than the language itself. Please consider these tools collectively as the language environment. My question is about how easy or difficult languages are to use, which depends on the environment as well as the language.

    Read the article

< Previous Page | 458 459 460 461 462 463 464 465 466 467 468 469  | Next Page >