Search Results

Search found 10297 results on 412 pages for 'real tuty'.


  • Oracle Database Upcoming Event dates to know

    - by mandy.ho
    February may be a short month, but it's not short of exciting Oracle events. From information-packed "Real Performance Days" to participation in one of the biggest IT security events - look out for Oracle Database and let us know if you are there with us!

    Feb 13-18, 2011 - Las Vegas, NV - TDWI World Conference Series. Join Oracle in highlighting Exadata X2-2 and X2-8, along with Oracle Business Intelligence, Enterprise Performance Management and Data Warehousing solutions. Oracle will be presenting a workshop - Oracle Data Integration: Best-of-Breed Solutions for the Enterprise - on Wednesday, February 16, 2011, 7 p.m. - 9 p.m., with Glen Goodrich, Director of Product Management, and Christophe Dupupet, Director of Product Management, Data Integration. http://events.tdwi.org/events/las-vegas-world-conference-2011/sessions/session-list.aspx

    Feb 14-17, 2011 - Barcelona, Spain - Mobile World Congress. MWC is an event where Oracle showcases the near-complete breadth and depth of value that our Communications Industry strategy and hardware and software solutions can deliver. Oracle supports Communications Service Providers today and delivers platforms and flexibility primed for the future. Oracle will have a two-story Pavilion, along with an Oracle Java and Embedded Solutions Center - App Planet. Exhibition times: Monday, February 14 through Wednesday, February 16, 09:00 - 19:00; Thursday, February 17, 09:00 - 16:00. Have questions? Meet with Oracle sales representatives at the Oracle Café, open every day from 9:00 to 17:00. http://eventreg.oracle.com/webapps/events/ns/EventsDetail.jsp?p_eventId=109912&src=6973382&src=6973382&Act=4

    Feb 14-18, 2011 - San Francisco, CA - RSA Conference. As the world's most complete, open, integrated business software and hardware systems provider, Oracle can uniquely safeguard your information throughout its entire lifecycle. Learn more by attending these sessions: Cloud Computing: A Brave New World for Security and Privacy (CLD-201), Wednesday, February 16 at 8:30 a.m.; Databases Under Attack - Securing Heterogeneous Database Infrastructures (DAS-301), Thursday, February 17 at 8:30 a.m.; Seven Steps to Protecting Databases (DAS-402), Friday, February 18 at 10:10 a.m. Attendees will also have the opportunity to meet with Oracle Security Solution experts, see live product demos and more by visiting booth #1559. Hours: Monday, February 14, 6:00 p.m. - 8:00 p.m.; Tuesday, February 15, 11:00 a.m. - 6:00 p.m. and 4:30 p.m. - 6:00 p.m.; Wednesday, February 16, 11:00 a.m. - 6:00 p.m.; Thursday, February 17, 11:00 a.m. - 3:00 p.m. http://eventreg.oracle.com/webapps/events/ns/EventsDetail.jsp?p_eventId=127657&src=6967733&src=6967733&Act=12

    Feb 21-25, 2011 - Various Locations - IOUG Presents: A Day of Real World Performance with Tom Kyte, Andrew Holdsworth and Graham Wood. These Oracle experts will debate, discuss and delineate the best practices for designing hardware architectures, deploying Oracle databases, and developing applications that deliver the fastest possible performance for your business. Topics are covered in a conversational format, with all three chiming in where appropriate. Each presenter has their own screen projector to demonstrate their individual points to the participants. Customers will have the opportunity to get their specific performance/tuning questions answered and learn how to balance all the different environmental requirements for their applications to improve performance.
    Register today for the following dates and locations:
    • February 21 in San Diego, CA
    • February 22 in Los Angeles, CA
    • February 23 in Seattle, WA
    • February 25 in Phoenix, AZ
    http://www.ioug.org/tabid/194/Default.aspx

    Feb 8-24, 2011 - Various Locations - Oracle Enterprise Cloud Summit. This series of full-day events with cloud experts, sharing real-world best practices, reference architectures and more, continues during the month of February. Attend the Oracle Enterprise Cloud Summit to learn how to:
    • Build a state-of-the-art cloud architecture
    • Leverage your existing IT investments
    • Optimize your IT management processes
    Whether you are considering a move to cloud computing or have already adopted a cloud model, this event offers you the insights you need to take full advantage of cloud computing. Check below to see if the event is coming to a city near you. http://www.oracle.com/us/corporate/events/cloud-events-214342.html

    Read the article

  • Get Your Enterprise Working With Oracle On Track Communication 1.0

    - by Josh Lannin
    The On Track development team is very pleased to announce that today On Track is available for our customers to download and evaluate. To learn more about what On Track does, start with our whitepaper and datasheet. If you are a developer, take a look at our documentation and samples posted to our OTN page. For this first blog post, I'll speak to several notable points about our product.

    Graceful Escalation via Conversations: On Track addresses the "Collaboration Problem" through a single guiding principle - graceful escalation - within the construct of a Conversation. In On Track, collaboration is based on a context (called a "Conversation") that gracefully escalates in form, structure, and content, as dictated by the particular needs of a given collaboration. Within that context, On Track provides a rich set of tools to choose from. These tools provide for communication, coordination, content management, organization, decision making, and analysis - all essential aspects of collaboration, but not all of them essential all of the time. Every collaborative interaction will evolve differently. Some will evolve to represent work spreading over the course of years and involving a large, distributed team, while others may involve few people and not evolve at all. Regardless, all collaborative contexts are built from the same parts, utilize the same concepts, and start the same way. The principle of graceful escalation is that you only use the tools and structure you need, so you only incur the complexity you need.

    Purposeful Collaboration: Through application integration, On Track Conversations bring enterprise application users the communication and collaboration capabilities required to complete business processes. By association with specific processes or business objects, Conversations extend the possible interactions and broaden participation to internal or external non-application users, and provide a sophisticated interaction experience, all the while enhancing the data set within the owning application. Purposeful collaboration not only needs to happen in the context of applications, it must support a full range of real-time and long-running interactions to provide the greatest value.

    Multi-Client, Multi-Modal: This On Track 1.0 product release includes the same-day availability of multiple clients, including iPhone and iPad applications (now available on the App Store), a fully capable and accessible Outlook add-in, along with our browser web client. With each client we have sought to leverage the strengths of each unique device: our iPhone client supports picture and voice posts, the iPad is optimized for meeting-room situations and document viewing, and our Outlook add-in allows you to take emails in context and bring them into On Track. In addition to supporting a diverse array of clients, On Track provides a unified multi-modal experience, starting with basic messages and moving through to integrated documents with live annotations, snapshots, application sharing, and voice.

    Next Generation Web Architecture: We believe On Track will help move the bar higher for what users can expect from all web applications, most notably ones that involve real-time activity. On Track is built from the ground up with an innovative, real-time architecture that leverages the extensive push capabilities of our server. Whether you are receiving a new message, viewing where crowds of people are collaborating, or doing live annotation on a document with a set of people, that information comes to you immediately, without refreshes or moving back and forth between pages. We've leveraged this core architecture across the product experience and raised the user-experience bar for this type of application. These capabilities are also based on open standards and protocols, and are fully extensible by anyone, enabling sophisticated integrations to be created with a wide variety of both legacy and next-generation applications.

    Agile Product Development: As a product team we operate using continuous feedback and modified agile development methodologies. We have thousands of active internal Oracle users who have helped pilot our product for critical business functions, and the On Track product development team uses our product as our primary vehicle for all our collaboration. Additionally, we have been working with early-access customers who are adopting our technology and providing us valuable feedback, which our process has rapidly realized as improvements to our software. On Track agility extends to our server as well, which is built to scale, and is very simple to install and configure.

    We are pleased to make this product announcement and encourage you to join us on Facebook or follow us on Twitter, as well as checking back here for the latest product information.

    Read the article

  • Wisdom Lies in Collaborative Power and Intelligence

    - by kellsey.ruppel
    By Alakh Verma, Director, Platform Technology Solutions

    In my recent blog posts, I shared insights on Predictive Analytics (Will Predictive Analytics at 'Speed of Thoughts' Help Businesses?), Real Time Decisions (How critical are Real Time decisions in business today?) and their significance in our lives in general and in businesses today. In the current business paradigm shift, with evolutionary social business, it is paramount that businesses look for wisdom in collaborative power and intelligence and equip their employees with the tools to engage with one another. There is an old saying that five sticks tied together are stronger and harder to break than a single stick. We have recently witnessed the power of ordinary people uniting and fighting collaboratively, using Facebook and Twitter, to topple dictators in Tunisia, Egypt and Libya - and they are threatening absolute rule in Syria. And in India, one man's (Anna Hazare's) campaign against corruption went viral, bringing thousands to the streets in support.

    As anyone who has worked in a sizeable organization knows, there is no guarantee that the organization as a whole will perform efficiently and achieve its goals, even if each employee is individually efficient and every team has a high level of productivity. To achieve enterprise productivity, it is necessary not only for individuals and groups to "do things right" by working productively, but also for the enterprise as a whole to "do the right things": form the right teams, make the right decisions, allocate resources correctly, and effectively coordinate activities across the entire organization. Most organizations fall short of the optimal level of enterprise productivity because of one or more of these reasons, all at a great cost to the business: They are disconnected from themselves, with various parts of the organization unintentionally working at cross-purposes with each other. Information that exists is not getting shared or reused. Human talent is not being applied where it is most needed. The same problems are being solved repeatedly by multiple groups.

    Intelligent collaboration through automated business processes has the ability to alter the course of any important business activity, with a potentially dramatic impact on the financial performance of the business. Whether it is a simple email exchange, a physical or virtual meeting, a task force, or a large-scale project, the activity is inherently collaborative. In fact, collaboration can be defined as the work that takes place among people when a business process is not pre-determining how the work should take place. Collaboration is many things: information sharing, brainstorming, problem solving, best practice negotiation, innovation, coordination of activity, alignment of purpose, and so forth. Collaboration is the "white space" between the business processes; it is the glue that holds an organization together, and the lubricant that allows the machinery to keep running. Real-time search and collaborative capabilities of the right people with the right content, supported by defined processes, will provide unparalleled wisdom in the organization in today's most competitive business environment. Interestingly, technologies such as Oracle WebCenter offer these capabilities in our web-based business transactions and complement the overall collaborative intelligence and power to truly transform organizations into social businesses.

    Looking to learn more about engaging your employees to collaborate together and providing a complete user experience for your customers? You won't want to miss our webcast today! Drive Online Engagement with Intuitive Portals and Websites

    Read the article

  • Administer, manage, monitor, and fine tune the performance of your Oracle SOA Suite 11g Service Infrastructure and SOA composite applications.

    - by JuergenKress
    Key features of the book: If you are an Oracle SOA Suite administrator, then this book is your bible. It gives you everything you need to know about all your tasks and helps you apply what you learn in your everyday life, right from the first chapter. The book walks through promoting code across environments, performance-tuning the service infrastructure, monitoring the environment, configuring security policies, managing the dehydration store, backing up and restoring environments, and so on. Packed with real-world examples from the authors' own experiences, this book offers a unique insight into Oracle SOA Suite administration.

    Detailed description: The book begins with an introduction to SOA and quickly moves on to management of SOA composite applications. Readers will learn how to manage composite applications, their deployments and lifecycles. Equipped with this knowledge, readers will be introduced to monitoring and performance-tuning SOA Suite, monitoring instances, messages, and composite applications, managing faults and exceptions, and configuring audit levels of composite applications to include end-to-end monitoring through the use of extended logging, as well as administering and configuring all SOA Suite components. A very important aspect of administration is tuning and optimizing the infrastructure for performance, and the book offers real-world recommendations to monitor and performance-tune service engines, the underlying WebLogic server, threads and timeouts, file systems, and composite applications. It also covers detailed administration of individual service components, configuring the infrastructure MBeans using both Oracle Enterprise Manager Fusion Middleware Control and WLST-based scripts, migrating worklist preferences and BAM data across environments, and setting up email, LDAP and custom XPath. An administrator is always trusted with troubleshooting and root-causing problems in the infrastructure, and this book will help you through the troubleshooting approaches: how to identify faults and exceptions through extended logging and thread dumps, and how to find solutions to common startup problems and deployment issues. The advanced contents of this book explain the OWSM security framework and how to secure components deployed to the infrastructure, along with the details of all groundwork needed to ready the environment. The last few chapters help you to understand and deal with managing the metadata services repository and dehydration store, and backup and recovery, concluding with advanced topics such as silent/scripted installations, cloning, upgrading, patching and high-availability installations.

    Packed with real-world examples and tips straight from the trenches, this book offers insights into SOA Suite administration that you will not find elsewhere. Part of our writing style in this book draws heavily on the philosophy of reuse, and as such the book provides ample executable SQL queries and WLST scripts that administrators can reuse and extend to perform most administration tasks, such as monitoring instances, processing times and instance states, and performing automatic deployments, tuning, migration, and installation. These scripts are spread over each of the chapters in the book and can also be downloaded from here; a small sketch of the flavor of such scripts appears below.

    The book is available in different formats at the following websites: Paperback and eBook versions & Kindle version. It is available for order, and signed copies are available through our web site.
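    As a taste of what WLST (Jython-based) scripting looks like, here is a minimal monitoring sketch of my own - not taken from the book - with a placeholder admin URL and credentials; it connects to a running admin server and lists the server runtime MBeans:

        # Hypothetical admin URL and credentials - replace with your own
        connect('weblogic', 'welcome1', 't3://localhost:7001')
        domainRuntime()          # switch to the domain runtime MBean tree
        cd('ServerRuntimes')     # runtime MBeans for all running servers
        ls()                     # list them, e.g. AdminServer, soa_server1
        disconnect()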
    SOA & BPM Partner Community: For regular information on Oracle SOA Suite, become a member of the SOA & BPM Partner Community. For registration, please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account, please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum Technorati Tags: SOA book, SOA Suite Administration, SOA Community, Oracle SOA, Oracle BPM, Community, OPN, Jürgen Kress

    Read the article

  • An Actionable Common Approach to Federal Enterprise Architecture

    - by TedMcLaughlan
    The recent “Common Approach to Federal Enterprise Architecture” (US Executive Office of the President, May 2, 2012) is extremely timely and well-organized guidance for the Federal IT investment and deployment community, as useful for Federal Departments and Agencies as it is for their stakeholders and integration partners. The guidance not only helps IT Program Planners and Managers, but also informs and prepares constituents who may be the beneficiaries of, or otherwise impacted by, the investment. The FEA Common Approach extends from and builds on the rapidly-maturing Federal Enterprise Architecture Framework (FEAF) and its associated artifacts and standards, already included to a large degree in the annual Federal Portfolio and Investment Management processes - for example the OMB's Exhibit 300 (i.e. the business case justification for IT investments).

    A very interesting element of this Approach is the very necessary guidance for actually using an Enterprise Architecture (EA) and/or its collateral - good guidance for any organization charged with maintaining a broad portfolio of IT investments. The associated FEA reference models (i.e. the BRM, DRM, TRM, etc.) are very helpful frameworks for organizing, understanding, communicating and standardizing across agencies with respect to vocabularies, architecture patterns and technology standards. Determining when, how and to what level of detail to include these reference models in the typically long-running Federal IT acquisition cycles wasn't always clear, however, particularly during the first interactions of a Program's technical and functional leadership with the Mission owners and investment planners. This typically occurs as an agency begins the process of describing its strategy and business case for allocation of new Federal funding, reacting to things like new legislation or policy, real or anticipated mission challenges, or straightforward ROI opportunities (for example the introduction of new technologies that deliver significant cost savings).

    The early artifacts (i.e. Resource Allocation Plans, Acquisition Plans, Exhibit 300s or other business case materials, etc.) of the intersection between Mission owners, IT and Program Managers are far easier to understand and discuss when the overlay of an evolved, actionable Enterprise Architecture (such as the FEA) is applied. “Actionable” is the key word - too many Public Service entity EAs (including the FEA) have for too long been used simply as a very highly-abstracted standards reference, duly maintained and nominally enforced by an Enterprise or System Architect's office. Refreshing elements of this recent FEA Common Approach include one of the first Federally-documented acknowledgements of the “Solution Architect” (the “problem-solving” role). This role collaborates with the Enterprise, System and Business Architecture communities primarily on completing actual “EA Roadmap” documents. These are roadmaps grounded in real cost, technical and functional details that are fully aligned with both contextual expectations (for example the new “Digital Government Strategy” and its required roadmap deliverables) and the rapidly increasing complexities of today's more portable and transparent IT solutions.
    We also expect some very critical synergies to develop in early IT investment cycles between this new breed of “Federal Enterprise Solution Architect” and the first waves of the newly-formal “Federal IT Program Manager” roles operating under more standardized “critical competency” expectations (including EA), which are likely already to be seriously influencing the quality of annual CPIC (Capital Planning and Investment Control) processes. Our Oracle Enterprise Strategy Team (EST) and associated Oracle Enterprise Architecture (OEA) practices are already engaged in promoting and leveraging the visibility of Enterprise Architecture as a key contributor to early IT investment validation, and we look forward in particular to seeing the real, citizen-centric benefits of this FEA Common Approach surface across the entire Public Service CPIC domain - Federal, State, Local, Tribal and otherwise. Read more Enterprise Architecture blog posts for additional EA insight!

    Read the article

  • Library order is important

    - by Darryl Gove
    I've written quite extensively about link ordering issues, but I've not discussed the interaction between archive libraries and shared libraries. So let's take a simple program that calls a maths library function:

        #include <math.h>
        int main() {
          for (int i=0; i<10000000; i++) { sin(i); }
        }

    We compile and run it to get the following performance:

        bash-3.2$ cc -g -O fp.c -lm
        bash-3.2$ timex ./a.out
        real 6.06 user 6.04 sys 0.01

    Now most people will have heard of the optimised maths library, which is added by the flag -xlibmopt. This contains optimised versions of key mathematical functions and, in this instance, using the library doubles performance:

        bash-3.2$ cc -g -O -xlibmopt fp.c -lm
        bash-3.2$ timex ./a.out
        real 2.70 user 2.69 sys 0.00

    The optimised maths library is provided as an archive library (libmopt.a), and the driver adds it to the link line just before the maths library - this causes the linker to pick the definitions provided by the static library in preference to those provided by libm. We can see the processing by asking the compiler to print out the link line:

        bash-3.2$ cc -### -g -O -xlibmopt fp.c -lm
        /usr/ccs/bin/ld ... fp.o -lmopt -lm -o a.out...

    The flag to the linker is -lmopt, and this is placed before the -lm flag. So what happens when the -lm flag is in the wrong place on the command line:

        bash-3.2$ cc -g -O -xlibmopt -lm fp.c
        bash-3.2$ timex ./a.out
        real 6.02 user 6.01 sys 0.01

    If the -lm flag is before the source file (or object file for that matter), we get the slower performance from the system maths library. Why's that? If we look at the link line we can see the following ordering:

        /usr/ccs/bin/ld ... -lmopt -lm fp.o -o a.out

    So the optimised maths library is still placed before the system maths library, but the object file is placed afterwards. This would be ok if the optimised maths library were a shared library, but it is not - instead it's an archive library, and archive library processing is different, as described in the linker and library guide: "The link-editor searches an archive only to resolve undefined or tentative external references that have previously been encountered." An archive library can only be used to resolve symbols that are outstanding at that point in the link processing. When fp.o is placed before the libmopt.a archive library, the linker has an unresolved symbol referenced in fp.o, and it will search the archive library to resolve that symbol. If the archive library is placed before fp.o, then there are no unresolved symbols at that point, and so the linker doesn't need to use the archive library. This is why libmopt needs to be placed after the object files on the link line. On the other hand, once the linker has observed any shared libraries, then at any point these are checked for any unresolved symbols. The consequence of this is that once the linker "sees" libm it will resolve any symbols it can to that library, and it will not check the archive library to resolve them. This is why libmopt needs to be placed before libm on the link line. This leads to the following order for placing files on the link line: object files, then archive libraries, then shared libraries. If you use this order, then things will consistently get resolved to the archive libraries rather than to the shared libraries.

    Read the article

  • The Future of Air Travel: Intelligence and Automation

    - by BobEvans
    Remember those white-knuckle flights through stormy weather where unexpected plunges in altitude result in near-permanent relocations of major internal organs? Perhaps there's a better way, according to a recent Wall Street Journal article: “Pilots of a Honeywell International Inc. test plane stayed on their initial flight path, relying on the company's latest onboard radar technology to steer through the worst of the weather. The specially outfitted Boeing 757 barely shuddered as it gingerly skirted some of the most ferocious storm cells over Fort Walton Beach and then climbed above the rest in zero visibility.” Or how about the multifaceted check-in process, which might not wreak havoc on liver location but nevertheless makes you wonder if you've been trapped in some sort of covert psychological-stress test? Another WSJ article, called “The Self-Service Airport,” says there's reason for hope there as well: “Airlines are laying the groundwork for the next big step in the airport experience: a trip from the curb to the plane without interacting with a single airline employee. At the airport of the near future, ‘your first interaction could be with a flight attendant,’ said Ben Minicucci, chief operating officer of Alaska Airlines, a unit of Alaska Air Group Inc.” And in the topsy-turvy world of air travel, it's not just the passengers who've been experiencing bumpy rides: the airlines themselves are grappling with a range of challenges - some beyond their control, some not - that make profitability increasingly elusive in spite of heavy demand for their services. A recent piece in The Economist illustrates one of the mega-challenges confronting the airline industry via a striking set of contrasting and very large numbers: while the airlines pay $7 billion per year to third-party computerized reservation services, the airlines themselves earn a collective profit of only $3 billion per year. In that context, the anecdotes above point unmistakably to the future that airlines must pursue if they hope to be able to manage some of the factors outside of their control (e.g., weather) as well as all of those within their control (operating expenses, end-to-end visibility, safety, load optimization, etc.): more intelligence, more automation, more interconnectedness, and more real-time awareness of every facet of their operations. Those moves will benefit both passengers and the air carriers, says the WSJ piece on The Self-Service Airport: “Airlines say the advanced technology will quicken the airport experience for seasoned travelers—shaving a minute or two from the checked-baggage process alone—while freeing airline employees to focus on fliers with questions. ‘It's more about throughput with the resources you have than getting rid of humans,’ said Andrew O'Connor, director of airport solutions at Geneva-based airline IT provider SITA.” Oracle is attempting to help airlines gain control over these challenges by blending together a range of its technologies into a solution called the Oracle Airline Data Model, which suggests the following steps:
    • To retain and grow their customer base, airlines need to focus on the customer experience.
    • To personalize and differentiate the customer experience, airlines need to effectively manage their passenger data.
    • The Oracle Airline Data Model can help airlines jump-start their customer-experience initiatives by consolidating passenger data into a customer data hub that drives real-time business intelligence and strategic customer insight.
    • Oracle's Airline Data Model brings together multiple types of data that can jump-start your data-warehousing project with rich out-of-the-box functionality.
    • Oracle's Intelligent Warehouse for Airlines brings together the powerful capabilities of Oracle Exadata and the Oracle Airline Data Model to give you real-time strategic insights into passenger demand, revenues, sales channels and your flight network.
    The airline industry aside, the bullet points above offer a broad strategic outline for just about any industry, because the customer experience is becoming pre-eminent in each, and there is simply no way to deliver world-class customer experiences unless a company can capture, manage, and analyze all of the relevant data in real time. I'll leave you with two thoughts from the WSJ article about the new in-flight radar system from Honeywell: first, studies show that a single episode of serious turbulence can rack up $150,000 in additional costs for an airline - so it certainly behooves the carriers to gain the intelligence to avoid turbulence as much as possible. And second, it's back to that top-priority customer-experience thing and the value that ever-increasing levels of intelligence can deliver. As the article says: “In the cabin, reporters watched screens showing the most intense parts of the nearly 10-mile wide storm, which churned some 7,000 feet below, in vibrant red and other colors. The screens also were filled with tiny symbols depicting likely locations of lightning and hail, which can damage planes and wreak havoc on the nerves of white-knuckle flyers.” (Bob Evans is senior vice-president, communications, for Oracle.)

    Read the article

  • Don't Throw Duplicate Exceptions

    In your code, you'll sometimes have to write code that validates input using a variety of checks. Assuming you haven't embraced AOP and done everything with attributes, it's likely that your defensive coding is going to look something like this:

        public void Foo(SomeClass someArgument)
        {
            if(someArgument == null) { throw new InvalidArgumentException("someArgument"); }
            if(!someArgument.IsValid()) { throw new InvalidArgumentException("someArgument"); }

            // Do Real Work
        }

    Do you see a problem here? Here's the deal: exceptions should be meaningful. They have value at a number of levels:
    • In the code, throwing an exception lets the developer know that there is an unsupported condition here
    • In calling code, different types of exceptions may be handled differently
    • At runtime, logging of exceptions provides a valuable diagnostic tool
    It's this last reason I want to focus on. If you find yourself literally throwing the exact same exception in more than one location within a given method, stop. The stack trace for such an exception is likely going to be identical regardless of which path of execution led to the exception being thrown. When that happens, you or whoever is debugging the problem will have to guess which exception was thrown. Guessing is a great way to introduce additional problems and/or greatly increase the amount of time required to properly diagnose and correct any bugs related to this behavior.

    Don't Guess - Be Specific. When throwing an exception from multiple code paths within the code, be specific. Virtually every exception allows a custom message - use it and ensure each case is unique. If the exception might be handled differently by the caller, then consider implementing a new custom exception type. Also, don't automatically think that you can improve the code by collapsing the if-then logic into a single call with short-circuiting (e.g. if(x == null || !x.IsValid())) - that guarantees that you can't put different information into each message the way you can by constructing the exception separately in each case. The code above might be refactored like so:

        public void Foo(SomeClass someArgument)
        {
            if(someArgument == null) { throw new ArgumentNullException("someArgument"); }
            if(!someArgument.IsValid()) { throw new InvalidArgumentException("someArgument"); }

            // Do Real Work
        }

    In this case it's taking advantage of the fact that there is already an ArgumentNullException in the framework, but if you didn't have an IsValid() method and were doing validation on your own, it might look like this:

        public void Foo(SomeClass someArgument)
        {
            if(someArgument.Quantity < 0) { throw new InvalidArgumentException("someArgument", "Quantity cannot be less than 0. Quantity: " + someArgument.Quantity); }
            if(someArgument.Quantity > 100) { throw new InvalidArgumentException("someArgument", "SomeArgument.Quantity cannot exceed 100. Quantity: " + someArgument.Quantity); }

            // Do Real Work
        }

    Note that in this last example, I'm throwing the same exception type in each case, but with different Message values. I'm also making sure to include the value that resulted in the exception, as this can be extremely useful for debugging. (How many times have you wished NullReferenceException would tell you the name of the variable it was trying to reference?) Don't add work to those who will follow after you to maintain your application - especially since it's likely to be you.
    Be specific with your exception messages, and follow DRY when throwing exceptions within a given method by throwing unique exceptions for each interesting case of invalid state.

    Read the article

  • Accessing Yahoo realtime stock quotes

    - by DVK
    There's a fairly easy way of retrieving 15-minute-delayed quotes off of the Yahoo! Finance web site (the "quotes.csv" API). However, so far I have been unable to find any info on how to access real-time quotes. The hang-ups with real-time quotes are:
    • Only available to a logged-in user
    • No API
    • Non-obvious how to scrape the info - I'm somewhat convinced they are placed on the page by some weird Ajax call
    So I was wondering if anyone has managed to develop a publicly available solution to retrieve real-time quotes for a stock from Yahoo! Finance.
    Notes:
    • Implementation language/framework need is flexible, but Perl or Excel is highly preferred.
    • Assume that security is not an issue - I'm willing to supply my Yahoo userid and password, even in cleartext.
    • I'm not 100% hung up on Yahoo - they are merely the only provider of free real-time stock quotes I'm familiar with. If the same thing can be done with Google Finance, I'd be just as happy.
    • This is for a personal project, so scalability/fault tolerance/etc... are not important.
    • I'm looking for a "do the whole retrieval" library ideally, but if I'm pointed to partial solutions (e.g. how to retrieve info from Yahoo's user-logged-in pages; how to scrape real-time quotes from Yahoo's page) I can fill in the blanks.
    • I saw Finance::YahooQuote, but it does not seem to allow you to supply log-in information and appears to use the lagging quotes.csv API.
    Thanks!
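    For context, here is a minimal sketch (in Python, since the language need is flexible) of the delayed quotes.csv retrieval mentioned above. The host and format string follow the commonly cited pattern and should be treated as assumptions; the logged-in, real-time case is exactly what this does not solve:

        import urllib

        # f=sl1d1t1 is assumed to request: symbol, last trade, date, time (delayed)
        symbols = "AAPL,GOOG"  # hypothetical example symbols
        url = "http://download.finance.yahoo.com/d/quotes.csv?s=%s&f=sl1d1t1" % symbols
        for line in urllib.urlopen(url):
            print line.strip()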

    Read the article

  • JVM CMS Garbage Collecting Issues

    - by jlintz
    I'm seeing the following symptoms on an application's GC log file with the Concurrent Mark-Sweep collector:

        4031.248: [CMS-concurrent-preclean-start]
        4031.250: [CMS-concurrent-preclean: 0.002/0.002 secs] [Times: user=0.00 sys=0.00, real=0.00 secs]
        4031.250: [CMS-concurrent-abortable-preclean-start]
        CMS: abort preclean due to time 4036.346: [CMS-concurrent-abortable-preclean: 0.159/5.096 secs] [Times: user=0.00 sys=0.01, real=5.09 secs]
        4036.346: [GC[YG occupancy: 55964 K (118016 K)]4036.347: [Rescan (parallel) , 0.0641200 secs]4036.411: [weak refs processing, 0.0001300 secs]4036.411: [class unloading, 0.0041590 secs]4036.415: [scrub symbol & string tables, 0.0053220 secs] [1 CMS-remark: 16015K(393216K)] 71979K(511232K), 0.0746640 secs] [Times: user=0.08 sys=0.00, real=0.08 secs]

    The preclean process keeps aborting continuously. I've tried adjusting CMSMaxAbortablePrecleanTime to 15 seconds, from the default of 5, but that has not helped. The current JVM options are as follows:

        -Djava.awt.headless=true -Xms512m -Xmx512m -Xmn128m -XX:MaxPermSize=128m -XX:+HeapDumpOnOutOfMemoryError -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:BiasedLockingStartupDelay=0 -XX:+DoEscapeAnalysis -XX:+UseBiasedLocking -XX:+EliminateLocks -XX:+CMSParallelRemarkEnabled -verbose:gc -XX:+PrintGCTimeStamps -XX:+PrintGCDetails -XX:+PrintHeapAtGC -Xloggc:gc.log -XX:+CMSClassUnloadingEnabled -XX:+CMSPermGenPrecleaningEnabled -XX:CMSInitiatingOccupancyFraction=50 -XX:ReservedCodeCacheSize=64m -Dnetworkaddress.cache.ttl=30 -Xss128k

    It appears the concurrent-abortable-preclean never gets a chance to run. I read through http://blogs.sun.com/jonthecollector/entry/did_you_know which had a suggestion of enabling CMSScavengeBeforeRemark, but the side effects of pausing did not seem ideal. Could anyone offer up any suggestions? Also, I was wondering if anyone had a good reference for grokking the CMS GC logs, in particular this line:

        [1 CMS-remark: 16015K(393216K)] 71979K(511232K), 0.0746640 secs]

    It's not clear what memory regions those numbers are referring to.
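    While hunting for such a reference, a small parsing sketch along these lines (my own, with an assumed gc.log file name, matching the -XX:+PrintGCDetails output shown above) can at least pull out the wall-clock pause times for trending:

        import re

        # Stop-the-world events end with "[Times: user=... sys=..., real=N.NN secs]"
        pattern = re.compile(r"real=([\d.]+) secs")
        pauses = []
        for line in open("gc.log"):
            m = pattern.search(line)
            if m:
                pauses.append(float(m.group(1)))
        if pauses:
            print "events: %d, longest real pause: %.2f secs" % (len(pauses), max(pauses))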

    Read the article

  • Attempting my first Fortran 95 program, to solve a quadratic equation. Getting weird errors.

    - by Damon
    So, I'm attempting my first program in Fortran, trying to solve a quadratic equation. I have double- and triple-checked my code and don't see anything wrong. I keep getting "Invalid character in name at (1)" and "Unclassifiable statement at (1)" at various locations. Any help would be greatly appreciated...

        ! This program solves quadratic equations
        ! of the form ax^2 + bx + c = 0.
        ! Record:
        ! Name: Date: Notes:
        ! Damon Robles 4/3/10 Original Code

        PROGRAM quad_solv
        IMPLICIT NONE

        ! Variables
        REAL :: a, b, c
        REAL :: discrim, root1, root2,
        COMPLEX :: comp1, comp2
        CHARACTER(len=1) :: correct

        ! Prompt user for coefficients.
        WRITE(*,*) "This program solves quadratic equations "
        WRITE(*,*) "of the form ax^2 + bx + c = 0. "
        WRITE(*,*) "Please enter the coefficients a, b, and "
        WRITE(*,*) "c, separated by commas:"
        READ(*,*) a, b, c
        WRITE(*,*) "Is this correct: a = ", a, " b = ", b
        WRITE(*,*) " c = ", c, " [Y/N]? "
        READ(*,*) correct
        IF correct = N STOP
        IF correct = Y THEN

        ! Definition
        discrim = b**2 - 4*a*c

        ! Calculations
        IF discrim > 0 THEN
        root1 = (-b + sqrt(discrim))/(2*a)
        root2 = (-b - sqrt(discrim))/(2*a)
        WRITE(*,*) "This equation has two real roots. "
        WRITE(*,*) "x1 = ", root1
        WRITE(*,*) "x2 = ", root2
        IF discrim = 0 THEN
        root1 = -b/(2*a)
        WRITE(*,*) "This equation has a double root. "
        WRITE(*,*) "x1 = ", root1
        IF discrim < 0 THEN
        comp1 = (-b + sqrt(discrim))/(2*a)
        comp2 = (-b - sqrt(discrim))/(2*a)
        WRITE(*,*) "x1 = ", comp1
        WRITE(*,*) "x2 = ", comp2
        PROGRAM END quad_solv

    Thanks in advance!

    Read the article

  • slicing a 2d numpy array

    - by MedicalMath
    The following code:

        import numpy as p

        myarr=[[0,1],[0,6],[0,1],[0,6],[0,1],[0,6],[0,1],[0,6],[0,1],[0,6],[0,1],[0,6],[0,1],[0,6],[0,1],[0,6],[0,1],[0,6]]
        copy=p.array(myarr)
        p.mean(copy)[:,1]

    is generating the following error message:

        Traceback (most recent call last):
          File "<pyshell#3>", line 1, in <module>
            p.mean(copy)[:,1]
        IndexError: 0-d arrays can only use a single () or a list of newaxes (and a single ...) as an index

    I looked up the syntax at this link and I seem to be using the correct syntax to slice. However, when I type copy[:,1] into the Python shell, it gives me the following output, which is clearly wrong, and is probably what is throwing the error:

        array([1, 6, 1, 6, 1, 6, 1, 6, 1, 6, 1, 6, 1, 6, 1, 6, 1, 6])

    Can anyone show me how to fix my code so that I can extract the second column and then take the mean of the second column as intended in the original code above?

    EDIT: Thank you for your solutions. However, my posting was an oversimplification of my real problem. I used your solutions in my real code and got a new error. Here is my real code with one of your solutions that I tried:

        filteredSignalArray=p.array(filteredSignalArray)
        logical=p.logical_and(EndTime-10.0<=matchingTimeArray,matchingTimeArray<=EndTime)
        finalStageTime=matchingTimeArray.compress(logical)
        finalStageFiltered=filteredSignalArray.compress(logical)
        for j in range(len(finalStageTime)):
            if j == 0:
                outputArray=[[finalStageTime[j],finalStageFiltered[j]]]
            else:
                outputArray+=[[finalStageTime[j],finalStageFiltered[j]]]
        print 'outputArray[:,1].mean() is: ',outputArray[:,1].mean()

    And here is the error message that is now being generated by the new code:

        File "mypath\myscript.py", line 1545, in WriteToOutput10SecondsBeforeTimeMarker
            print 'outputArray[:,1].mean() is: ',outputArray[:,1].mean()
        TypeError: list indices must be integers, not tuple

    Second EDIT: This is solved now that I added outputArray=p.array(outputArray) above my code. I have been at this too many hours and need to take a break for a while if I am making these kinds of mistakes.
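    For reference, a minimal corrected sketch of the original example - the key step, exactly as the edits above conclude, is converting the nested list to a 2-D ndarray before slicing:

        import numpy as p

        myarr = [[0, 1], [0, 6], [0, 1], [0, 6]]  # shortened version of the data above
        copy = p.array(myarr)                     # 2-D ndarray, shape (4, 2)
        print copy[:, 1]                          # second column: [1 6 1 6]
        print copy[:, 1].mean()                   # mean of the second column: 3.5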

    Read the article

  • iPad + OpenGL ES2. Why the Puzzling Virtual Memory Spike During Device Reorientation?

    - by dugla
    I've been spending the afternoon staring at the Xcode Instruments memory monitor, trying to decipher the following memory issue. I have a fullscreen OpenGL ES2 app running on iPad. I am fanatical about memory issues, so my retains/releases are all nicely balanced. I closely monitor memory leaks. My app is basically squeaky clean. Except occasionally when I reorient the device. Portrait to landscape, back and forth I rock the device, stress-testing my discarding and rebuilding of the OpenGL framebuffer. The ambient memory footprint of my app is about 70MB real memory and 180MB virtual memory. Real memory hardly varies at all during device rotations. However, the virtual memory reading sometimes briefly spikes up to 250MB and then recedes back to 180MB. No real pattern. But it is clearly related to discarding/rebuilding the framebuffer. I see random memory warnings in my NSLogs but the app just hums along, no worries.
    1) Since iPhone OS devices don't have VM, could someone explain to me what the VM reading actually means?
    2) My app is totally leak-free and generally bulletproof despite the VM spikes. Never crashes. Rock solid. Should I be concerned about this?
    3) There is clearly something happening in OpenGL framebuffer land that is causing this, but I am using the API in the proper way. Paraphrasing:

    Discarding the framebuffer:
        glDeleteRenderbuffers(1, &m_colorbuffer);
        glDeleteFramebuffers(1, &m_framebuffer);

    Rebuilding the framebuffer:
        glGenFramebuffers(1, &m_framebuffer);
        glGenRenderbuffers(1, &m_colorbuffer);

    Is there some other memory-flushing trick I have missed? Thanks for any insight. Cheers, Doug

    Read the article

  • Synchronization Exception

    - by Kurru
    Hi, I have two threads: one thread processes a queue and the other thread adds stuff into the queue. I want to put the queue-processing thread to sleep when it has finished processing the queue, and I want the second thread to tell it to wake up when it has added an item to the queue. However these calls throw System.Threading.SynchronizationLockException ("Object synchronization method was called from an unsynchronized block of code") on the Monitor.PulseAll(waiting) call, because I haven't synchronized on the waiting object [which I don't want to do - I want to be able to process while adding items to the queue]. How can I achieve this?

        Queue<object> items = new Queue<object>();
        object waiting = new object();

    1st Thread:
        public void ProcessQueue() { while (true) { if (items.Count == 0) Monitor.Wait(waiting); object real = null; lock(items) { object item = items.Dequeue(); real = item; } if(real == null) continue; .. bla bla bla } }

    2nd Thread involves:
        public void AddItem(object o) { ... bla bla bla lock(items) { items.Enqueue(o); } Monitor.PulseAll(waiting); }

    Read the article

  • Why is the Clojure Hello World program so slow compared to Java and Python?

    - by viksit
    Hi all, I'm reading "Programming Clojure" and I was comparing some languages I use for some simple code. I noticed that the Clojure implementations were the slowest in each case. For instance:

    Python - hello.py:
        def hello_world(name):
            print "Hello, %s" % name

        hello_world("world")
    and result:
        $ time python hello.py
        Hello, world
        real 0m0.027s
        user 0m0.013s
        sys 0m0.014s

    Java - hello.java:
        import java.io.*;
        public class hello {
            public static void hello_world(String name) {
                System.out.println("Hello, " + name);
            }
            public static void main(String[] args) {
                hello_world("world");
            }
        }
    and result:
        $ time java hello
        Hello, world
        real 0m0.324s
        user 0m0.296s
        sys 0m0.065s

    and finally, Clojure - hellofun.clj:
        (defn hello-world [username]
          (println (format "Hello, %s" username)))

        (hello-world "world")
    and results:
        $ time clj hellofun.clj
        Hello, world
        real 0m1.418s
        user 0m1.649s
        sys 0m0.154s

    That's a whole, gargantuan 1.4 seconds! Does anyone have pointers on what the cause of this could be? Is Clojure really that slow, or are there JVM tricks et al. that need to be used in order to speed up execution? More importantly - isn't this huge difference in performance going to be an issue at some point? (I mean, let's say I was using Clojure for a production system - the gain I get in using Lisp seems completely offset by the performance issues I can see here.) The machine used here is a 2007 MacBook Pro running Snow Leopard, a 2.16GHz Intel C2D and 2GB DDR2 SDRAM. BTW, the clj script I'm using is from here and looks like:

        #!/bin/bash
        JAVA=/System/Library/Frameworks/JavaVM.framework/Versions/1.6/Home/bin/java
        CLJ_DIR=/opt/jars
        CLOJURE=$CLJ_DIR/clojure.jar
        CONTRIB=$CLJ_DIR/clojure-contrib.jar
        JLINE=$CLJ_DIR/jline-0.9.94.jar
        CP=$PWD:$CLOJURE:$JLINE:$CONTRIB
        # Add extra jars as specified by `.clojure` file
        if [ -f .clojure ]
        then
            CP=$CP:`cat .clojure`
        fi
        if [ -z "$1" ]; then
            $JAVA -server -cp $CP \
                jline.ConsoleRunner clojure.lang.Repl
        else
            scriptname=$1
            $JAVA -server -cp $CP clojure.main $scriptname -- $*
        fi

    Read the article

  • Version control - stubs and mocks

    - by Tesserex
    For the sake of this question, I don't care about the difference between stubs, mocks, dummies, fakes, etc. Let's say I'm working on a project with one other person. I'm working on component A and he is working on component B. They work together, so I stub out B for testing, and he stubs out A. We're working in a DVCS, let's say Git, because that's actually the case here. When it comes time to merge our components together, we need to get the "real" files from my A and his B, but throw away all the fake stuff. During development, it's likely (unless I need to learn how to properly stub things) that the fakes have the same file names and class names as the real thing. So my question is: what is the proper procedure for doing version control on the fakes, and how are the components correctly merged, making sure to grab the real thing and not the fake? I would guess that one way is just do the merge, expect it to say CONFLICT, and then manually delete all the fake code out of the half-merged files. But this sounds tedious and inefficient. Should the fake things not go under VC at all? Should they be ripped out just before merging? Sorry if the answer to this should be obvious or trivial, I'm just looking for a "suggested practice" here.

    Read the article

  • How can I use PHP Mail() function within PHP-FPM? On Nginx?

    - by TheBlackBenzKid
    I have searched everywhere for this and I really want to resolve this. In the past I just end up using an SMTP service like SendGrid for PHP and a mailing plugin like SwiftMailer. However, this time I want to use plain PHP. Basically my setup (I am new to server setup, and this is my personal setup following a tutorial):
    • Nginx
    • Rackspace Cloud
    • PHP 5.3
    • PHP-FPM
    • Ubuntu 11.04
    My phpinfo() returns this for the mail entries:

        mail.log                    no value
        mail.add_x_header           On
        mail.force_extra_parameters no value
        sendmail_from               no value
        sendmail_path               /usr/sbin/sendmail -t -i
        SMTP                        localhost
        smtp_port                   25

    Can someone help me as to why mail() will not work? My script works on all other sites; it is a normal mail command. Do I need to set up logs or enable some PHP port on the server? My sample script:

        <?
        # FORM VARS
        // $to = $customers_email;
        $to = $_GET["customerEmailFromForm"];
        $subject = "Thank you for contacting Real-Domain.com";
        $message = "
        <html>
        <head>
        </head>
        <body>
        Thanks, your message was sent and our team will be in touch shortly.
        <img src='http://cdn.com/emails/thank_you.jpg' />
        </body>
        </html>
        ";
        $headers = "MIME-Version: 1.0" . "\r\n";
        $headers .= "Content-type:text/html;charset=iso-8859-1" . "\r\n";
        $headers .= 'From: <[email protected]>' . "\r\n";

        // SEND MAIL
        mail($to, $subject, $message, $headers);
        ?>

    Thanks

    Read the article

  • How to combine twill and python into one code that could be run on "Google App Engine"?

    - by brilliant
    Hello everybody!!! I have installed twill on my computer (having previously installed Python 2.5) and have been using it recently. Python is installed on disk C on my computer: C:\Python25. And the twill folder ("twill-0.9") is located here: E:\tmp\twill-0.9. Here is a code that I've been using in twill:

        go "some website's sign-in page URL"
        formvalue 2 userid "my login"
        formvalue 2 pass "my password"
        submit
        go "URL of some other page from that website"
        save_html result.txt

    This code helps me to log in to one website, in which I have an account, record the HTML code of some other page of that website (that I can access only after logging in), and store it in a file named "result.txt" (of course, before using this code I first need to replace "my login" with my real login, "my password" with my real password, "some website's sign-in page URL" and "URL of some other page from that website" with real URLs of that website, and number 2 with the number of the form on that website that is used as a sign-in form on that website's log-in page). This code I store in a "test.twill" file that is located in my "twill-0.9" folder: E:\tmp\twill-0.9\test.twill. I run this file from my command prompt:

        python twill-sh test.twill

    Now, I also have installed the "Google App Engine SDK" from "Google App Engine" and have also been using it for a while. For example, I've been using this code:

        import hashlib

        m = hashlib.md5()
        m.update("Nobody inspects")
        m.update(" the spammish repetition ")
        print m.hexdigest()

    This code helps me transform the phrase "Nobody inspects the spammish repetition" into its md5 digest. Now, how can I put these two pieces of code together into one Python script that I could run on "Google App Engine"? Let's say I want my code to log in to a website from "Google App Engine", go to another page on that website, record its HTML code (that's what my twill code does) and then transform this HTML code into its md5 digest (that's what my second code does). So, how can I combine those two codes into one Python code? I guess it should be done somehow by importing twill, but how can it be done? Can a Python code - the one that is being run by "Google App Engine" - import twill from somewhere on the internet? Or, perhaps, twill is already installed on "Google App Engine"?
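    For what it's worth, here is a rough sketch of the combined idea as it might look natively on App Engine - an assumption on my part, since twill itself is not installed there; App Engine's urlfetch service plays twill's fetching role, and the URLs, form fields and cookie handling below are hypothetical placeholders (real sites may need more careful session handling):

        import hashlib
        from google.appengine.api import urlfetch

        # Hypothetical sign-in: POST the login form, keep the session cookie
        login = urlfetch.fetch("http://example.com/signin",
                               payload="userid=mylogin&pass=mypassword",
                               method=urlfetch.POST)
        cookie = login.headers.get("set-cookie", "")

        # Fetch the members-only page with that cookie, then digest its HTML
        page = urlfetch.fetch("http://example.com/protected/page",
                              headers={"Cookie": cookie})
        m = hashlib.md5()
        m.update(page.content)
        print m.hexdigest()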

    Read the article

  • Microsoft Solver Foundation constraint

    - by emaster70
    Hello, I'm trying to use Microsoft Solver Foundation 2 to solve a fairly complicated situation; however, I'm stuck with an UnsupportedModelException even when I dumb down the model as much as possible. Does anyone have an idea of what I'm doing wrong? Following is the minimal example required to reproduce the problematic behavior:

        var ctx = SolverContext.GetContext();
        var model = ctx.CreateModel();
        var someConstant = 1337.0;
        var decisionA = new Decision(Domain.Real, "decisionA");
        var decisionB = new Decision(Domain.Real, "decisionB");
        var decisionC = new Decision(Domain.Real, "decisionC");
        model.AddConstraint("ca", decisionA <= someConstant);
        model.AddConstraint("cb", decisionB <= someConstant);
        model.AddConstraint("cc", decisionC <= someConstant);
        model.AddConstraint("mainConstraint", Model.Equal(Model.Sum(decisionA, decisionB, decisionC), someConstant));
        model.AddGoal("myComplicatedGoal", GoalKind.Minimize, decisionC);
        var solution = ctx.Solve();
        solution.GetReport().WriteTo(Console.Out);
        Console.ReadKey();

    Please consider that my actual model should include, once complete, a few constraints in the form of a*a+b*a <= someValue, so if what I'm willing to do ultimately isn't supported, please let me know in advance. If that's the case I'd also appreciate a suggestion of some other solver with a .NET-friendly interface that I could use (only well-known commercial packages, please). Thanks in advance

    Read the article

  • How to store a file on a server(web container) through a Java EE web application?

    - by Yatendra Goel
    I have developed a Java EE web application. This application allows a user to upload a file with the help of a browser. Once the user has uploaded his file, this application first stores the uploaded file on the server (on which it is running) and then processes it. At present, I am storing the file on the server as follows:

        try {
            FormFile formFile = programForm.getTheFile(); // formFile represents the uploaded file
            String path = getServlet().getServletContext().getRealPath("") + "/" + formFile.getFileName();
            System.out.println(path);
            file = new File(path);
            outputStream = new FileOutputStream(file);
            outputStream.write(formFile.getFileData());
        }

    where formFile represents the uploaded file. Now, the problem is that it is running fine on some servers, but on some servers getServlet().getServletContext().getRealPath("") returns null, so the final path that I am getting is null/filename and the file doesn't get stored on the server. When I checked the API for the ServletContext.getRealPath() method, I found the following:

        public java.lang.String getRealPath(java.lang.String path)
        Returns a String containing the real path for a given virtual path. For example, the path "/index.html" returns the absolute file path on the server's filesystem that would be served by a request for "http://host/contextPath/index.html", where contextPath is the context path of this ServletContext. The real path returned will be in a form appropriate to the computer and operating system on which the servlet container is running, including the proper path separators. This method returns null if the servlet container cannot translate the virtual path to a real path for any reason (such as when the content is being made available from a .war archive).

    So, is there any other way by which I can store files on those servers as well, where getServlet().getServletContext().getRealPath("") returns null?

    Read the article

  • order of onclick events

    - by NicoF
    I want to add a jQuery click event to links that already have an onclick event. In the example below, the hard-coded onclick event will get triggered first. How can I reverse that?

        <a class="whatever clickyGoal1" href="#" onclick="alert('trial game');">play trial</a>
        <br />
        <a class="whatever clickyGoal2" href="#" onclick="alert('real game');">real trial</a>
        <p class="goal1"> </p>
        <p class="goal2"> </p>
        <script type="text/javascript">
            $("a.clickyGoal1").click(function() {
                $("p.goal1").append("trial button click goal is fired"); // example
            });
            $("a.clickyGoal2").click(function() {
                $("p.goal2").append("real button click goal is fired"); // example
            });
        </script>
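    A hedged sketch of one common way to flip the order: lift the inline handler off the element, bind the jQuery handler, then re-bind the old inline handler so it fires last (shown for the first link only):

        $("a.clickyGoal1").each(function() {
            var inline = this.onclick;  // the hard-coded onclick="..." handler
            this.onclick = null;        // detach it so it no longer runs first
            $(this).click(function() {
                $("p.goal1").append("trial button click goal is fired");
            });
            if (inline) {
                $(this).click(inline);  // re-attached: it now runs second
            }
        });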

    Read the article

  • Libraries and pseudocode for physical Dashboard/Status board

    - by dani
    OK, so I bought a 46" screen for the office yesterday, and with the imminent risk of being accused of setting up an "elaborate World Cup procrastination scheme", I'd better show my colleagues what it's meant for ;) Looking at my simple sketch, and at these great projects from which I was inspired, I would like to get some input on the following:

    1. Pseudocode for the skeleton: As some methods should be called every 24 hours ("Today's date in the heading") and others at 60-second intervals ("Twitter results"), what would be a good approach using JavaScript (jQuery) and PHP? EDIT: Alsciende: I can agree that #1 and #8 are too vague. Therefore I remove #8 and try to clarify #1: by "Pseudocode for the skeleton" I basically mean: could this be done entirely using JavaScript timers, and how would you set up the various timers? (See the sketch after this list.)
    2. Library for Google Analytics: Which libraries support the Google Analytics API and can produce neat charts? Preferably HTML5 and JavaScript-based, like Protovis.
    3. Library for Twitter: Which libraries would you recommend for fetching Twitter search results and the latest tweets from profiles?
    4. Libraries for Typography/CSS/HTML5: I'm trying to learn some HTML5 etc. in the process, so please advise on any other typography/CSS libraries that could be of relevance.
    5. Scraping/Parsing? A concrete example: trying to fetch today's menu from this restaurant's website, how would you go about it? (It's in Swedish - but you get the point - sorry ;) )
    6. Real-time stats? I'm using the WassUp plugin for WordPress to track real-time visitors on our website. Other logging software (AWStats etc.) is probably also installed on the web server. Any ideas on how to extract information from these and present it in real time on the dashboard?
    7. Browser choice? Which browser and OS would you pick? Stable, full-screen, HTML5.
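    Regarding #1, a hedged sketch of a timer skeleton done entirely with JavaScript intervals, assuming jQuery is loaded; the element id and the PHP endpoint name are made up for illustration:

        // Each widget gets its own refresh function and its own interval.
        function refreshDate() {
            $("#heading-date").text(new Date().toDateString());
        }
        function refreshTwitter() {
            // update_twitter.php is a hypothetical server-side proxy that
            // returns search results/tweets as JSON.
            $.getJSON("update_twitter.php", function(tweets) {
                // render tweets into the dashboard here...
            });
        }
        refreshDate();
        refreshTwitter();
        setInterval(refreshDate, 24 * 60 * 60 * 1000); // once a day
        setInterval(refreshTwitter, 60 * 1000);        // every minute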

    Read the article

  • Wait on multiple condition variables on Linux without unnecessary sleeps?

    - by Joseph Garvin
    I'm writing a latency-sensitive app that in effect wants to wait on multiple condition variables at once. I've read of several ways to get this functionality on Linux (apparently it's built in on Windows), but none of them seem suitable for my app. The methods I know of are:

    1. Have one thread wait on each of the condition variables you want to wait on; when woken, it signals a single condition variable which you wait on instead.
    2. Cycle through multiple condition variables with a timed wait.
    3. Write dummy bytes to files or pipes instead, and poll on those.

    #1 and #2 are unsuitable because they cause unnecessary sleeping. With #1, you have to wait for the dummy thread to wake up, then signal the real thread, then for the real thread to wake up, instead of the real thread just waking up to begin with - the extra scheduler quantum spent on this actually matters for my app, and I'd prefer not to have to use a full-fledged RTOS. #2 is even worse: you potentially spend N * timeout time asleep, or your timeout will be 0, in which case you never sleep (endlessly burning CPU and starving other threads is also bad). For #3, pipes are problematic because if the thread being 'signaled' is busy or even crashes (I'm in fact dealing with separate processes rather than threads - the mutexes and conditions would be stored in shared memory), the writing thread will get stuck because the pipe's buffer is full, as will any other clients. Files are problematic because you'd be growing them endlessly the longer the app ran. Is there a better way to do this? I'm also curious about answers appropriate for Solaris.
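    For what it's worth, a hedged, Linux-only sketch of a fourth option: eventfd(2) behaves like a counter-backed condition variable that poll() can multiplex, and a non-blocking write never wedges the signaller the way a full pipe would (the kernel just accumulates the counter). For the poster's cross-process case the descriptors would have to be inherited across fork() or passed over a UNIX socket, since they cannot live in shared memory:

        #include <poll.h>
        #include <stdint.h>
        #include <stdio.h>
        #include <sys/eventfd.h>
        #include <unistd.h>

        int main(void) {
            int ev_a = eventfd(0, EFD_NONBLOCK);  /* "condition variable" A */
            int ev_b = eventfd(0, EFD_NONBLOCK);  /* "condition variable" B */

            /* Signaller side (normally another thread or process): adding
             * to the counter is the analogue of pthread_cond_signal. */
            uint64_t one = 1;
            if (write(ev_b, &one, sizeof one) < 0)
                perror("write");

            /* Waiter side: block on both conditions at once. */
            struct pollfd fds[2] = {
                { .fd = ev_a, .events = POLLIN },
                { .fd = ev_b, .events = POLLIN },
            };
            if (poll(fds, 2, -1) > 0 && (fds[1].revents & POLLIN)) {
                uint64_t cnt;
                read(ev_b, &cnt, sizeof cnt);  /* drains the counter to 0 */
                printf("condition B signalled %llu times\n",
                       (unsigned long long)cnt);
            }
            return 0;
        }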

    Read the article

  • Atomic swap in GNU C++

    - by Steve
    I want to verify that my understanding is correct. This kind of thing is tricky, so I'm almost sure I am missing something. I have a program consisting of a real-time thread and a non-real-time thread. I want the non-RT thread to be able to swap a pointer to memory that is used by the RT thread. From the docs, my understanding is that this can be accomplished in g++ with:

        // global
        Data *rt_data;

        Data *swap_data(Data *new_data) {
        #ifdef __GNUC__
            // Atomic pointer swap.
            Data *old_d = __sync_lock_test_and_set(&rt_data, new_data);
        #else
            // Non-atomic, cross your fingers.
            Data *old_d = rt_data;
            rt_data = new_data;
        #endif
            return old_d;
        }

    This is the only place in the program (other than initial setup) where rt_data is modified. When rt_data is used in the real-time context, it is copied to a local pointer. Later on, when it is certain that the old memory is no longer used, old_d is freed in the non-RT thread. Is this correct? Do I need volatile anywhere? Are there other synchronization primitives I should be calling? By the way, I am doing this in C++, although I'm interested in whether the answer differs for C. Thanks ahead of time.
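    For comparison, a sketch of the same swap written against C++11 <atomic> on a newer toolchain than the __sync_* era. GCC's documentation describes __sync_lock_test_and_set as an acquire barrier only, which is part of what makes the original hard to reason about; with std::atomic the ordering is explicit. The empty Data struct stands in for the poster's type:

        #include <atomic>

        struct Data { /* payload elided */ };

        std::atomic<Data*> rt_data{nullptr};

        Data *swap_data(Data *new_data) {
            // exchange() is the direct equivalent of the pointer swap;
            // acq_rel is the conservative ordering for publishing the new
            // block to the RT thread while observing its last use of the old.
            return rt_data.exchange(new_data, std::memory_order_acq_rel);
        }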

    Read the article

< Previous Page | 51 52 53 54 55 56 57 58 59 60 61 62  | Next Page >