Search Results

Search found 15373 results on 615 pages for 'delphi rulez 2010'.


  • MPI Cluster Debugger launch integration in VS2010

    Let's assume that you have all the HPC bits installed and that you have existing MPI code (or you created a "Hello World" project using the MPI project template; a minimal sketch of such a program appears at the end of this post). Of course, you create a single MPI application and at runtime it will correspond to multiple processes (of the same app) launched on multiple nodes (i.e. machines) on the cluster. So how do you debug such a situation by simply hitting the familiar "F5" keystroke (i.e. Debug - Start Debugging)?

    WATCH IT INSTEAD OF READING ABOUT IT
    If you can't bear to read through all the details below, just watch this 19-minute screencast explaining this VS2010 feature. Alternatively, or even additionally, keep on reading.

    REQUIREMENT
    When you debug an MPI application, you want the copying of resources from your client machine (where Visual Studio is installed) to each compute node (where Windows HPC Server is installed) to take place automatically for you. 'Resources' in the previous sentence includes your application binary, plus any binary or data dependencies it may have, plus PDBs if needed, plus the debug CRT of the correct bitness, plus msvsmon for remote debugging to work. You also want, after copying is complete, to have your app and msvsmon launched and attached so that you can hit breakpoints back in Visual Studio on your client machine. All of these things are delivered in VS2010.

    STEPS TO F5
    1. In your MPI project where you have placed a breakpoint, go to Project Properties - Configuration Properties - Debugging. Ensure the "Debugger to launch" combo box value is set to MPI Cluster Debugger.
    2. There are a whole bunch of properties here and typically you can ignore all of them except one: Run Environment. By default it is set to run 1 process on your local machine; if you change the number after that to, for example, 4, it will launch 4 processes of your app on your local machine. You want this to run on your cluster though, so go to the dropdown arrow at the end of the Run Environment cell and open it to expose the "Edit Hpc node" menu, which opens the Node Selector dialog. In this dialog you can enter (or pick from a list) the cluster head node name and then the number of processes you want to execute on the cluster, then hit OK and… you are done.
    3. Press F5 and watch your breakpoint get hit (after giving it some time for copying, remote execution, attachment and symbol resolution to take place).

    GOING DEEPER
    In the MPI Cluster Debugger project properties above, you can see many additional properties beyond the Run Environment. They are all optional, but you may want to understand them in order to fine-tune your cluster debugging. Read all about each one of them on the MSDN page Configuration Properties for the MPI Cluster Debugger. In the Node Selector dialog above you can see more options than just the Head Node name and Number of Processes to run. They should be self-explanatory, but I also cover them in depth in my screencast, showing an example of why you would choose to schedule processes per core versus per node. You can also read about these options on MSDN as part of the page How to: Configure and Launch the MPI Cluster Debugger. To read through an example that touches on MPI project creation, project properties, node selector, and also usage of MPI with OpenMP plus MPI with PPL, read the MSDN page Walkthrough: Launching the MPI Cluster Debugger in Visual Studio 2010.

    Happy MPI debugging! Comments about this post welcome at the original blog.
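    For reference, here is a minimal sketch of the kind of MPI "Hello World" you would be debugging. This is not the template's actual code, just an assumption on my part using the standard MPI C API (MPI_Init, MPI_Comm_rank, MPI_Comm_size, MPI_Finalize):

        #include <mpi.h>
        #include <stdio.h>

        int main(int argc, char* argv[])
        {
            int rank = 0, size = 0;
            MPI_Init(&argc, &argv);                 /* start the MPI runtime            */
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's id on the cluster */
            MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes        */
            /* A breakpoint on the next line is hit in every process once the
               cluster debugger has copied the binaries and attached msvsmon. */
            printf("Hello from rank %d of %d\n", rank, size);
            MPI_Finalize();
            return 0;
        }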

    Read the article

  • Top 5 Reasons to Invest in Enterprise 2.0 Technologies

    - by kellsey.ruppel(at)oracle.com
    In 2010, Oracle's portal, content management, and collaboration solutions evolved rapidly, supported by increasingly deep integrations across Oracle Fusion Middleware and the entire Oracle stack. In light of these developments, we asked Vince Casarez, vice president of Enterprise 2.0 product management, for his top five reasons to invest in Enterprise 2.0 (E2.0) technologies--including real-world examples of businesses already realizing the benefits of next-generation E2.0 technologies.

    1. Provide a modern user experience
    As E2.0 technologies gain widespread adoption, customers and employees expect intuitive Web experiences that are both interactive and community-based. By partnering with Oracle, Alcatel-Lucent Enterprise Group is already making that happen. With 76,000 employees and operations in more than 100 countries, the company wanted a streamlined, personalized user experience with more relevant content in fewer clicks. Working with Oracle, they created a global support portal that supports personalization and integration with Oracle Business Intelligence Enterprise Edition and Oracle E-Business Suite--and drives collaboration with tools such as wikis, blogs, and forums. Learn more about Alcatel-Lucent Enterprise Group's Global Support Portal in this Webcast.

    2. Improve productivity and collaboration
    As E2.0 technologies mature, Oracle anticipates companies moving beyond the idea of simply creating yet another Facebook-like destination for their employees, and instead shaping work environments around specific business tasks. After rapid growth--both organic and through acquisition--construction and infrastructure services leader Balfour Beatty found itself with multiple homegrown intranet sites with very minimal content-sharing capabilities. Today, thanks to Oracle WebCenter Suite, Oracle WebCenter Spaces, Oracle WebCenter Services, and Oracle Universal Content Management, Balfour Beatty is benefiting from collaborative workspaces, a central place to use and work with documents, and unified search across content.

    3. Leverage business processes and applications
    Modern portals are now able to integrate users, content, and business processes in unprecedented ways. To take advantage of these new possibilities, leading dairy provider Land O'Lakes has implemented a fully integrated ERP solution together with Oracle's ECM platform. As a result, Land O'Lakes has been able to achieve better information management and compliance, increased adoption rates for enterprise tools, and increased business process efficiency thanks to more effective information sharing and collaboration.

    4. Enhance customer and supplier relationships
    Companies have begun to move beyond the idea that E2.0 simply means enabling customer reviews or embedding chat functionality. They are taking E2.0 to the next level and providing interactive experiences for their customers. For example, to enhance customer and supplier relationships, Wind River, a global leader in device software optimization, successfully partnered with Oracle to:
    Integrate ERP and ECM content to provide customers the latest and most relevant support information for products they own
    Enable customers to personalize their support experience and receive updates regarding patches, application notes, and other relevant content
    Enable discussions, wikis, and blogs for more efficient collaboration

    5. Increase business visibility and responsiveness
    By strategically embedding collaboration and communication tools into specific business contexts, companies significantly increase visibility into changing business conditions--and can respond much more agilely. Texas A&M University System--one of the largest systems of higher education in the U.S.--partnered with Oracle to create a unified repository that would enable the retrieval of research and grant data from disparate systems via an Enterprise 2.0 user interface. By enabling researchers to customize their own portals with easy-to-use tools, they have also been able to significantly reduce their reliance on the IT department.

    Learn how other Oracle customers are leveraging Enterprise 2.0 technologies.

    Read the article

  • Snap App Windows to Pre-Defined Screen Sections with Acer GridVista

    - by Asian Angel
    The window snapping feature in Windows 7 and the ability to organize monitor(s) into specific gridded sections have both become popular lately. If you love the idea of having both combined in a single piece of software, then join us as we look at Acer GridVista.

    Note: Acer GridVista works with Windows XP, Vista, & 7. It will also work with dual monitors.

    Setup
    Acer GridVista comes in a zip file format, and at first you might assume that it is portable in nature, but it is not. Once you unzip the enclosed folder you will need to double click on "Setup.exe" to install the program.

    Acer GridVista in Action
    Once you have installed the program and started it up, all that you will notice at first is the new "System Tray Icon". Here you can see the "Context Menu"… The only menu command that you will likely use most of the time is the "Grid Configuration Command". Notice that for our single monitor setup it lists "Display 1". The "Single Setting" is enabled by default and you can easily choose the layout that best suits your needs. The enabled layout style will always be highlighted in yellow for easy reference. For our example we chose the "Triple (primary at right)" layout style. Each section will be specifically numbered as shown here. Do not worry…the grid and numbers only appear for a moment and then become invisible again until you move an app window into that section/area of your screen.

    On every regular app window that you open you will notice three new buttons in the upper right corner. Here is what each of these new buttons does:
    Acer GridVista Extensions (Transparent, Send To Window Grid, About Acer GridVista): viewable in a drop-down menu
    Lock To Grid (Enable/Disable): enabled by default. Note: set this to disable on a particular window to keep it free of the "grid locking function"
    Always On Top (Enable/Disable): disabled by default

    Here is a good look at the "Extensions Drop-Down Menu", where you can set an app window to be transparent or send it to a specific screen section on your monitor(s). If you open an app it will not automatically lock into a specific section. To lock the window into a specific section, drag-and-drop the app window into the desired section. Notice the red outline and highlighted number on "Section 2" below. The red outline and highlighted number serve as an indicator that if you release the app window at that moment, it will lock into the outlined/highlighted section. Now that Notepad is locked into "Section 2" you can see that it is maximized within that section. Continue to drag-and-drop your app windows into the appropriate sections as desired…apps can still be minimized to the "Taskbar" the same as before.

    Options
    These are the options available for Acer GridVista…

    Conclusion
    If you have been wanting the ability to "snap" windows and organize them into specific screen areas, then Acer GridVista is definitely a program that you should try out.
    Links
    Download Acer GridVista at Softpedia
    View detailed information at the Acer GridVista Homepage

    Read the article

  • ReSharper 8.0 EAP now available

    - by TATWORTH
    Originally posted on: http://geekswithblogs.net/TATWORTH/archive/2013/06/28/resharper-8.0-eap-now-available.aspx

    JetBrains have just released ReSharper 8.0 Beta on their Early Access Programme at http://www.jetbrains.com/resharper/whatsnew/?utm_source=resharper8b&utm_medium=newsletter&utm_campaign=resharper&utm_content=customers

    ReSharper 8.0 comes with the following new features:

    Support for Visual Studio 2013 Preview. Yes, ReSharper is known to work well with the fresh preview of Visual Studio 2013, and if you have already started digging into it, ReSharper 8.0 Beta is ready for the challenge.
    Faster code fixes. Thanks to the new Fix in Scope feature, you can choose to batch-fix some of the code issues that ReSharper detects in the scope of a project or the whole solution. Supported fixes include removing unused directives and redundant casts.
    Project dependency viewer. ReSharper is now able to visualize a project dependency graph for a bird's eye view of dependencies within your solution, all without compiling anything!
    Multifile templates. ReSharper's file templates can now be expanded to generate more than one file. For instance, this is handy for generating pairs of a main logic class and a class for extensions, or sets of partial files.
    Navigation improvements. These include a new action called Go to Everything to let you search for a file, type or method name from the same input box; support for line numbers in navigation actions; a new tool window called Assembly Explorer for browsing through assemblies; and two more contextual navigation actions: Navigate to Generic Substitutions and Navigate to Assembly Explorer.
    New solution-wide refactorings. The set of fresh refactorings is headlined by the highly requested Move Instance Method, to move methods between classes without making them static. In addition, there are Inline Parameter and Pull Parameter. Last but not least, we're also introducing 4 new XAML-specific refactorings!
    Extraordinary XAML support. A plethora of new and improved functionality for all developers working with XAML code includes dedicated grid inspections and quick-fixes; Extract Style, Extract, Move and Inline Resource refactorings; atomic renaming of dependency properties; and a lot more.
    More accessible code completion. ReSharper 8 makes more of its IntelliSense magic available in automatic completion lists, including extension methods and an option to import a type. We're also introducing double completion, which gives you additional completion items when you press the corresponding shortcut for the second time.
    A new level of extensibility. With the new NuGet-based Extension Manager, discovering, installing and uninstalling ReSharper extensions becomes extremely easy in Visual Studio 2010 and higher. When we say extensions, we mean not only full-fledged plug-ins but also sets of templates or SSR patterns that can now be shared much more easily.
    CSS support improvements. Smarter usage search for CSS attributes, new CSS-specific code inspections, configurable support for CSS3 and earlier versions, compatibility checks against popular browsers - that's a rough outline of what's new for CSS in ReSharper 8.
    A command-line version of ReSharper. ReSharper 8 goes beyond Visual Studio: we now provide a free standalone tool with hundreds of ReSharper inspections, and additionally a duplicate code finder, that you can integrate with your CI server or version control system.
    Multiple minor improvements in areas such as decompiling and code formatting, as well as support for the Blue Theme introduced in Visual Studio 2012 Update 2.

    Read the article

  • Process for Securing Web Sites and Applications

    - by Aamir Hasan
    The following quick-start guide provides a detailed overview of how to configure security for IIS 6.0.

    Reduce the Attack Surface of the Web Server
    1. Enable only essential Windows Server 2003 components and services.
    2. Enable only essential IIS 6.0 components and services.
    3. Enable only essential Web service extensions.
    4. Enable only essential Multipurpose Internet Mail Extensions (MIME) types.
    5. Configure Windows Server 2003 security settings.

    Prevent Unauthorized Access to Web Sites and Applications
    1. Store content on a dedicated disk volume.
    2. Set IIS Web site permissions.
    3. Set IP address and domain name restrictions.
    4. Set the NTFS file system permissions.

    Isolate Web Sites and Applications
    1. Evaluate the effects of impersonation on application compatibility:
       · Identify the impersonation behavior for ASP applications.
       · Select the impersonation behavior for ASP.NET applications.
    2. Configure Web sites and applications for isolation.

    Configure User Authentication
    1. Configure Web site authentication:
       · Select the Web site authentication method.
       · Configure the Web site authentication method.
    2. Configure File Transfer Protocol (FTP) site authentication.

    Encrypt Confidential Data Exchanged with Clients
    1. Use Secure Sockets Layer (SSL) to encrypt confidential data.
    2. Use Internet Protocol security (IPSec) or virtual private network (VPN) with remote administration.

    Maintain Web Site and Application Security
    1. Obtain and apply current security patches.
    2. Enable Windows Server 2003 security logs.
    3. Enable file access auditing for Web site content.
    4. Configure IIS logs.
    5. Review security policies, processes, and procedures.

    Note: To secure the Web sites and applications in a Web farm, use the process described in this chapter to configure security for each server in the Web farm.

    Link: http://www.studentacad.com/post/2010/04/28/Process-for-Securing-Web-Sites-and-Applications.aspx

    Read the article

  • Two Weeks To Go, Still Time to Register

    - by speakjava
    Yes, it's now only two weeks to the start of the 17th JavaOne conference! This will be my ninth JavaOne; I came fairly late to this event, attending for the first time in 2002. Since then I've missed two conferences: 2006 for the birth of my son (a reasonable excuse I think) and 2010 for reasons we'll not go into here. I have quite the collection of show devices: I've still got the WoWee robot, the HTC phone for JavaFX, the programmable pen and the Sharp Zaurus. The only one I didn't keep was the homePod music player (I wonder why?)

    JavaOne is a special conference for many reasons, some of which I list here:

    A great opportunity to catch up on the latest changes in the Java world. This is not just in terms of the platform, but as much about what people are doing with Java to build new and cool applications.
    A chance to meet people. We have these things called BoFs, which stands for "Birds of a Feather", as in "Birds of a feather, flock together". The idea being to have sessions where people who are interested in the same topic don't just get to listen to a presentation, but get to talk about it. These sessions are great, but I find that JavaOne is as much about the people I meet in the corridors and the discussions I have there as it is about the sessions I get to attend.
    Think outside the box. There are a lot of sessions at JavaOne covering the full gamut of Java technologies and applications. Clearly going to sessions that relate to your area of interest is great, but attending some of the more esoteric sessions can often spark thoughts and stimulate the imagination to go off and do new and exciting things once you get back.
    Get the lowdown from the Java community. Java is as much about community as anything else and there are plenty of events where you can get involved. The GlassFish party is always popular, and for Java Champions and JUG leaders there are a couple of special events too.
    Not just all hard work. Oracle knows how to throw a party, and the appreciation event will be a great opportunity to mingle with peers in a more relaxed environment. This year Pearl Jam and Kings of Leon will be playing live. Add free beer and what more could you want?

    So there you have it. Just a few reasons why you want to attend JavaOne this year. Oh, and of course I'll be presenting three sessions, which is even more reason to go. As usual I've gone for some mainstream ("Custom Charts" for JavaFX) and some more 'out there' ("Java and the Raspberry Pi" and "Gestural Interfaces for JavaFX"). Once again I'll be providing plenty of demos, so more than half my luggage this year will consist of a Kinect, robot arm, Raspberry Pis, gamepad and even an EEG sensor.

    If you're a student there's one even more attractive reason for going to JavaOne: it's free! Registration is here. Hope to see you there!

    Read the article

  • Finalists for Community Manager of the Year Announced

    - by Mike Stiles
    For as long as brand social has been around, there's still an amazing disparity from company to company on the role of Community Manager. At some brands, they are the lead social innovators. At others, the task has been relegated to interns who are at the company temporarily. Some have total autonomy and trust. Others must get chain-of-command permission each time they engage.

    So what does a premier "worth their weight in gold" Community Manager look like? More than anyone else in the building, they have the most intimate knowledge of who the customer is. They live on the front lines and are the first to detect problems and opportunities. They are sincere, raving fans of the brand themselves and are trusted advocates for the others. They're fun to be around. They aren't salespeople. Give me one Community Manager who's been at the job 6 months over 5 focus groups any day. Because, not unlike in speed dating, they must immediately learn how to make a positive, lasting impression on fans so they'll want to return and keep the relationship going. They're informers and entertainers, with a true belief in the value of the brand's proposition.

    Internally, they live at the mercy of the resources allocated toward social. Many, whose managers don't understand the time involved in properly curating a community, are tasked with 2 or 3 too many of them. 63% of CMs will spend over 30 hours a week on one community. They come to intuitively know the value of the relationships they're building, even if it can't always be shown in a bar graph to the C-suite. Many must communicate how the customer feels to executives that simply don't seem to want to hear it. Some can get the answers fans want quickly; others are frustrated in their ability to respond within an impressive timeframe.

    In short, in a corporate world coping with sweeping technological changes, amidst business school doublespeak, pie charts, decks, strat sessions and data points, the role of the Community Manager is the most…human. They are the true emotional connection to the real-life customer. Which is why we sought to find a way to recognize and honor who they are, what they do, and how well they have defined the position as social grows and integrates into the larger organization.

    Meet our 3 finalists for Community Manager of the Year.

    Jeff Esposito with Vistaprint
    Jeff manages and heads up content strategy for all social networks and blogs. He also crafts company-wide policies surrounding the social space. Vistaprint won the NEDMA Gold Award for Twitter Strategy in 2010 and 2011, and a Bronze in 2011 for Social Media Strategy. Prior to Vistaprint, Jeff was Media Relations Manager with the Long Island Ducks. He graduated from Seton Hall University with a BA in English and a minor in Classical Studies.

    Stacey Acevero with Vocus
    In addition to social management, Stacey blogs at Vocus on influential marketing and social media, and blogs at PRWeb on public relations and SEO. She's been named one of the #Nifty50 Women in Tech on Twitter 2 years in a row, as well as included in the 15 up-and-coming PR pros to watch in 2012.

    Carly Severn with the San Francisco Ballet
    Carly drives engagement, widens the fanbase and generates digital content for America's oldest professional ballet company. Managed properties include Facebook, Twitter, Tumblr, Pinterest, Instagram, YouTube and G+. Prior to joining the SF Ballet, Carly was Marketing & Press Coordinator at The Fitzwilliam Museum at Cambridge, where she graduated with a degree in English.

    We invite you to join us at the first annual Oracle Social Media Summit, November 14 and 15 at the Wynn in Las Vegas, where our finalists will be featured. Over 300 top brand marketers, agency executives, and social leaders & innovators will be exploring how social is transforming business. Space is limited and the information valuable, so get more info and get registered as soon as possible at the event site.

    Read the article

  • Applying for internship

    - by Margus
    At the moment I'm thinking about applying for an internship at Eesti Energia. I seem to be eligible, but before contacting them I need to learn how to compile an informative and complete CV and cover letter. I do not consider myself shallow-minded, but I'm also not sure how to convincingly justify my interest and how the internship will help me in my future career.

    Course of life
    Tallinna Tehnikagümnaasium 2003 - 2006
    Tallinna Tehnikaülikool 2006 - 2009
    Military service at Signal Battalion
    Tallinna Tehnikaülikool 2010 - ...

    I started my academic career as a Computer and Systems Engineer, but as I excelled in programming classes, I changed my major to Software Engineering and took my specialty in web applications and logic. Nowadays I mainly use Java, Mathematica and C# to solve problems. I have twice taken part in the ACM International Collegiate Programming Contest, where my team won the nationals and did pretty well in Europe. Also, as a notable item in my academic career, my team wrote the Kalah game AI that won the AI tournament in the university's main programming class.

    My hobbies are mind games and occasional problem solving. A few years ago I also competed in the International Checkers EM (requires being in the top 3 in nationals) as part of the cadet and junior age groups - I did not come close to winning, but I exceeded about half of the players each time. In high school and gymnasium I took part in, and later was the captain of, a team that passed regionals and made it to the top 3 of nationals (and later won) in (blitz) Russian checkers. That was impressive because it was a team effort and we only had (depending on the year) 2-3 strong players.

    Although I started programming exactly 9.5 years ago, I have no work experience. Well, actually that's not true: after I completed my army duty, I was hired for a year (days still counting) to be a part of a communications (emergency) infrastructure action group, where I'm the team's IT specialist (it's more complicated). So I consider myself to be aware of: rough conditions, teamwork, high stress tolerance, being on time and what responsibility means.

    As negative things I can mention:
    I do not have a driving licence.
    Although only Estonian and English are noted as requirements, Russian is most likely required as well and I barely understand some of it.

    The reasons why I want to apply there are:
    I need to do at least a 4-6 week traineeship and it's in the right field
    I meet the requirements and the tasks seem easy enough
    The company is well known and has a fairly good reputation
    Family and friends think that it would be an acceptable place to work
    A myriad of options for a final thesis would open up
    The workplace is located in the same city I currently live in

    At the moment, I would have a hard time explaining why I would prefer this company, or where I see myself in 10 years if I was offered a job there.

    Question
    I have some idea what a Curriculum Vitæ should look like, or I can google for a template, but I'm not sure how to write an informative one. The last one I did looked like: picture + contact information + education. I only vaguely remember that a cover letter should be custom tailored for each place you apply, containing ...

    Read the article

  • Searching for context in Silverlight applications

    - by PeterTweed
    A common behavior in business applications that have developed through the ages is for a user to be able to get information or execute commands in relation to some information/function displayed, by right clicking the object in question and popping up a context menu that offers relevant options to choose.

    The Silverlight Toolkit April 2010 release introduced the context menu object. This can be added to other UI objects and display options for the user to choose. The menu items can be enabled or disabled as per your application logic, and icons can be added to the menu items to add visual effect. This post will walk you through how to use the context menu object from the Silverlight Toolkit.

    Steps:

    1. Create a new Silverlight 4 application.

    2. Copy the following namespace definition to the user control object of the MainPage.xaml file:

        xmlns:my="clr-namespace:System.Windows.Controls;assembly=System.Windows.Controls.Input.Toolkit"

    3. Copy the following XAML into the LayoutRoot grid in MainPage.xaml:

        <Border CornerRadius="15" Background="Blue" Width="400" Height="100">
            <TextBlock Foreground="White" FontSize="20" Text="Context Menu In This Border...."
                       HorizontalAlignment="Center" VerticalAlignment="Center">
            </TextBlock>
            <my:ContextMenuService.ContextMenu>
                <my:ContextMenu>
                    <my:MenuItem Header="Copy" Click="CopyMenuItem_Click" Name="copyMenuItem">
                        <my:MenuItem.Icon>
                            <Image Source="copy-icon-small.png"/>
                        </my:MenuItem.Icon>
                    </my:MenuItem>
                    <my:Separator/>
                    <my:MenuItem Name="pasteMenuItem" Header="Paste" Click="PasteMenuItem_Click">
                        <my:MenuItem.Icon>
                            <Image Source="paste-icon-small.png"/>
                        </my:MenuItem.Icon>
                    </my:MenuItem>
                </my:ContextMenu>
            </my:ContextMenuService.ContextMenu>
        </Border>

    The above code associates a context menu, with two menu items and a separator between them, to the border object. The menu items have icons associated with them to add visual appeal, and click event handlers that will be added in the MainPage.xaml.cs code-behind in a later step.

    4. Add two icon-sized images to the ClientBin directory of the web project hosting the Silverlight application, named copy-icon-small.png and paste-icon-small.png respectively (matching the Source paths in the XAML above). I used copy and paste icons as the names suggest.

    5. Add the following code to the class in the MainPage.xaml.cs file:

        private void CopyMenuItem_Click(object sender, RoutedEventArgs e)
        {
            MessageBox.Show("Copy selected");
        }

        private void PasteMenuItem_Click(object sender, RoutedEventArgs e)
        {
            MessageBox.Show("Paste selected");
        }

    This code adds the event handlers for the menu items defined in step 3.

    6. Run the application, right click on the border and select a menu option, and see the appropriate message box displayed.

    Congratulations, it's that easy!

    Take the Slalom Challenge at www.slalomchallenge.com!

    Read the article

  • Review: Windows 8 - Initial Experience

    - by Tim Murphy
    I originally started this post when I had the Windows 8 preview set up on a VirtualBox image. I have since put the RTM bits on a Dell E6530 that is my new work laptop. It isn't a tablet, so I am not getting the touch experience, but as a developer this makes the most sense for the moment.

    This is the first Windows OS that I have had to spend much time exploring to even get started. The first thing I ran into was that when I clicked on the desktop icon I was lost. Where is the Start menu? Where are my programs? How do I get back to the Metro environment? I finally tried hitting the Windows button and it popped back out to the Metro screen.

    Once I got past that, I found that the look of the Metro interface is clean and well organized. It should be familiar to anyone who is already using a Zune or Windows Phone 7. In the Desktop, aside from the lack of the Start button to bring up programs, the desktop is just like the Windows 7 environment we are all used to. I do have to say though that I don't like popping out to the Metro screen to find a program. I think installers for programs like the ones that developers usually work in for a desktop mode will need to give an option for creating a desktop icon and pinning to the task bar of the desktop.

    One of the things I do really enjoy is having live tiles in the Metro environment. It is a nice way of feeding my need for constant information. The one drawback though is that the task bar at the bottom of my screen used to be where I got this information without leaving what I was working on. It allowed me to see current temperatures and when there were messages waiting. I have since found that these still work as expected in the Desktop, and Toast messages keep you up on what is going on in the Metro apps.

    Thankfully, familiar functionality like Alt-Tab and Windows-Tab still works regardless of whether apps are in the Metro or the Desktop environment. Add to this the ability to find any application on the Metro screen by simply typing, and things get very comfortable.

    I also started exploring some of the apps. If you want to see a ton of stats on your team at a glance, check out the Sports app. What games are coming up? Who are the leaders in a number of stats? The Weather and Finance apps have good features as well, and I am sure they will improve as users supply feedback.

    I have had to install Visual Studio 2010 side-by-side with VS2012 because the Windows Phone 7 SDK would only install on VS2010. This isn't a Windows 8 issue per se, but something that you need to be aware of if you are a developer moving to the new ecosystem.

    The overall experience is a joy despite a few hiccups. For anyone moving to Windows 8 on a non-touch laptop or desktop, I do suggest this list of keyboard shortcuts. Enjoy.

    del.icio.us Tags: Windows 8,Win8,Metro,Review

    Read the article

  • Thinking Local, Regional and Global

    - by Apeksha Singh-Oracle
    The FIFA World Cup tournament is the biggest single-sport competition: it's watched by about 1 billion people around the world. Every four years each national team's manager is challenged to pull together a group of players who ply their trade across the globe. For example, of the 23 members of Brazil's national team, only four actually play for Brazilian teams; the rest play in England, France, Germany, Spain, Italy and Ukraine. Each country's national league, each team and each coach has a unique style. Getting all these "localized" players to work together successfully as one unit is no easy feat. In addition to $35 million in prize money, much is at stake - not least national pride and global bragging rights until the next World Cup in four years time.

    Achieving economic integration in the ASEAN region by 2015 is a bit like trying to create the next World Cup champion by 2018. The team comprises Brunei Darussalam, Cambodia, Indonesia, Lao PDR, Malaysia, Myanmar, Philippines, Singapore, Thailand and Vietnam. All have different languages, currencies, cultures and customs, rules and regulations. But if they can pull together as one unit, the opportunity is not only great for business and the economy, but it's also a source of regional pride.

    BCG expects that by 2020 the number of firms headquartered in Asia with revenue exceeding $1 billion will double to more than 5,000. Their trade in the region and with the world is forecast to increase to 37% of an estimated $37 trillion of global commerce by 2020, from 30% in 2010. Banks offering transactional banking services to the emerging marketplace need to prepare to respond to customer needs across the spectrum - MSMEs, SMEs, corporates and multinational corporations.

    Customers want innovative, differentiated, value-added products and services that provide:
    • Pan-regional operational independence while enabling a single source of truth at a regional level
    • Regional connectivity and cash & liquidity optimization
    • A consistent experience for their customers, by offering standardized products & services across all ASEAN countries
    • Multi-channel & self-service capabilities / access to real-time information on liquidity and cash flows
    • Convergence of cash management with supply chain and trade finance

    While enabling the above to meet customer demands, a comprehensive and robust credit management solution for effective regional banking operations is a must to manage risk.

    According to BCG, Asia-Pacific wholesale transaction-banking revenues are expected to triple to $139 billion by 2022 from $46 billion in 2012. To take advantage of the trend, banks will have to manage and maximize their own growth opportunities, compete on a broader scale, manage the complexity within the region and increase efficiency. They'll also have to choose the right operating model and regional IT platform to offer:
    • Account Services
    • Cash & Liquidity Management
    • Trade Services & Supply Chain Financing
    • Payments
    • Securities services
    • Credit and Lending
    • Treasury services

    The core platform should be able to balance global needs and local nuances. Certain functions need to be performed at a regional level, while others need to be performed at a country level. Financial reporting and regulatory compliance are a case in point.

    The ASEAN Economic Community is in the final lap of its preparations for the ultimate challenge: becoming a formidable team in the global league. Meanwhile, transaction banks are designing their own hat trick: implementing a world-class IT platform, positioning themselves to respond to customer needs and establishing a foundation for revenue generation for years to come.

    Anand Ramachandran
    Senior Director, Global Banking Solutions Practice
    Oracle Financial Services Global Business Unit

    Read the article

  • Is Visual Source Safe (The latest Version) really that bad? Why? What's the Best Alternative? Why? [closed]

    - by hanzolo
    Over the years I've constantly heard horror stories, had people say "Real Programmers Don't Use VSS", and so on. BUT, then in the workplace I've worked at two companies: one, a very well known public-facing, high-traffic website, and another a high-end Financial Services "web-based" hosted solution catering to some very large, very well known companies, which is where I currently reside, and everything's working just fine (KNOCK KNOCK!!).

    I'm constantly interfacing with EXTREMELY old technology at some of these financial institutions.. OLD LIKE YOU WOULDN'T BELIEVE.. which leads me to the conclusion that if it works, "LEAVE IT", and that maybe there's some value in old technology? At least enough value to overrule a rewrite!? Right??

    Is there something fundamentally flawed with the underlying technology that VSS uses? I have a feeling that if I said "someone said VSS sucks" they would beg to differ, most likely give me a look like I don't know anything, and I'd never gain back their respect and my credibility (well, that'll be hard to blow.. lol). BUT, give me an argument that I can take to someone who's been coding for 30 years, who builds platforms that leverage current technology (.NET 3.5 / SQL 2008 R2), writes their own ORM with scaffolding, and is able to provide a quality platform that supports thousands of concurrent users on a multi-tenant hosted solution, yet does not agree with any of the benefits of having source control integrated, and still uses the infamous Visual Source Safe.

    I have extensive experience with TFS up to 2010, and honestly I think it's great when a team (beyond developers) can embrace it. I've worked side by side with someone who's a die-hard SVN'er, and from a purist standpoint I see the beauty in it (I need a bit more out of my source control, but it surely suffices).

    So, why are such smarties not running away from Visual Source Safe? Surely if it was so bad, it would've been realized by now, and I would not be sitting here with this simple old check-in, check-out, version-resistant, label-intensive system. But here I am... I would love to drop an argument that would be the end-all argument, but if it's a matter of opinion and personal experience, there seems to be too much leeway for keeping VSS.

    UPDATE: I guess the best case is to have the VSS supporters check other people's experiences and draw from that until we (please no) experience the breaking factor ourselves. Until then, I won't be engaging in a discussion to migrate off of VSS..

    UPDATE 11-2012: So I was able to convince everyone at my workplace that since MS is sunsetting Visual Source Safe, it might be time to migrate over to TFS. I was able to convince them and have recently upgraded our team to Visual Studio 2012 and TFS 2012. The migration was fairly painless: I had to run analyze.exe, which found a bunch of errors (not sure they'll ever affect the project), and then manually run VSSConverter.exe. Again, painless, except it took 16 hours to migrate 5 years worth of everything.. and now we're on TFS.. much more integrated.. much cooler.. so all in all, VSS served its purpose for years without hiccup. There were no horror stories and Visual Source Safe as source control worked just fine. So to all the naysayers (me included): there's nothing wrong with using VSS. I wouldn't start a new project with it, and I would definitely consider migrating to TFS (it's really not super difficult, and a new "wizard"-type converter is due out any day now, so migrating should be painless). But from my experience, it worked just fine and got the job done.

    Read the article

  • Pinterest and the Rising Power of Imagery

    - by Mike Stiles
    If images keep you glued to a screen, you're hardly alone. Countless social users are letting their eyes do the walking, waiting for that special photo to grab their attention. And perhaps more than any other social network, Pinterest has been giving those eyes plenty of room to walk.

    Pinterest came along in 2010. Its play was that users could simply create topic boards and pin pictures to the appropriate boards for sharing. Yes, there are some words, captions mostly, but not many. The speed of its growth raised eyebrows. Traffic quadrupled in the last quarter of 2011, with 7.51 million unique visitors in December alone. It now gets 1.9 billion monthly page views. And it was sticky. In the US, the average time a user spends strolling through boards and photos on Pinterest is 15 minutes, 50 seconds.

    Proving the concept of browsing a catalogue is not dead, it became a top 5 referrer for several apparel retailers like Land's End, Nordstrom, and Bergdorfs. Now a survey of online shoppers by BizRate Insights says that Pinterest is responsible for more purchases online than Facebook. Over 70% of its users are going there specifically to keep up with trends and get shopping ideas. And when they buy, the average order value is $179. Pinterest is also scoring better in terms of user engagement. 66% of pinners regularly follow and repin retailers, whereas 17% of Facebook fans turn to that platform for purchase ideas. (Facebook still wins when it comes to reach and driving traffic to 3rd-party sites, by the way.)

    Social posting best practices have consistently shown that posts with photos are rewarded with higher engagement levels. You may be downright Shakespearean in your writing, but what makes images in the digital world so much more powerful than prose?
    1. They transcend language barriers.
    2. They're fun and addictive to look at.
    3. They can be consumed in fractions of a second, important considering how fast users move through their social content (admit it, you do too).
    4. They're efficient gateways. A good picture might get them to the headline. A good headline might then get them to the written content.
    5. The audience for them surpasses demographic limitations.
    6. They can effectively communicate and trigger an emotion.
    7. With mobile use soaring, photos are created on those devices and easily consumed and shared on them. Pinterest's iPad app hit #1 in the Apple store in 1 day. Even as far back as 2009, over 2.5 billion devices with cameras were on the streets, generating in just 1 year 10% of the number of photos taken…ever.

    But let's say you're not a retailer. What if you're a B2B whose products or services aren't visual? Should you worry about your presence on Pinterest? As with all things, you need a keen awareness of who your audience is, where they reside online, and what they want to do there. If it doesn't make sense to put a tent stake in Pinterest, fine. But ignore the power of pictures at your own peril. If not visually, how are you going to attention-grab social users scrolling down their News Feeds at top speed? You're competing with every other cool image out there from countless content sources. Bore us and we'll fly right past you.

    Read the article

  • Why is the framerate (fps) capped at 60?

    - by dennmat
    ISSUE
    I recently moved a project from my laptop to my desktop (machine info below). On my laptop the exact same code displays the fps (and ms/f) correctly. On my desktop it does not. What I mean by this is that on the laptop it will display 300 fps (for example), whereas on my desktop it will show only up to 60. If I add 100 objects to the game on the laptop I'll see my frame rate drop accordingly; the same test on the desktop results in no change and the frames stay at 60. It takes a lot (~300) of entities before I'll see a frame drop on the desktop, and then it will descend. It seems as though its "theoretical" frame rate would be 400 or 500, but it will never actually get there and will only do 60 until there's too much to handle at 60. This 60-frame cap is coming from nowhere: I'm not doing any frame limiting myself. It seems like something external is limiting my loop iterations on the desktop, but for the last couple of days I've been scratching my head trying to figure out how to debug this.

    SETUPS
    Desktop: Visual Studio Express 2012, Windows 7 Ultimate 64-bit
    Laptop: Visual Studio Express 2010, Windows 7 Ultimate 64-bit
    The libraries (allegro, box2d) are the same versions on both setups.

    CODE
    Main loop:

        while(!abort) {
            frameTime = al_get_time();
            if (frameTime - lastTime >= 1.0) {
                lastFps = fps/(frameTime - lastTime);
                lastTime = frameTime;
                avgMspf = cumMspf/fps;
                cumMspf = 0.0;
                fps = 0;
            }
            /** DRAWING/UPDATE CODE **/
            fps++;
            cumMspf += al_get_time() - frameTime;
        }

    Note: There is no blocking code in the loop at any point.

    Where I'm at
    My understanding of al_get_time() is that it can return different resolutions depending on the system. However, the resolution is never worse than seconds, and the double is represented as [seconds].[finer-resolution]; seeing as I'm only checking for a whole second, al_get_time() shouldn't be responsible. My project settings and compiler options are the same. And I promise it's the same code on both machines. My googling really didn't help me much, and although technically it's not that big of a deal, I'd really like to figure this out or perhaps have it explained, whichever comes first. Even just an idea of how to go about figuring out possible causes would help, because I'm out of ideas. Any help at all is greatly appreciated.

    EDIT: Thanks all. For any others that find this, here is how to disable vSync (Windows only) in OpenGL.

    First get "wglext.h". It's all over the web. Then you can use a tool like GLee or just write your own quick extensions manager like:

        bool WGLExtensionSupported(const char *extension_name)
        {
            PFNWGLGETEXTENSIONSSTRINGEXTPROC _wglGetExtensionsStringEXT = NULL;
            _wglGetExtensionsStringEXT = (PFNWGLGETEXTENSIONSSTRINGEXTPROC) wglGetProcAddress("wglGetExtensionsStringEXT");
            if (strstr(_wglGetExtensionsStringEXT(), extension_name) == NULL)
            {
                return false;
            }
            return true;
        }

    and then create and set up your function pointers:

        PFNWGLSWAPINTERVALEXTPROC wglSwapIntervalEXT = NULL;
        PFNWGLGETSWAPINTERVALEXTPROC wglGetSwapIntervalEXT = NULL;
        if (WGLExtensionSupported("WGL_EXT_swap_control"))
        {
            // Extension is supported, init pointers.
            wglSwapIntervalEXT = (PFNWGLSWAPINTERVALEXTPROC) wglGetProcAddress("wglSwapIntervalEXT");
            // this is another function from the WGL_EXT_swap_control extension
            wglGetSwapIntervalEXT = (PFNWGLGETSWAPINTERVALEXTPROC) wglGetProcAddress("wglGetSwapIntervalEXT");
        }

    Then just call wglSwapIntervalEXT(0) to disable vSync and wglSwapIntervalEXT(1) to enable it. I found that the reason this is Windows-only is that OpenGL actually doesn't deal with anything other than rendering; it leaves the rest up to the OS and hardware. Thanks everyone, saved me a lot of time!
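    As a usage sketch (an assumption on my part, since the post doesn't show exactly where the call belongs): once the OpenGL rendering context has been created and the pointers above are initialized, vSync can be switched off once before entering the main loop:

        // Minimal sketch, assuming a current OpenGL context and that the
        // WGL_EXT_swap_control setup above succeeded.
        if (wglSwapIntervalEXT)
        {
            wglSwapIntervalEXT(0);   // 0 = vSync off; wglSwapIntervalEXT(1) re-enables it
        }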

    Read the article

  • Installer can't find partitions, but fdisk can find them

    - by pxd
    I'm installing Ubuntu 12.04. My system already has 2 systems installed -- WinXP and Ubuntu 10.10 -- and now I want to update Ubuntu to 12.04. I'm using a USB disk to install 12.04, but the installer cannot find the partitions on my hard disk, while fdisk can find them. Can you help me? What should I do?

    ubuntu@ubuntu:~$ sudo lshw -short
    H/W path             Device      Class       Description
                                     system      HP 2230s (NN868PA#AB2)
    /0                               bus         3037
    /0/9                             memory      64KiB BIOS
    /0/0                             processor   Intel(R) Core(TM)2 Duo CPU T6570 @ 2.10GHz
    /0/0/1                           memory      2MiB L2 cache
    /0/0/3                           memory      32KiB L1 cache
    /0/0/0.1                         processor   Logical CPU
    /0/0/0.2                         processor   Logical CPU
    /0/2                             memory      32KiB L1 cache
    /0/4                             memory      2GiB System Memory
    /0/4/0                           memory      SODIMM [empty]
    /0/4/1                           memory      2GiB SODIMM DDR2 Synchronous 800 MHz (1.2 ns)
    /0/100                           bridge      Mobile 4 Series Chipset Memory Controller Hub
    /0/100/2                         display     Mobile 4 Series Chipset Integrated Graphics Controller
    /0/100/2.1                       display     Mobile 4 Series Chipset Integrated Graphics Controller
    /0/100/1a                        bus         82801I (ICH9 Family) USB UHCI Controller #4
    /0/100/1a.1                      bus         82801I (ICH9 Family) USB UHCI Controller #5
    /0/100/1a.2                      bus         82801I (ICH9 Family) USB UHCI Controller #6
    /0/100/1a.7                      bus         82801I (ICH9 Family) USB2 EHCI Controller #2
    /0/100/1b                        multimedia  82801I (ICH9 Family) HD Audio Controller
    /0/100/1c                        bridge      82801I (ICH9 Family) PCI Express Port 1
    /0/100/1c.1                      bridge      82801I (ICH9 Family) PCI Express Port 2
    /0/100/1c.1/0        wlan1       network     PRO/Wireless 5100 AGN [Shiloh] Network Connection
    /0/100/1c.2                      bridge      82801I (ICH9 Family) PCI Express Port 3
    /0/100/1c.4                      bridge      82801I (ICH9 Family) PCI Express Port 5
    /0/100/1c.5                      bridge      82801I (ICH9 Family) PCI Express Port 6
    /0/100/1c.5/0        eth1        network     88E8072 PCI-E Gigabit Ethernet Controller
    /0/100/1d                        bus         82801I (ICH9 Family) USB UHCI Controller #1
    /0/100/1d.1                      bus         82801I (ICH9 Family) USB UHCI Controller #2
    /0/100/1d.2                      bus         82801I (ICH9 Family) USB UHCI Controller #3
    /0/100/1d.7                      bus         82801I (ICH9 Family) USB2 EHCI Controller #1
    /0/100/1e                        bridge      82801 Mobile PCI Bridge
    /0/100/1f                        bridge      ICH9M LPC Interface Controller
    /0/100/1f.2          scsi0       storage     82801IBM/IEM (ICH9M/ICH9M-E) 4 port SATA Controller [AHCI mode]
    /0/100/1f.2/0        /dev/sda    disk        500GB WDC WD5000BEVT-0
    /0/100/1f.2/0/1      /dev/sda1   volume      48GiB Windows NTFS volume
    /0/100/1f.2/0/2      /dev/sda2   volume      416GiB Extended partition
    /0/100/1f.2/0/2/5    /dev/sda5   volume      97GiB HPFS/NTFS partition
    /0/100/1f.2/0/2/6    /dev/sda6   volume      198GiB HPFS/NTFS partition
    /0/100/1f.2/0/2/7    /dev/sda7   volume      27GiB Linux filesystem partition
    /0/100/1f.2/0/2/8    /dev/sda8   volume      93GiB Linux filesystem partition
    /0/100/1f.2/1        /dev/cdrom  disk        CDDVDW TS-L633M
    /0/1                 scsi6       storage
    /0/1/0.0.0           /dev/sdb    disk        15GB STORAGE DEVICE
    /0/1/0.0.0/0         /dev/sdb    disk        15GB
    /0/1/0.0.0/0/1       /dev/sdb1   volume      14GiB Windows FAT volume
    /1                               power       HZ04037

    ubuntu@ubuntu:~$ sudo fdisk -l

    Disk /dev/sda: 500.1 GB, 500107862016 bytes
    255 heads, 63 sectors/track, 60801 cylinders, total 976773168 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x31263125

       Device Boot      Start         End      Blocks   Id  System
    /dev/sda1   *          63   102277727    51138832+   7  HPFS/NTFS/exFAT
    /dev/sda2       102277728   976784129   437253201    f  W95 Ext'd (LBA)
    /dev/sda5       102277791   307078127   102400168+   7  HPFS/NTFS/exFAT
    /dev/sda6       307078191   724141151   208531480+   7  HPFS/NTFS/exFAT
    /dev/sda7       724142080   781459455    28658688   83  Linux
    /dev/sda8       781461504   976771071    97654784   83  Linux

    Disk /dev/sdb: 15.9 GB, 15931539456 bytes
    64 heads, 32 sectors/track, 15193 cylinders, total 31116288 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x0009eb92

       Device Boot      Start         End      Blocks   Id  System
    /dev/sdb1   *          32    31115263    15557616    c  W95 FAT32 (LBA)

    The Ubuntu 12.04 installer can't find the partitions on my hard disk; it only finds the device /dev/sda. (Sorry, I'm a new user, so I can't post an image.)

    Read the article

  • Javascript Inheritance Part 2

    - by PhubarBaz
    A while back I wrote about Javascript inheritance, trying to figure out the best and easiest way to do it (http://geekswithblogs.net/PhubarBaz/archive/2010/07/08/javascript-inheritance.aspx). That was 2 years ago and I've learned a lot since then. But only recently have I decided to just leave classical inheritance behind and embrace prototypal inheritance.

    Most of us were trained in classical inheritance, using class hierarchies in a typed language. Unfortunately Javascript doesn't follow that model. It is both classless and typeless, which is hard to fathom for someone who's been using classes for the last 20 years. For the last two or three years since I got into Javascript I've been trying to find the best way to force it into the class model, without much success. It's clunky and verbose and hard to understand. I think my biggest problem was that it felt so wrong to add or change object members at run time. Every time I did it I felt like I needed a shower. That's the 20 years of classical inheritance in me.

    Finally I decided to embrace change and do something different. I decided to use the factory pattern to build objects instead of trying to use inheritance. Javascript was made for the factory pattern because of the way you can construct objects at runtime. In the factory pattern you have a factory function that you call and tell it to give you a certain type of object back. The factory function takes care of constructing the object to your specification.

    Here's an example. Say we want to have some shape objects, and they have common attributes like id and area that we want to depend on in other parts of our application. So the first thing to do is create a factory object and give it a factory method to create an abstract shape object. The factory method builds the object then returns it.

        var shapeFactory = {
            getShape: function(id) {
                var shape = {
                    id: id,
                    area: function() { throw "Not implemented"; }
                };
                return shape;
            }
        };

    Now we can add another factory method to get a rectangle. It calls the getShape() method first and then adds an implementation to it.

        getRectangle: function(id, width, height) {
            var rect = this.getShape(id);
            rect.width = width;
            rect.height = height;
            rect.area = function() {
                return this.width * this.height;
            };
            return rect;
        }

    That's pretty simple, right? No worrying about hooking up prototypes and calling base constructors or any of that crap I used to do. Now let's create a factory method to get a cuboid (rectangular cube). The cuboid object will extend the rectangle object. To get the area we will call into the base object's area method and then multiply that by the depth.

        getCuboid: function(id, width, height, depth) {
            var cuboid = this.getRectangle(id, width, height);
            cuboid.depth = depth;
            var baseArea = cuboid.area;
            cuboid.area = function() {
                var a = baseArea.call(this);
                return a * this.depth;
            };
            return cuboid;
        }

    See how we called the area method in the base object? First we save it off in a variable, then we implement our own area method and use call() to call the base function. For me this is a lot cleaner and easier than trying to emulate class hierarchies in Javascript.

    Read the article

  • Oracle/Sun - SPARC SuperCluster T4-4

    - by user12798668
    The SPARC SuperCluster engineered system was first announced in December 2010; following the SPARC T4 processor launch in September 2011, it is now shipping as the SPARC SuperCluster T4-4. The system combines SPARC T4-4 servers (each with four SPARC T4 CPUs), the Oracle Exadata Storage Server used in Exadata, and the Exalogic Software used in the Java application platform Exalogic, running Oracle Solaris 10 and 11.

    SPARC SuperCluster hardware configuration:
    2 (Half Rack) or 4 (Full Rack) x SPARC T4-4
    3 (Half Rack) or 6 (Full Rack) x Exadata Storage Server X2-2
    1 x ZFS Storage Appliance 7320
    3 x Sun DataCenter InfiniBand Switch 36
    1 x Ethernet Management Switch
    42U Rack (2 x PDU)

    SPARC SuperCluster software stack:
    OS: Oracle Solaris 11 and 10
    Virtualization: Oracle VM Server for SPARC and Oracle Solaris Zones
    Management: Oracle Enterprise Manager Ops Center and Grid Control
    Clustering: Oracle Solaris Cluster and Oracle Clusterware
    Database: Oracle Database 11g R2 (11.2.0.3)
    Middleware: Exalogic Software, including Oracle WebLogic Server and Coherence
    Applications: Oracle and ISV applications supported on Oracle Solaris 11 and 10

    SPARC SuperCluster sessions at Oracle OpenWorld Tokyo 2012 (April 5):
    G2-01: SPARC SuperCluster with Enterprise Manager Ops Center (11:50 - 13:20)
    S2-42: The UNIX server - SPARC SuperCluster (16:30 - 17:15)
    S2-53: Oracle E-Business Suite on SPARC SuperCluster (17:40 - 18:25)

    Oracle OpenWorld Tokyo 2012: http://www.oracle.com/openworld/jp-ja/index.html (code 7264)

    Read the article

  • Check Here for This Month's Fusion Middleware Information (Part 4)

    - by rika.tokumichi
    Hello, this is the OTN Japan desk. In this monthly column we pick up Fusion Middleware topics from OTN and introduce recommended information sources; this is the March 2010 installment. The first recommendation is the free Oracle Direct Seminar series, live online seminars covering Fusion Middleware and database topics, including sessions on Oracle WebLogic Server and on database upgrades; sessions run regularly and past materials are available, so they are an easy way to keep up. The second recommendation is the session reports from Developers Summit, where two Oracle speakers presented (one on middleware platforms, one from the Fusion Middleware business unit); reports for both sessions (part 1 and part 2) are linked from OTN, along with a case study covering Oracle VM and Oracle WebLogic Server. The third recommendation is the Oracle Customer Successes page, which collects more than 50 Japanese customer case studies, searchable by Applications, by Technology, by Industry and by Services. Finally, the INFORMATION INDEPTH NEWSLETTERS Fusion Middleware Edition is a monthly newsletter that summarizes the latest Fusion Middleware information from Oracle/OTN in one place, and developers and DBAs may also want to subscribe to Oracle's Dev2DBA Newsletter.

    Read the article

  • ASP.Net MVC 2 Auto Complete Textbox With Custom View Model Attribute & EditorTemplate

    - by SeanMcAlinden
    In this post I'm going to show how to create a generic, Ajax-driven auto complete text box using the new MVC 2 templates and the jQuery UI library. The template will be automatically displayed when a property is decorated with a custom attribute within the view model.

    The first thing to do is visit my previous blog post to put the custom model metadata provider in place; this is necessary when using custom attributes on the view model: http://weblogs.asp.net/seanmcalinden/archive/2010/06/11/custom-asp-net-mvc-2-modelmetadataprovider-for-using-custom-view-model-attributes.aspx

    Once this is in place, make sure you visit the jQuery UI site and download the latest stable release – in this example I'm using version 1.8.2. Add the jQuery scripts and CSS theme to your project and add references to them in your master page. It should look something like the following:

    Site.Master

        <head runat="server">
            <title><asp:ContentPlaceHolder ID="TitleContent" runat="server" /></title>
            <link href="../../Content/Site.css" rel="stylesheet" type="text/css" />
            <link href="../../css/ui-lightness/jquery-ui-1.8.2.custom.css" rel="stylesheet" type="text/css" />
            <script src="../../Scripts/jquery-1.4.2.min.js" type="text/javascript"></script>
            <script src="../../Scripts/jquery-ui-1.8.2.custom.min.js" type="text/javascript"></script>
        </head>

    Once this is in place we can get started.

    Creating the AutoComplete Custom Attribute

    The auto complete attribute will derive from the abstract MetadataAttribute created in my previous post. It will look like the following:

    AutoCompleteAttribute

        using System.Collections.Generic;
        using System.Web.Mvc;
        using System.Web.Routing;

        namespace Mvc2Templates.Attributes
        {
            public class AutoCompleteAttribute : MetadataAttribute
            {
                public RouteValueDictionary RouteValueDictionary;

                public AutoCompleteAttribute(string controller, string action, string parameterName)
                {
                    this.RouteValueDictionary = new RouteValueDictionary();
                    this.RouteValueDictionary.Add("Controller", controller);
                    this.RouteValueDictionary.Add("Action", action);
                    this.RouteValueDictionary.Add(parameterName, string.Empty);
                }

                public override void Process(ModelMetadata modelMetaData)
                {
                    modelMetaData.AdditionalValues.Add("AutoCompleteUrlData", this.RouteValueDictionary);
                    modelMetaData.TemplateHint = "AutoComplete";
                }
            }
        }

    As you can see, the constructor takes strings for the controller, action and parameter name; the parameter name will be used for passing the search text within the auto complete text box. The constructor then creates a new RouteValueDictionary, which we will use later to construct the url for getting the auto complete results via Ajax. The interesting member is the override called Process. Within it, the route value dictionary is added to the model metadata's AdditionalValues collection and the TemplateHint is set to "AutoComplete". This means that when the view model is parsed for display, the MVC 2 framework will look for a view user control template called AutoComplete and, if it finds one, use that template to display the property.

    The View Model

    To show you how the attribute will look, this is the view model I have used in my example, which can be downloaded at the end of this post.

    View Model

        using System.ComponentModel;
        using Mvc2Templates.Attributes;

        namespace Mvc2Templates.Models
        {
            public class TemplateDemoViewModel
            {
                [AutoComplete("Home", "AutoCompleteResult", "searchText")]
                [DisplayName("European Country Search")]
                public string SearchText { get; set; }
            }
        }

    As you can see, the auto complete attribute is called with the controller name, the action name and the name of the action parameter that the search text will be passed into.

    The AutoComplete Template

    Now all of this is in place, it's time to create the AutoComplete template. Create a ViewUserControl called AutoComplete.ascx at the following location within your application – Views/Shared/EditorTemplates/AutoComplete.ascx – and add the following code:

    AutoComplete.ascx

        <%@ Control Language="C#" Inherits="System.Web.Mvc.ViewUserControl" %>
        <%
            var propertyName = ViewData.ModelMetadata.PropertyName;
            var propertyValue = ViewData.ModelMetadata.Model;
            var id = Guid.NewGuid().ToString();

            RouteValueDictionary urlData =
                (RouteValueDictionary)ViewData.ModelMetadata.AdditionalValues.Where(x => x.Key == "AutoCompleteUrlData").Single().Value;

            var url = Mvc2Templates.Views.Shared.Helpers.RouteHelper.GetUrl(this.ViewContext.RequestContext, urlData);
        %>
        <input type="text" name="<%= propertyName %>" value="<%= propertyValue %>" id="<%= id %>" class="autoComplete" />
        <script type="text/javascript">
            $(function () {
                $("#<%= id %>").autocomplete({
                    source: function (request, response) {
                        $.ajax({
                            url: "<%= url %>" + request.term,
                            dataType: "json",
                            success: function (data) {
                                response(data);
                            }
                        });
                    },
                    minLength: 2
                });
            });
        </script>

    There is a lot going on in here, but when you break it down it's quite simple. Firstly, the property name and property value are retrieved through the model metadata; these are required to ensure that the text box input has the correct name and data to allow for model binding, and you can see them being used in the creation of the input element. The interesting bit is where the route value dictionary we added into the model metadata via the custom attribute is retrieved from AdditionalValues. That dictionary is then used to create the url; to do this I created a quick helper class, which looks like the code below. The last bit of script initialises the jQuery UI AutoComplete control with the correct url for calling back to our controller action.

    RouteHelper

        using System.Web.Mvc;
        using System.Web.Routing;

        namespace Mvc2Templates.Views.Shared.Helpers
        {
            public static class RouteHelper
            {
                const string Controller = "Controller";
                const string Action = "Action";
                const string ReplaceFormatString = "REPLACE{0}";

                public static string GetUrl(RequestContext requestContext, RouteValueDictionary routeValueDictionary)
                {
                    RouteValueDictionary urlData = new RouteValueDictionary();
                    UrlHelper urlHelper = new UrlHelper(requestContext);

                    int i = 0;
                    foreach (var item in routeValueDictionary)
                    {
                        if (item.Value == string.Empty)
                        {
                            i++;
                            urlData.Add(item.Key, string.Format(ReplaceFormatString, i.ToString()));
                        }
                        else
                        {
                            urlData.Add(item.Key, item.Value);
                        }
                    }

                    var url = urlHelper.RouteUrl(urlData);

                    for (int index = 1; index <= i; index++)
                    {
                        url = url.Replace(string.Format(ReplaceFormatString, index.ToString()), string.Empty);
                    }

                    return url;
                }
            }
        }

    See it in action

    All you need to do to see it in action is pass a view model from your controller with the new AutoComplete attribute attached and call the following within your view:

        <%= this.Html.EditorForModel() %>

    NOTE: The jQuery UI auto complete control expects a JSON string returned from your controller action method. As you can't use the JsonResult to perform GET requests, use a normal action result, convert your data into JSON and return it as a string via a ContentResult. If you download the solution it will be very clear how to handle the controller and action for this demo.

    The full source code for this post can be downloaded here. It has been developed using MVC 2 and Visual Studio 2010. As always, I hope this has been interesting/useful.

    Kind Regards, Sean McAlinden.
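    As a footnote, the source callback contract that the template wires up is easier to see in isolation with an in-memory list. The sketch below shows what jQuery UI passes in and what it expects back; the element id and the country list are made-up illustration values, not part of the original post.

        // Minimal sketch of the jQuery UI autocomplete "source" contract,
        // using a local array instead of an ajax call. "#searchBox" and the
        // data are hypothetical.
        var countries = ["Austria", "Belgium", "France", "Germany", "Greece"];

        $("#searchBox").autocomplete({
            minLength: 2,
            source: function (request, response) {
                // request.term holds the current contents of the text box
                var term = request.term.toLowerCase();
                var matches = $.grep(countries, function (c) {
                    return c.toLowerCase().indexOf(term) === 0;
                });
                // response() must be called with an array of strings (or of
                // { label, value } objects) to populate the dropdown
                response(matches);
            }
        });

    This is also exactly the shape of data the controller action needs to produce as JSON for the Ajax version used in the template above.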

    Read the article

  • jQuery and Windows Azure

    - by Stephen Walther
    The goal of this blog entry is to describe how you can host a simple Ajax application created with jQuery in the Windows Azure cloud. In this blog entry, I make no assumptions: I assume that you have never used Windows Azure, and I am going to walk through the steps required to host the application in the cloud in agonizing detail. Our application will consist of a single HTML page and a single service. The HTML page will contain jQuery code that invokes the service to retrieve and display a set of records. There are five steps that you must complete to host the jQuery application:

        1. Sign up for Windows Azure
        2. Create a Hosted Service
        3. Install the Windows Azure Tools for Visual Studio
        4. Create a Windows Azure Cloud Service
        5. Deploy the Cloud Service

    Sign Up for Windows Azure

    Go to http://www.microsoft.com/windowsazure/ and click the Sign up Now button. Select one of the offers. I selected the Introductory Special offer because it is free and I just wanted to experiment with Windows Azure for the purposes of this blog entry. To sign up, you will need a Windows Live ID and you will need to enter a credit card number. After you finish the sign up process, you will receive an email that explains how to activate your account.

    Accessing the Developer Portal

    After you create your account and your account is activated, you can access the Windows Azure developer portal by visiting the following URL: http://windows.azure.com/ When you first visit the developer portal, you will see the one project that you created when you set up your Windows Azure account (in a fit of creativity, I named my project StephenWalther).

    Creating a New Windows Azure Hosted Service

    Before you can host an application in the cloud, you must first add a hosted service to your project. Click your project on the summary page and click the New Service link. You are presented with the option of creating either a new Storage Account or a new Hosted Service. Because we have code that we want to run in the cloud – the WCF service – we want to select the Hosted Services option. After you select this option, you must provide a name and description for your service; this information is used on the developer portal so you can distinguish your services. When you create a new hosted service, you must enter a unique name for your service (I selected jQueryApp) and you must select a region for this service (I selected Anywhere US). Click the Create button to create the new hosted service.

    Install the Windows Azure Tools for Visual Studio

    We'll use Visual Studio to create our jQuery project. Before you can use Visual Studio with Windows Azure, you must first install the Windows Azure Tools for Visual Studio. Go to http://www.microsoft.com/windowsazure/ and click the Get Tools and SDK button. The Windows Azure Tools for Visual Studio works with both Visual Studio 2008 and Visual Studio 2010. Installation is painless: you just need to check some agreement checkboxes and click the Next button a few times and installation will begin.

    Creating a Windows Azure Application

    After you install the Windows Azure Tools for Visual Studio, you can create a Windows Azure Cloud Service by selecting the menu option File, New Project and selecting the Windows Azure Cloud Service project template. I named my new Cloud Service jQueryApp. Next, you need to select the type of Cloud Service project that you want to create from the New Cloud Service Project dialog. I selected the C# ASP.NET Web Role option. Alternatively, I could have picked the ASP.NET MVC 2 Web Role option if I wanted to use jQuery with ASP.NET MVC, or even the CGI Web Role option if I wanted to use jQuery with PHP. After you complete these steps, you end up with two projects in your Visual Studio solution. The project named WebRole1 represents your ASP.NET application, and we will use this project to create our jQuery application.

    Creating the jQuery Application in the Cloud

    We are now ready to create the jQuery application. We'll create a super simple application that displays a list of records retrieved from a WCF service (hosted in the cloud). Create a new page in the WebRole1 project named Default.htm and add the following code:

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml">
        <head>
            <title>Products</title>
            <style type="text/css">
                #productContainer div
                {
                    border: solid 1px black;
                    padding: 5px;
                    margin: 5px;
                }
            </style>
        </head>
        <body>

        <h1>Product Catalog</h1>
        <div id="productContainer"></div>

        <script id="productTemplate" type="text/html">
            <div>
                Name: {{= name }}
                <br />
                Price: {{= price }}
            </div>
        </script>

        <script src="Scripts/jquery-1.4.2.js" type="text/javascript"></script>
        <script src="Scripts/jquery.tmpl.js" type="text/javascript"></script>
        <script type="text/javascript">

            var products = [
                { name: "Milk", price: 4.55 },
                { name: "Yogurt", price: 2.99 },
                { name: "Steak", price: 23.44 }
            ];

            $("#productTemplate").render(products).appendTo("#productContainer");

        </script>

        </body>
        </html>

    The jQuery code in this page simply displays a list of products, using a jQuery template to format each product. You can learn more about using jQuery templates by reading the following blog entry by Scott Guthrie: http://weblogs.asp.net/scottgu/archive/2010/05/07/jquery-templates-and-data-linking-and-microsoft-contributing-to-jquery.aspx

    You can test whether the Default.htm page is working correctly by running your application (hit the F5 key). The first time that you run your application, a database is set up on your local machine to simulate cloud storage. If the Default.htm page works as expected, you should see the list of three products.

    Adding an Ajax-Enabled WCF Service

    In the previous section, we created a simple jQuery application that displays an array by using a template. The application is a little too simple because the data is static. In this section, we'll modify the page so that the data is retrieved from a WCF service instead of an array. First, we need to add a new Ajax-enabled WCF service to the WebRole1 project. Select the menu option Project, Add New Item and select the Ajax-enabled WCF Service project item. Name the new service ProductService.svc. Modify the service so that it returns a static collection of products. The final code for ProductService.svc should look like this:

        using System.Collections.Generic;
        using System.ServiceModel;
        using System.ServiceModel.Activation;

        namespace WebRole1
        {
            public class Product
            {
                public string name { get; set; }
                public decimal price { get; set; }
            }

            [ServiceContract(Namespace = "")]
            [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)]
            public class ProductService
            {
                [OperationContract]
                public IList<Product> SelectProducts()
                {
                    var products = new List<Product>();
                    products.Add(new Product { name = "Milk", price = 4.55m });
                    products.Add(new Product { name = "Yogurt", price = 2.99m });
                    products.Add(new Product { name = "Steak", price = 23.44m });
                    return products;
                }
            }
        }

    In real life, you would want to retrieve the list of products from storage instead of a static array. We are being lazy here. Next, you need to modify the Default.htm page to use ProductService.svc. The jQuery script in the following updated Default.htm page makes an Ajax call to the WCF service, and the data retrieved from ProductService.svc is displayed in the client template.

        <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
        <html xmlns="http://www.w3.org/1999/xhtml">
        <head>
            <title>Products</title>
            <style type="text/css">
                #productContainer div
                {
                    border: solid 1px black;
                    padding: 5px;
                    margin: 5px;
                }
            </style>
        </head>
        <body>

        <h1>Product Catalog</h1>
        <div id="productContainer"></div>

        <script id="productTemplate" type="text/html">
            <div>
                Name: {{= name }}
                <br />
                Price: {{= price }}
            </div>
        </script>

        <script src="Scripts/jquery-1.4.2.js" type="text/javascript"></script>
        <script src="Scripts/jquery.tmpl.js" type="text/javascript"></script>
        <script type="text/javascript">

            $.post("ProductService.svc/SelectProducts", function (results) {
                var products = results["d"];
                $("#productTemplate").render(products).appendTo("#productContainer");
            });

        </script>

        </body>
        </html>

    Deploying the jQuery Application to the Cloud

    Now that we have created our jQuery application, we are ready to deploy it to the cloud so that the whole world can use it. Right-click your jQueryApp project in the Solution Explorer window and select the Publish menu option. When you select Publish, your application and your application configuration information are packaged up into two files named jQueryApp.cspkg and ServiceConfiguration.cscfg, and Visual Studio opens the directory that contains them. In order to deploy these files to the Windows Azure cloud, you must upload them yourself. Return to the Windows Azure developer portal at http://windows.azure.com/, select your project and select the jQueryApp service. You will see a mysterious cube. Click the Deploy button to upload your application. Next, browse to the location on your hard drive where the jQueryApp project was published and select both the packaged application and the packaged application configuration file. Supply the deployment with a name and click the Deploy button. While your application is in the process of being deployed, you can view a progress bar.

    Running the jQuery Application in the Cloud

    Finally, you can run your jQuery application in the cloud by clicking the Run button. It might take several minutes for your application to initialize (go grab a coffee). After WebRole1 finishes initializing, you can navigate to the following URL to view your live jQuery application in the cloud: http://jqueryapp.cloudapp.net/default.htm The page is hosted on the Windows Azure cloud, and the WCF service executes every time that you request the page to retrieve the list of products.

    Summary

    Because we started from scratch, we needed to complete several steps to create and deploy our jQuery application to the Windows Azure cloud: create a Windows Azure account, create a hosted service, install the Windows Azure Tools for Visual Studio, create the jQuery application, and deploy it to the cloud. Now that we have finished this process once, modifying our existing cloud application or creating a new cloud application is easy. jQuery and Windows Azure work nicely together: we can take advantage of jQuery to build applications that run in the browser, and we can take advantage of Windows Azure to host the backend services required by our jQuery application. The big benefit of Windows Azure is that it enables us to scale. If, all of a sudden, our jQuery application explodes in popularity, Windows Azure enables us to easily scale up to meet the demand. We can handle anything that the Internet might throw at us.
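    One footnote on the Ajax call in the updated Default.htm: ASP.NET Ajax-enabled WCF services wrap the JSON payload in a "d" property, which is why the script reads results["d"], and $.post offers no error callback. Below is a sketch of a more defensive version of that call; the guard and the error message are my additions, not from the walkthrough above.

        // Defensive variant of the $.post call from Default.htm (sketch only).
        // $.ajax exposes an error callback, and the "d" unwrapping is guarded
        // in case the service ever returns a bare array instead of { d: [...] }.
        $.ajax({
            type: "POST",
            url: "ProductService.svc/SelectProducts",
            dataType: "json",
            success: function (results) {
                var products = (results && results.d) ? results.d : results;
                $("#productTemplate").render(products).appendTo("#productContainer");
            },
            error: function (xhr, status) {
                $("#productContainer").text("Could not load products: " + status);
            }
        });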

    Read the article

  • Task Scheduler permissions error for some jobs

    - by MaseBase
    I have recently moved to 64-bit Windows Server 2008 R2. I set up my scheduled tasks to run under one user (TaskUser), created specifically for the scheduler, and most run just fine. However, some of them do not run under TaskUser but will under my own credentials. Here is the Event Log entry I found, which from my research suggests a permissions problem, but the account does have permissions. The task also has the option "Run with highest privileges" checked. I have seen that particular checkbox work wonders on some tasks, but I have a number of them where it does not help. The error is ERROR_ELEVATION_REQUIRED, even though the user is a member of the Administrators group, has folder/file permissions, and is set to "Run with highest privileges".

        Log Name:      Microsoft-Windows-UAC/Operational
        Source:        Microsoft-Windows-UAC
        Date:          4/27/2010 2:21:44 PM
        Event ID:      1
        Task Category: (1)
        Level:         Error
        Keywords:
        User:          LIVE\TaskUser
        Computer:      www2
        Description:   The process failed to handle ERROR_ELEVATION_REQUIRED during the creation of a child process.

        Event Xml:
        <Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
          <System>
            <Provider Name="Microsoft-Windows-UAC" Guid="{E7558269-3FA5-46ED-9F4D-3C6E282DDE55}" />
            <EventID>1</EventID>
            <Version>0</Version>
            <Level>2</Level>
            <Task>1</Task>
            <Opcode>0</Opcode>
            <Keywords>0x8000000000000000</Keywords>
            <TimeCreated SystemTime="2010-04-27T21:21:44.407053800Z" />
            <EventRecordID>19</EventRecordID>
            <Correlation />
            <Execution ProcessID="2460" ThreadID="5960" />
            <Channel>Microsoft-Windows-UAC/Operational</Channel>
            <Computer>www2</Computer>
            <Security UserID="S-1-5-21-4017510424-2083581016-1307463562-1640" />
          </System>
          <EventData></EventData>
        </Event>

    The errors shown in the Task Scheduler History tab display these results:

        This operation requires an interactive window station. (0x800705B3)

        EventID 103: Task Scheduler failed to launch action "F:\App\Path\ConsoleApp.exe" in instance "{1a6d3450-b85a-40c0-b3db-72b98c1aa395}" of task "\taskFolder\taskName". Additional Data: Error Value: 2147943859.

        EventID 203: Task Scheduler failed to start instance "{1a6d3450-b85a-40c0-b3db-72b98c1aa395}" of "\taskFolder\taskName" task for user "LIVE\TaskUser". Additional Data: Error Value: 2147943859.

    Read the article

  • passenger-status - ERROR: Phusion Passenger doesn't seem to be running

    - by Casual Coder
    My server is:

        Server version: Apache/2.2.11 (Ubuntu)
        Server built:   Aug 16 2010 17:44:11

    My Ruby version is ruby 1.9.2p136 (2010-12-25 revision 30365) [x86_64-linux]. I've installed Passenger 3.0.7 via RubyGems, run passenger-install-apache2-module, and everything went fine. I've modified the configuration (load module, edit virtual host, etc.) and restarted Apache. The module is loading fine (Apache does not complain), but Passenger is obviously not working:

        sudo passenger-status
        ERROR: Phusion Passenger doesn't seem to be running.

    How can I get it working?

    Edit 1: /etc/apache2/mods-enabled/passenger.load contains:

        LoadModule passenger_module /usr/lib/ruby/gems/1.9.1/gems/passenger-3.0.7/ext/apache2/mod_passenger.so

    Root of Passenger:

        passenger-config --root
        /usr/lib/ruby/gems/1.9.1/gems/passenger-3.0.7

    Apache virtual host sub-URI configuration in /etc/apache2/sites-enabled/railsapps:

        <VirtualHost <IP ADDRESS>:80>
            ServerAdmin webmaster@localhost
            ServerName my.server.name

            PassengerRoot /usr/lib/ruby/gems/1.9.1/gems/passenger-3.0.7
            PassengerRuby /usr/bin/ruby
            RailsEnv development

            DocumentRoot /www/vhosts/railsapps
            <Directory /www/vhosts/railsapps>
                Options FollowSymlinks -MultiViews
                AllowOverride None
                Order allow,deny
                Allow from all
            </Directory>

            RailsBaseURI /siteA
            <Directory /www/vhosts/railsapps/siteA>
                Options -MultiViews
                AllowOverride All
                Order allow,deny
                Allow from all
            </Directory>

            RailsBaseURI /siteB
            <Directory /www/vhosts/railsapps/siteB>
                AllowOverride All
                Options -MultiViews
                Order allow,deny
                Allow from all
            </Directory>

            LogLevel info
            ErrorLog /var/log/apache2/railsapps_error.log
            CustomLog /var/log/apache2/railsapps_access.log combined
        </VirtualHost>

    Of course, as described in 'users guide apache.html', siteA and siteB are symlinks to the absolute paths of siteA/public and siteB/public respectively.

    Edit 2: In the logs there is nothing related to Passenger. Permissions are also fine (read and execute) on the directories in the paths. Even if it were some misconfiguration or permission problem, isn't Passenger supposed to be running? I mean, sudo passenger-status should at least output --- general information ---. When I place a test HTML file in the railsapps directory, it is served fine.

    /var/log/apache2/railsapps_error.log:

        [Sun Jun 19 12:19:08 2011] [error] [client <IP>] Directory index forbidden by Options directive: /www/vhosts/railsapps/siteA/
        [Sun Jun 19 12:19:08 2011] [error] [client <IP>] File does not exist: /www/vhosts/railsapps/favicon.ico

    Read the article

  • pxe boot fails with message: no DEFAULT or UI configuration directive found

    - by spockaroo
    I am trying to PXE-boot a machine (the client), and in the process I am setting up a TFTP server that this machine can boot off. The server runs Ubuntu 10.10, and on it I have set up DHCP, DNS, NFS, and tftpd-hpa servers. All the servers/daemons start fine. I tested the TFTP server by using a TFTP client and downloading a file that the server directory hosts.

    My /etc/xinetd.d/tftp looks like this:

        service tftp
        {
            disable     = no
            socket_type = dgram
            wait        = yes
            user        = nobody
            server      = /usr/sbin/in.tftpd
            server_args = -v -s /var/lib/tftpboot
            only_from   = 10.1.0.0/24
            interface   = 10.1.0.1
        }

    My /etc/default/tftpd-hpa looks like this:

        RUN_DAEMON="yes"
        OPTIONS="-l -s /var/lib/tftpboot"
        TFTP_USERNAME="tftp"
        TFTP_DIRECTORY="/var/lib/tftpboot"
        TFTP_ADDRESS="0.0.0.0:69"
        TFTP_OPTIONS="--secure"

    My /var/lib/tftpboot/ directory looks like this:

        initrd.img-2.6.35-25-generic-pae
        vmlinuz-2.6.35-25-generic-pae
        pxelinux.0
        pxelinux.cfg/
            default

    I set permissions with:

        sudo chmod 644 /var/lib/tftpboot/pxelinux.cfg/default
        chmod 755 /var/lib/tftpboot/initrd.img-2.6.35-25-generic-pae
        chmod 755 /var/lib/tftpboot/vmlinuz-2.6.35-25-generic-pae

    /var/lib/tftpboot/pxelinux.cfg/default has the following contents:

        SERIAL 0 19200 0
        LABEL linux
            KERNEL vmlinuz-2.6.35-25-generic-pae
            APPEND root=/dev/nfs initrd=initrd.img-2.6.35-25-generic-pae nfsroot=10.1.0.1:/nfsroot ip=dhcp console=ttyS0,19200n8 rw

    I copied /var/lib/tftpboot/pxelinux.0 from /usr/lib/syslinux/ after installing the package syslinux-common. Also, for completeness, /etc/dhcp3/dhcpd.conf contains the following lines relevant to this interface:

        subnet 10.1.0.0 netmask 255.255.255.0 {
            range 10.1.0.100 10.1.0.240;
            option routers 10.1.0.1;
            option broadcast-address 10.1.0.255;
            option domain-name-servers 10.1.0.1;
            filename "pxelinux.0";
        }

    When I boot the client machine and watch the output over the serial port, I notice that the client requests an IP address from the server and gets it. Then I see TFTP being displayed, indicating that it is trying to connect to the TFTP server. This succeeds, and I see TFTP.|, which returns immediately, displaying the following message:

        PXELINUX 4.01 debian-20100714  Copyright (C) 1994-2010 H. Peter Anvin et al
        No DEFAULT or UI configuration directive found!
        boot:

    /var/log/syslog shows:

        Feb 20 15:24:05 ch in.tftpd[2821]: tftp: client does not accept options

    What option is it talking about in the syslog? I assume it is referring to OPTIONS or TFTP_OPTIONS, but what am I doing wrong?

    Read the article

  • IIS7 dynamic_compression_not_success Reason 12

    - by Peter Oehlert
    So, I'm a bit of an IIS7 n00b, but I've used most of the old IIS systems going back to 3. I'm trying to turn on dynamic compression and it's working, mostly. It doesn't work for my ADO.NET Data Service (Astoria) requests, batched or not. I found the FREB tracing, which was really helpful, and what I come up with for unbatched requests is that it returns reason code 12, NO_MATCHING_CONTENT_TYPE. OK, so I don't have the matching MIME type specified; that's easy. Except this is what I have in my web.config (which I think is correct, but maybe not):

        <httpCompression dynamicCompressionDisableCpuUsage="100"
                         dynamicCompressionEnableCpuUsage="100"
                         noCompressionForHttp10="false"
                         noCompressionForProxies="false"
                         noCompressionForRange="false"
                         sendCacheHeaders="true"
                         staticCompressionDisableCpuUsage="100"
                         staticCompressionEnableCpuUsage="100">
          <dynamicTypes>
            <clear/>
            <add mimeType="*/*" enabled="true" />
          </dynamicTypes>
          <staticTypes>
            <clear/>
            <add mimeType="*/*" enabled="true" />
          </staticTypes>
        </httpCompression>
        <urlCompression doDynamicCompression="true" doStaticCompression="true" dynamicCompressionBeforeCache="false" />

    Now I think this means it should compress any request that includes the Accept-Encoding: gzip header. I'd love to know what others might think here. My Fiddler trace:

        GET /SecurityDataService.svc/GetCurrentAccount HTTP/1.1
        Accept-Charset: UTF-8
        Accept-Language: en-us
        dataserviceversion: 1.0;Silverlight
        Accept: application/atom+xml,application/xml
        maxdataserviceversion: 1.0;Silverlight
        Referer: http://sdev03/apptestpage.aspx
        Accept-Encoding: gzip, deflate
        User-Agent: Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.0; WOW64; Trident/4.0; SLCC1; .NET CLR 2.0.50727; .NET CLR 3.5.21022; .NET CLR 3.5.30729; InfoPath.2; .NET CLR 3.0.30729; OfficeLiveConnector.1.4; OfficeLivePatch.1.3)
        Host: sdev03
        Connection: Keep-Alive
        Cookie: .ASPXAUTH=<snip>

        HTTP/1.1 200 OK
        Cache-Control: no-cache
        Content-Type: application/atom+xml;charset=utf-8
        Server: Microsoft-IIS/7.0
        DataServiceVersion: 1.0;
        X-AspNet-Version: 2.0.50727
        X-Powered-By: ASP.NET
        Date: Mon, 22 Mar 2010 22:29:06 GMT
        Content-Length: 2726

        <?xml version="1.0" encoding="utf-8" standalone="yes"?>
        *** <snip> removed ***
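    Not part of the original question, but a quick way to check from script whether a given same-origin response actually came back compressed is to inspect the Content-Encoding response header. A small hypothetical sketch, reusing the URL from the trace above and assuming jQuery is loaded on the page:

        // Hypothetical check: request the data service and log the
        // Content-Encoding header of the response. For a same-origin XHR
        // this is typically "gzip" when dynamic compression fired, and
        // empty/null when it did not.
        $.ajax({
            url: "/SecurityDataService.svc/GetCurrentAccount",
            dataType: "xml",
            complete: function (xhr) {
                console.log("Content-Encoding: " + xhr.getResponseHeader("Content-Encoding"));
            }
        });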

    Read the article

  • /server-status shows over 240 requests like "OPTIONS * HTTP/1.0" 200 - "-" "Apache (internal dummy connection)"

    - by Stefan Lasiewski
    Some details:

        Webserver: Apache/2.2.13 (FreeBSD) mod_ssl/2.2.13 OpenSSL/0.9.8e
        OS: FreeBSD 7.2-RELEASE (running in a FreeBSD jail)

    I believe I use the Apache prefork MPM (I run the default for FreeBSD) with the default value for MaxClients (256). I have enabled mod_status with "ExtendedStatus On". When I view /server-status, I see a handful of regular requests. I also see over 240 requests from the localhost, like these:

        37-0 - 0/0/1 . 0.00 1510 0 0.0 0.00 0.00 127.0.0.2 www.example.gov OPTIONS * HTTP/1.0
        38-0 - 0/0/1 . 0.00 1509 0 0.0 0.00 0.00 127.0.0.2 www.example.gov OPTIONS * HTTP/1.0
        39-0 - 0/0/3 . 0.00 1482 0 0.0 0.00 0.00 127.0.0.2 www.example.gov OPTIONS * HTTP/1.0
        40-0 - 0/0/6 . 0.00 1445 0 0.0 0.00 0.00 127.0.0.2 www.example.gov OPTIONS * HTTP/1.0

    I also saw about 2417 requests yesterday from the localhost, like these:

        Apr 14 11:16:40 192.168.16.127 httpd[431]: www.example.gov 127.0.0.2 - - [15/Apr/2010:11:16:40 -0700] "OPTIONS * HTTP/1.0" 200 - "-" "Apache (internal dummy connection)"

    The page at http://wiki.apache.org/httpd/InternalDummyConnection says "These requests are perfectly normal and you do not, in general, need to worry about them", but I'm not so sure. Why are there over 240 of these? Are these active connections? If I have "MaxClients 256" and over 240 of these connections, it seems that my webserver is dangerously close to running out of available connections. It also seems like Apache should only need a handful of these internal dummy connections. We actually had two unexplained outages last night, and I am wondering if these internal dummy connections caused us to run out of available connections.

    UPDATE 2010/04/16

    It is 8 hours later. The /server-status page still shows 243 lines which say "www.example.gov OPTIONS *". I believe these connections are not active: the server is mostly idle (1 request currently being processed, 9 idle workers), and there are only 18 active httpd processes on the Unix host. If these connections are not active, why do they show up under /server-status? I would have expected them to expire a few minutes after they were initialized.

    Read the article

< Previous Page | 558 559 560 561 562 563 564 565 566 567 568 569  | Next Page >