Search Results

Search found 37607 results on 1505 pages for 'ms access 97'.


  • Accessing ADF Faces components that are read-only from JavaScript

    - by Frank Nimphius
    Almost as a note to myself, and to justify the time I spent on analyzing a problem, a quick note on what to watch out for when trying to access read-only ADF Faces components from JavaScript. Those who have tried JavaScript in ADF Faces probably know that you need to ensure the ADF Faces component is represented by a JavaScript object on the client. You do this either implicitly by adding an af:clientListener component (in case you want to listen for a component event) or explicitly by setting the ADF Faces component's clientComponent property to true. For the use case I looked at in JDeveloper 11g R1 (11.1.1.7) I needed to make an output text component clickable to call a JavaScript function in response. Though I added the af:clientListener tag to the component, I recognized that it also needed the clientComponent property set to true. Though I remember this as not being required in 11.1.1.6, I like the new behavior, as it helps prevent read-only components from firing client-side events unless you tell them to do so by setting the clientComponent property to true. Note: At the time of writing, JDeveloper 11.1.1.7 is not publicly available; I put the note in this blog as a reminder so that if you ever hit a similar challenge you know what to do.
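
    A minimal sketch of the markup described above (the component id and handler function name are invented for illustration):

      <af:outputText id="ot1" value="Click me" clientComponent="true">
        <af:clientListener method="handleOutputTextClick" type="click"/>
      </af:outputText>

    The method attribute names a JavaScript function loaded on the page, e.g. function handleOutputTextClick(evt) { ... }.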

    Read the article

  • Having a Proactive Patch Plan is the way to Go!

    - by user793553
    BUILDING A SUCCESSFUL PATCHING STRATEGY Make Patching Easy! Having a patching strategy for your E-Business Suite system is a great way to manage your system downtime, identify the proper resources needed to perform the necessary tasks, and familiarize yourself with the patching tools in EBS. Having a Proactive Patch Plan is the way to go! Proactive patching is a preventive measure that allows you to have a complete patching strategy when applying patches periodically. Oracle provides several tools to help you set the foundation for a solid and proactive patching strategy in Note 313.1 - "Patching & Maintenance Advisor: E-Business Suite 11i and R12". It details all the steps and tooling available for the patching strategy, along with the benefits. Among other things it covers the following: How to plan ahead for system downtime; Patching tools in E-Business Suite (AutoPatch, OUI, OPatch); How to identify patches (RUPs, EBS Family Packs, Critical Patch Updates, etc.); How to properly test your patching plan and move to production. Make sure you visit the new E-Business Patching Community! We encourage you to access the "E-Business Patching Community" prior to applying an E-Business Suite patch. Doing so will allow you to explore perspectives shared by industry peers, get real-world experiences with the patch, and benefit from known solutions and lessons learned. Additionally, Oracle Support engineers monitor discussion topics to help provide guidance and solutions for your E-Business Suite patching needs. This is a valuable opportunity to "Get Proactive" with the patching and maintenance of your E-Business Suite environment. Start now, and find fast, proactive resolutions before you begin. Related Articles: What's the Best Way to Patch an E-Business Suite Environment? Patch Wizard Utility

    Read the article

  • recent unreliable wireless connection

    - by gabkdlly
    Recently, my internet connection over wireless (via a Netgear KWGR614 router) has become unreliable, on both a Dell laptop running Ubuntu 10.04 and my desktop running Ubuntu 10.10. The problem does not seem to occur on a laptop running Windows Vista, nor on a desktop running Windows 7 (this machine is connected with an ethernet cable). The problem does not seem to occur on my Openmoko Freerunner (running Android 1.5), though I hardly ever use this device to connect over WLAN, so the problem may have just slipped by. On my main Ubuntu desktop, I have tried the following wireless devices: a Longshine PCI card (an old device with an RTL8180L chip), a D-Link DWL-510 PCI card (this device threw warnings in dmesg), and a USB device from MSI (US54EX). Usually my wireless network shows up in the network manager with normal signal strength, even when the connection speed is slow or the connection gets reset (asking me to click connect to re-authenticate my wireless connection). I have observed this problem with a Netgear KWGR614 router (with the manufacturer's firmware), as well as with a TP-LINK TL-WR741ND router running OpenWrt. Taking a look at my router's logs, I find many instances of the following line: Tuesday, 04 Jan 2011 03:53:01 [TCP SYN Flood][Deny access policy matched, dropping packet] I know that the Netgear router is susceptible to denial-of-service attacks, as I have previously been able to disrupt its operation by putting an nmap scan into a while loop. I use WEP or WPA to encrypt the wireless network. Is it possible that someone is jamming my signal?

    Read the article

  • eSTEP TechCast - December 2012

    - by uwes
    Dear partner, we are pleased to announce our next eSTEP TechCast on Thursday, 6th of December, and would be happy if you could join. Please see below the details for the next TechCast. Date and time: Thursday, 06. December 2012, 11:00 - 12:00 GMT (12:00 - 13:00 CET; 15:00 - 16:00 GST). Title: Innovations with Oracle Solaris Cluster 4. Abstract: Oracle Solaris Cluster 4.0 is the version of Solaris Cluster that runs with Oracle Solaris 11. In this webcast we will focus on the integration of the cluster software with the IPS packaging system of Solaris 11, which makes installing and updating the software much easier and much more reliable, especially with virtualization technologies involved. Our webcast will also reflect new versions of Oracle Solaris Cluster if they are announced in the meantime. Target audience: Tech Presales. Speaker: Hartmut Streppel. Call info: Call-in toll-free number: 08006948154 (United Kingdom); Call-in toll-free number: +44-2081181001 (United Kingdom); Show global numbers. Conference Code: 803 594 3; Security Passcode: 9876. Webex Info (Oracle Web Conference): Meeting Number: 255 760 510; Meeting Password: tech2011. Playback / Recording / Archive: The webcasts will be recorded and will be available shortly after the event in the eSTEP portal under the Events tab, where you can also find material from already delivered eSTEP TechCasts. Use your email address and PIN: eSTEP_2011 to get access. Feel free to have a look. We are happy to get your comments and feedback. Thanks and best regards, Partner HW Enablement EMEA

    Read the article

  • Modular Web App Network Architecture

    - by nairware
    Assuming that I am dealing with dedicated physical servers or VPSs, is it conceivable, and does it make sense, to have distinct servers set up with the following roles to host a web application? Reverse proxy; Web server; Application server; Database server. Specific points of interest: I am confused how to even separate the web and application servers. My understanding was that such 3-tier architectures were feasible. It is unclear to me whether the app server would reside directly between the web and database server, or whether the web server could directly interact with the database as well. The app server could either do the computational heavy lifting on behalf of the web server, or it could do the heavy lifting plus control all of the business logic (as implied in the diagram in the original post, thus denying the web server direct database access). I am also unsure what role the reverse proxy (e.g. nginx) could and should fulfill as a web server, given the above-mentioned setup. I know that nginx has web server features. But I do not know if it makes sense to have the reverse proxy be its own VPS, given that the web server would, in theory, be separate from the app server.
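
    As a rough sketch of the reverse-proxy role (addresses, ports, and paths here are invented for illustration, not from the original post), an nginx front end might forward dynamic requests to a separate app tier while serving static assets itself:

      # hypothetical /etc/nginx fragment
      upstream app_servers {
          server 10.0.0.20:8080;                  # application server VPS
      }
      server {
          listen 80;
          location /static/ {
              root /var/www;                      # proxy serves static assets directly
          }
          location / {
              proxy_pass http://app_servers;      # everything else goes to the app tier
              proxy_set_header Host $host;
              proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
          }
      }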

    Read the article

  • ASP.NET MVC 3 (C#) Software Architecture

    - by ryanzec
    I am starting on a relatively large and ambitious ASP.NET MVC 3 project and am just thinking about the best way to organize my code. The project is basically going to be a general management system that will be capable of supporting any type of management system, whether it be a blogging system, CMS, reservation system, wikis, forums, project management system, etc., each of them being just a separate 'module'. You can read more about it on my blog, posted here: http://www.ryanzec.com/index.php/blog/details/8 (forgive me, the style of the site kinda sucks). For those who don't want to read the long blog post, the basic idea is that the core system itself is nothing more than a users system with an admin interface to manage the users system. Then you just add on modules as you need them, and the module I will be creating first is a simple blog module to test it out before I move on to the big module, which is a project management system. Now I am just trying to think of the best way to structure this so that it is easy for users to add in their own modules but easy for me to update the core system without worrying about the user modifying the core code. I think the ideal way would be to have a number of core projects that the user is specifically told not to modify, otherwise the system may become unstable and future updates would not work. When the user wants to add in their own modules, they would just add in a new project (or multiple projects). The thing is, I am not sure that it is even possible to use multiple projects, all with their own controllers, Razor view templates, CSS, JavaScript, etc., in one web application. Ideally each module would have some of its own Razor view templates, CSS, JavaScript, and image files, and also need access to some of the core Razor view templates, CSS, JavaScript, and image files, which would be in a separate project. Is it possible to have one web application run off of controllers, Razor view templates, CSS, JavaScript, and image files that are stored in multiple projects? Is there a better way to structure this to allow the user to easily add in modules without having to modify the core code?
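
    One mechanism that seems relevant here (an assumption on my part, not from the blog post) is ASP.NET MVC Areas, which let a module register its own routes and controllers from a referenced assembly; a minimal C# sketch with an invented module name:

      // in a hypothetical Blog module class library referenced by the main web app
      using System.Web.Mvc;

      public class BlogAreaRegistration : AreaRegistration
      {
          public override string AreaName
          {
              get { return "Blog"; }
          }

          public override void RegisterArea(AreaRegistrationContext context)
          {
              // routes all /Blog/... URLs to controllers compiled into this project
              context.MapRoute(
                  "Blog_default",
                  "Blog/{controller}/{action}/{id}",
                  new { action = "Index", id = UrlParameter.Optional });
          }
      }

    Controllers in referenced assemblies are picked up once AreaRegistration.RegisterAllAreas() runs in Global.asax; shipping Razor views, CSS, and images from a class library typically needs extra tooling (e.g. precompiled views or embedded resources), so that part remains the harder half of the question.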

    Read the article

  • VMware Kernel Module Updater hangs on Ubuntu 13.04

    VMware Player has a nice auto-detection of kernel changes, and requests the user to compile the required modules in order to load them. This happens from time to time after a regular update of your system. Usually, the dialog of the VMware Kernel Module Updater pops up, asks for root access authentication, and completes the compilation; VMware Player or Workstation checks whether modules for the active kernel are available. In theory this is supposed to work flawlessly, but in reality there are pitfalls occasionally. With the recent upgrade to Ubuntu 13.04 Raring Ringtail and the latest kernel 3.8.0-21, the actual VMware Kernel Module Updater simply disappeared and the application wouldn't start as expected. When you launch VMware Player as super user (root), the dialog stalls like so (caption of the screenshot in the original post): "VMware Kernel Module Updater stalls while stopping the services". Prior to version 5.x of VMware Player or version 7.x of VMware Workstation you would run a command like: $ sudo vmware-config.pl to resolve the module version conflict, but this no longer works. Solution: Instead, you have to execute the following line in a terminal or console window: $ sudo vmware-modconfig --console --install-all Those switches are (as of writing this article) not documented in the output of the --help switch. But VMware has already documented this procedure in their knowledge base: VMware Workstation stops functioning after updating the kernel on a Linux host (1002411). Update: As of today I had the first kernel upgrade, to version 3.8.0-22, in Ubuntu 13.04. Don't even try it without vmware-modconfig...

    Read the article

  • Wirelessly Sync Photos From iPhone To Computer Using CameraSync

    - by Gopinath
    How do you upload photos captured on your iOS device to your computer? By connecting the device using a cable and then syncing up with an app? Ah, isn't that a boring way? Here comes CameraSync, an app that lets you wirelessly send your iOS device photos to Dropbox, so that you can access them on your computer irrespective of the platform (Windows, Mac, Linux). By the way, this app works in the background and syncs the files without disturbing you. You don't like Dropbox? CameraSync works with a variety of cloud services: Flickr, Amazon S3, iDisk, FTP and Box.net. If you are looking for a step-by-step guide on how to set up CameraSync for Dropbox, then check this post. CameraSync costs $1.99 and runs on iOS 4.0+ devices. CameraSync [iTunes App via Lifehacker] This article, Wirelessly Sync Photos From iPhone To Computer Using CameraSync, was originally published at Tech Dreams.

    Read the article

  • SEO with duplicate content

    - by user16831
    I have a nature photography site with multiple types of photo galleries. Each photo and associated caption on my site appears in several galleries. For instance, a photo of a goldfinch that was taken on a trip to New Mexico in 2008 will appear in the "goldfinch.php" gallery, in the "finches.php" gallery, and in the "New_Mexico_2008.php" gallery. This duplication is useful for my site visitors - User A may want to see goldfinch photos, whereas User B wants to see photos from New Mexico - but I am concerned about the SEO implications. The typical suggestions to deal with duplicate content, such as 301 redirects and canonical tags, probably won't work in this case, because the page content is substantially different (ranging from ~1% to ~90% duplication, depending on the specific example chosen). The obvious solution to me would be to edit robots.txt to only allow search engines to crawl one type of gallery - for instance, if they crawled only the galleries organized by species (e.g. goldfinch.php), all the photos on my site would be found exactly once. However, the Google content guidelines recommend against blocking crawler access to duplicate information. Should I go ahead and use robots.txt anyway? Or is there a better solution?
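
    For concreteness, the robots.txt variant being weighed would look something like this (gallery file names taken from the examples above; treat it as a sketch, not a recommendation):

      User-agent: *
      Disallow: /finches.php
      Disallow: /New_Mexico_2008.php
      # species galleries such as /goldfinch.php stay crawlable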

    Read the article

  • Setting up a shared media drive

    - by Sam Brightman
    I want a shared media drive to be transparently usable by all users, whilst also sticking to FHS and Ubuntu standards. The former takes priority if necessary. I currently mount it at /media/Stuff, but /media is supposed to be for external media, I believe. The main issue is setting permissions so that access to read and write to the drive can be granted to multiple users working within the same directories. InstallingANewHardDrive seems both slightly confused and not what I want. It claims that this sets ownership for the top-level directory (despite the recursion flag): sudo chown -R USERNAME:USERNAME /media/mynewdrive And that these will let multiple users create files and sub-directories but only delete their own: sudo chgrp plugdev /media/mynewdrive; sudo chmod g+w /media/mynewdrive; sudo chmod +t /media/mynewdrive However, the group-writable bit does not seem to get inherited, which is troublesome for keeping things organised (it prevents creation inside sub-folders originally made by another user). The sticky bit is probably also unwanted for the same reason, although currently it seems that userA (perhaps the owner of the mount point?) can delete userB's files, but not vice versa. This is fine, as long as userB can create files inside the directory of userA. So: What is the correct mount point? Is plugdev the correct group? Most importantly, how to set up permissions to maintain an organised media drive? I do not want to be running cron jobs to set permissions regularly!
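
    For what it's worth, the usual fix for the inheritance complaint (my assumption, not from the wiki page) is the setgid bit plus a default ACL, sketched here against the same path:

      sudo chgrp -R plugdev /media/mynewdrive
      sudo find /media/mynewdrive -type d -exec chmod g+ws {} +   # setgid on dirs: new files inherit the group
      sudo setfacl -d -m g::rwx /media/mynewdrive                 # default ACL: group-write survives each user's umask

    The default ACL needs a filesystem mounted with ACL support (ext3/ext4 typically allow it via the acl mount option).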

    Read the article

  • How to efficiently protect part of an application with a license

    - by Patrick
    I am working on an application that has many functional parts. When a customer buys the application, he buys the standard functionality, but he can also buy some additional elements of the application for an additional price. All of the elements are part of the same application executable. A license key is used to indicate which of the elements should be accessible in the application. Some of the elements can be easily disabled if the user didn't pay for them. These are typically the modules that you can access via the application's menu. However, some elements pose more problems: What if a part of the data model is related to an optional part? Do I build up these data structures in my application so the rest of my application can just assume they're always there? Or do I not build them, and add checks in the rest of my application? What if some optional part is still useful to perform some internal tasks, but I don't want to expose it to the user externally? What if the person responsible for marketing wants to make a standard part an optional part? Throughout my application I assume that that part is present, but if it becomes optional, I should add checks on it everywhere in the application. I have some ideas on how to solve some of the problems (e.g. interfaces with dual implementations: one working implementation, and one that is activated if the optional part is not activated; see the sketch below). Do you know of any patterns that can be used to solve this kind of problem? Or do you have any suggestions on how to handle this licensing problem? Thanks.
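
    The dual-implementation idea can be read as a null-object pattern; a minimal sketch in Java (all names invented; License.covers stands in for whatever the license-key check looks like):

      interface ReportingModule {
          void showReports();
          boolean isAvailable();
      }

      class RealReportingModule implements ReportingModule {
          public void showReports() { /* full, paid-for behaviour */ }
          public boolean isAvailable() { return true; }
      }

      class DisabledReportingModule implements ReportingModule {
          public void showReports() { /* no-op, or prompt to upgrade */ }
          public boolean isAvailable() { return false; }
      }

      class License {
          boolean covers(String feature) { /* parsed from the license key */ return false; }
      }

      class ModuleFactory {
          static ReportingModule reporting(License license) {
              return license.covers("reporting")
                      ? new RealReportingModule()
                      : new DisabledReportingModule();
          }
      }

    The rest of the application talks only to ReportingModule, so the "is this part licensed?" checks collapse into the factory instead of being scattered everywhere.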

    Read the article

  • Service layer coupling

    - by Justin
    I am working on writing a service layer for an order system in PHP. It's the typical scenario: you have an Order that can have multiple Line Items. So let's say a request is received to store a line item with pictures and comments. I might receive a JSON request such as { "type": "Bike", "color": "Red", "commentIds": [3193,3194], "attachmentIds": [123,413] } My idea was to have a Service_LineItem_Bike class that knows how to take the JSON data and store an entity for a bike. My question is, the Service_LineItem class now needs to fetch comments and file attachments, and store the relationships. Service_LineItem seems like it should interact with a Service_Comment and a Service_FileUpload. Should instances of these two other services be instantiated and passed to the Service_LineItem constructor, or set by getters and setters? Dependency injection seems like the right solution; allowing a service access to a 'service fetching helper' seems wrong, and this should stay at the application level. I am using Doctrine 2 as an ORM, and I can technically write a DQL query inside Service_LineItem to fetch the comments and file uploads necessary for the association, but this seems like it would be tighter coupling, rather than leaving this up to the right service object.
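
    A minimal sketch of the constructor-injection option in PHP (method names beyond the classes mentioned above are invented):

      class Service_LineItem_Bike
      {
          private $commentService;
          private $fileUploadService;

          // Dependencies are handed in by the application layer, never looked up.
          public function __construct(Service_Comment $commentService,
                                      Service_FileUpload $fileUploadService)
          {
              $this->commentService = $commentService;
              $this->fileUploadService = $fileUploadService;
          }

          public function createFromRequest(array $data)
          {
              $comments    = $this->commentService->findByIds($data['commentIds']);
              $attachments = $this->fileUploadService->findByIds($data['attachmentIds']);
              // ... build the bike line item entity and persist it with its associations
          }
      }

    Constructor injection (rather than setters) keeps the object valid from the moment it is built, which tends to suit stateless service classes.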

    Read the article

  • Hack Fest at Devoxx

    - by Yolande Poirier
    On November 11th and 12th, Devoxx attendees will get the chance to build a Java embedded application onsite. During the Raspberry Pi & Leap Motion hands-on labs on Monday and Tuesday mornings, you will learn about Raspberry Pi development with Java embedded, using Leap Motion and other sensors. The afternoons are hacking time on a project of your choice. You can get your inspiration from existing projects. You can also use their project source code and improve on already developed applications. The goal is for you to create something fun and innovative in only a couple of days, no matter your experience in embedded systems. We provide you with equipment like the Raspberry Pi, sensors, and Leap Motion. Thanks to Stephan Janssen for lending us 10 Leap Motions for the Hack Fest. The Raspberry Pi and sensors are pre-configured. You will access the sensors via a web address. You can build a project alone if you want. We also give you the opportunity to brainstorm ideas with other attendees and maybe build something more complex. You will get one-on-one help from top-notch coaches. Vinicius Senger has tons of experience with Java and the Raspberry Pi. He runs Java embedded challenges and gives training year round. Geert Bevin has contributed to many open source projects, and his latest venture is with the Leap Motion. Bruno Borges's expertise is in connecting backend logic with great interfaces. Yara Senger is a Java Champion and a great Java embedded mentor. Don't miss this opportunity! This is your chance to transform your idea into a Raspberry Pi or a Leap Motion application.

    Read the article

  • How do I find information on who links to my sites?

    - by bobdobbs
    I'm trying to figure out if there's a free way to get information on backlinks to my site. I've had Webmaster Tools and Google Analytics set up for years, but I can't find access to data about site backlinks in either toolset. Webmaster Tools, under 'Traffic' - 'Links to your site', gives me the same message for all of my sites: "No data available". I haven't been able to find anything in GA that gives any information on backlinks. I've heard of using "links:" as an operator in Google search, but for each of my sites this returns either zero or very few results, in cases where I know I have many backlinks. Most of the links simply aren't shown. My thinking is that Google maintains a graph of who links to my site, so I figured they might let me see it. But I can't figure out how. I've found this tool on a spammy website: http://www.backlinkwatch.com. It offers more data than Google on my backlinks, and offers more results in exchange for a paid subscription. The data it offers for free looks good, but the results are limited and the site has popups and obnoxious ads. So, in short: how do I get data on who links to me? Is there a free way?

    Read the article

  • Thoughts (on Windows Phone 7) from the MVP Summit

    - by Chris Williams
    Last week I packed off to Redmond, WA for my annual pilgrimage to Microsoft's MVP Summit. I'll spare you all the silly taunting about knowing stuff I can't talk about, etc., and just get to the point. I'm an XNA/DirectX MVP, an ASP Insider and a Languages (VB) Insider... so I actually had access to a pretty broad spectrum of information over the last week. Most of my time was focused on Windows Phone 7 related sessions, and while I can't dig deep into specifics, I can say that Microsoft is definitely not out of the fight for mobile. The things I saw tell me that Microsoft is listening and paying attention to feedback, looking at what works and what doesn't, and they are working their collective asses off to close the gap with Google and Apple. Anyone who has been in this industry for a while can tell you Microsoft does their best work when they are the underdog. They are currently behind, and have a lot of work ahead of them, but this is when they bring all their resources together to solve a problem. After the week I spent in Redmond, the feedback I heard from other MVPs, and the technological previews I saw... I feel confident in betting heavily on Microsoft to pull this off.

    Read the article

  • Providing SSH tunneling, what to think about when configuring Ubuntu Server

    - by bigbadonk420
    Recently I've considered, mostly as a pet project, setting up accounts for a closed group of users via SSH to my box, with the purpose of SSH tunneling things like web traffic - some of it for friends that live abroad, and perhaps also to help some people bypass national censorship. There are some things I imagine I need to do, such as: Disabling shell access by setting the shell to /bin/false or similar (see the sketch below). Getting some software that can track bandwidth usage on a per-user basis, historically. Making sure that each user can only use a certain amount of bandwidth. The reason I'm posting here to begin with is to look around and get some pointers regarding what kind of things I should read up on, as well as to hear if there are any software recommendations for doing what I'm trying to do. I already know a bit, since I've actually gotten SSH tunneling up and running; I just don't feel like letting it loose to other people without restrictions and some basic monitoring. I'm primarily trying to learn here, so if you think this is a Very Bad Idea (or if you have a better idea of how to do this) then by all means say so, but please include some information on how to do it :) (I'm also open to trying things like OpenVPN, but it seems really hard to set up; also, I've heard SSH more often works in locked-down environments.)
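
    A hedged sketch of the first item (the user name is invented; the Match keywords are standard sshd_config options):

      # create a tunnel-only account with no usable shell
      sudo adduser --shell /bin/false tunneluser

      # /etc/ssh/sshd_config: allow forwarding but nothing else for that user
      Match User tunneluser
          AllowTcpForwarding yes
          X11Forwarding no

    A user would then connect with something like ssh -N -D 1080 tunneluser@yourbox and point their browser at the local SOCKS proxy on port 1080.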

    Read the article

  • How is the impact of a requirement change on existing code determined?

    - by MainMa
    Hi, how do companies working on large projects evaluate the impact of a single modification on existing code? Since my question is probably not very clear, here's an example: Let's take a sample business application which deals with tasks. In the database, each task has a state, 0 being "Pending", ... 5 being "Finished". A new requirement adds a new state, between the 2nd and 3rd ones. It means that: A constraint on the values 0 - 5 in the database must be changed; The business layer and code contracts must be changed to add the new state; The data access layer must be changed to take into account that, for example, the state StateReady is now 6 instead of 5, etc.; The application must implement the new state visually, add new controls for it, new localized strings for tool-tips, etc. (see the sketch of the database part below). When an application was written recently by one developer, it's more or less easy to predict every change needed. On the other hand, when an application has been written over years by many people, no single person can anticipate every change immediately, without any investigation. So since this situation (such changes in requirements) is very frequent, I imagine there are already some clever techniques and ways to predict the impact. Are there any? Do you know any books which deal with this subject? Note: my question is not related to the How do you deal with changing requirements? question. In fact, I'm not interested in evaluating the cost of a change, but rather in ways to predict the parts of an application which will be affected by the change. What those changes will be and how difficult they are really doesn't matter for my question.
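
    To make the database part of that ripple concrete, a sketch in generic SQL (table and constraint names invented; DROP CONSTRAINT syntax varies by vendor):

      -- make room for the new state between 2 and 3
      ALTER TABLE tasks DROP CONSTRAINT chk_task_state;
      UPDATE tasks SET state = state + 1 WHERE state >= 3;   -- StateReady moves from 5 to 6
      ALTER TABLE tasks ADD CONSTRAINT chk_task_state CHECK (state BETWEEN 0 AND 6);

    Every layer that hard-coded the value 5 for StateReady has to follow this renumbering, which is exactly the ripple the question is about.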

    Read the article

  • Easily Add Program Shortcuts to the Desktop Context Menu in Windows 7

    - by Lori Kaufman
    If you use the Desktop context menu often, wouldn't it be useful if you could add program shortcuts to it so you can quickly access your favorite apps? We've shown you how to do this using a quick registry tweak, but there's an easier way. DeskIntegrator is a free, portable program that allows you to quickly and easily add applications to the Desktop context menu. It does not need to be installed. Extract the program files from the .zip file you downloaded (see the link at the end of the article) to a location on your hard drive. NOTE: This article shows you how to use DeskIntegrator in Windows 7, but we tested it in the Windows 8 Release Preview and it worked there as well. To use DeskIntegrator, you must run it as administrator. Right-click on the DeskIntegrator.exe file and select Run as administrator from the popup menu.

    Read the article

  • Best design for a "Command Executer" class

    - by Justin984
    Sorry for the vague title, I couldn't think of a way to condense the question. I am building an application that will run as a background service and intermittently collect statistics about the system it's running on. A second Android controller application will query the service over TCP/IP for those statistics. Currently, the background service has a TCP listener class that reads/writes bytes from a socket. When data is received, it raises an event to notify the service. The service takes the bytes, feeds them into a command parser to figure out what is being requested, and then passes the parsed command to a command executer class. When the service receives a "query statistics" command, it should return statistics over the TCP/IP connection. Currently, all of these classes are fully decoupled from each other. But in order for the command executer to return statistics, it will obviously need access to the socket somehow. For reasons I can't completely articulate, it feels wrong for the command executer to have a direct reference to the socket. I'm looking for strategies and/or design patterns I can use to return data over the socket while keeping the classes decoupled, if this is possible. Hopefully this makes sense; please let me know if I can include any info that would make the question easier to understand.
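
    One common shape for this (a sketch under invented names, in Java for neutrality) is to hand the executer a small response abstraction instead of the socket, so the transport stays at the listener boundary:

      // the executer depends on this interface, never on a socket
      interface ResponseWriter {
          void write(byte[] payload);
      }

      class CommandExecuter {
          void execute(String command, ResponseWriter out) {
              if ("query statistics".equals(command)) {
                  out.write(gatherStatistics());   // reply without knowing the transport
              }
          }
          private byte[] gatherStatistics() {
              return "uptime=42".getBytes();       // placeholder statistics payload
          }
      }

    The TCP listener implements ResponseWriter by writing to its own socket stream, so the executer can be unit-tested with a fake writer and never touches networking code.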

    Read the article

  • Work Execution in EAM

    - by Annemarie Provisero
    ADVISOR WEBCAST: Work Execution in EAM. PRODUCT FAMILY: Manufacturing Enterprise Asset Management. July 5, 2011 at 8 am PT, 9 am MT, 11 am ET. The purpose of this webcast is to discuss EAM Work Order Management. This one-hour session is ideal for Functional Users, System Administrators, Database Administrators, and Customers with a basic knowledge of EAM who raise or manage work orders and related processes. During this webcast, Zar will cover the various types of work orders and look at all the related activities associated with work orders, including setup, operations, tasks, work order transactions, relationships and planning. TOPICS WILL INCLUDE: Work Order Types (Routine, Planned Maintenance, Rebuild, Easy); Work Order statuses and other important setups; Operations and Tasks; Relationships; Work Order Transactions; Work Order Planning. A short live demonstration (only if applicable) and a question and answer period will be included. Oracle Advisor Webcasts are dedicated to building your awareness around our products and services. This session does not replace offerings from Oracle Global Support Services. Click here to register for this session. The above webcast is a service of the E-Business Suite Communities in My Oracle Support. For more information on other webcasts, please reference the Oracle Advisor Webcast Schedule. Click here to visit the E-Business Communities in My Oracle Support. Note that all links require access to My Oracle Support.

    Read the article

  • Hyper-V Manager version 6.2, an experience in virtual switch setup

    - by Kevin Shyr
    The version number of Hyper-V Manager is 6.2.9200.16384. This is what came with my Windows 8 work laptop (by enabling Windows features). The blogs I read indicated that I need an external switch for my guest OSes to access the internet, and an internal one for them to share folders with my host OS. I proceeded to create an external virtual switch (screenshot in the original post). After setting up the network adapters on the guest OS, I peeked into host OS networking, and saw that a Network Bridge was already created. GREAT! So I fired up my guest OS and, darn, no internet. Then I noticed that my host internet was gone, too. I looked further and found that even though I have a network bridge, no connection has the status "Bridged". Once I removed the bridge (by removing individual connections from the bridge; I know, weird, since none of them say "Bridged" in status), I re-selected the connections that I wanted and added them to the bridge to create a new network bridge. Once my wireless connection status showed "Bridged", I was able to get to the internet from my guest OS. Two things I noticed after I got internet for everyone (my host and guest OS): My network adapters in the host OS no longer show "Bridged", but everyone can still get to the internet. The virtual switch that I set up as "External" is now showing as "Internal", and I was able to create a shared folder between host and guest OS. This means I didn't have to create the other "Internal" virtual switch.

    Read the article

  • Centralizing a resource file among multiple projects in one solution (C#/WPF)

    - by MarkPearl
    One of the challenges one faces when doing multi-language support in WPF is when one has several projects in one solution (e.g. a business layer and a UI layer). Typically each project would have its own resource file, meaning if you have 3 projects in a solution you will have 3 resource files. For me this isn't an ideal solution, as you normally want to send the resource files to a translator, and the more resource files you have, the more fragmented the dictionary will be and the more complicated it will be for the translator. This can easily be overcome by creating a single project that just holds your translation resources and then exposing it to the other projects as a reference, as explained in the following steps. Step 1: Add a class library to your solution that will contain just the resource files. Your solution will now have an additional project (illustrated in the original post). Step 2: Reference this project from the other projects. Step 3: Move all the resources from the other resource files to the translation project's resource file. Step 4: Set the translation project's resource file's access modifier to public. Step 5: Reference all other projects to use the translation resource file instead of their local resource files. To do this in XAML you need to expose the project as a namespace at the top of the XAML file. Note that the example below is for a project called MaxCutLanguages; you need to put the correct project name in its place. xmlns:MaxCutLanguages="clr-namespace:MaxCutLanguages;assembly=MaxCutLanguages" And then in the actual XAML you need to replace any text with a reference to the resource file: <TextBlock Text="{x:Static MaxCutLanguages:Properties.Resources.HelloWorld}"/> End result: You can now delete all the resource files in the other projects, as you now have one centralized resource file.

    Read the article

  • eSTEP Newsletter for the technical EMEA partner community

    - by mseika
    We are pleased to present to you the first issue of the eSTEP Newsletter, which is dedicated to supporting the technical EMEA partner community by providing more information on what is going on within the corporation, technical news regarding hardware, events, and all the other important things which we think may be of interest to you. Invitation: STEP TechCast: Oracle Solaris 11 Express. Get an insight into how Oracle Solaris 11 Express has raised the bar on the innovation introduced in Oracle Solaris 10. Learn about the new integrated features, such as: network-based package management tools, improvements to built-in virtualization, a new virtualised network architecture, security enhancements, and file system evolution. Learn how Oracle Solaris 11 Express provides greatly decreased planned system downtime, performs a completely safe system upgrade, achieves an unprecedented level of flexibility for application consolidation, and provides the highest levels of security in your datacenter. Date and time: Thursday, 7. July 2011, 13:00 - 14:00 CEST. Speaker: Joost Pronk van Hoogeveen. Target audience: Tech Presales. Webcast coordinates: You will find the coordinates in the eSTEP portal under the Events tab. Use your email address and PIN: eSTEP_2011 to get access. We are happy to get your comments and feedback.

    Read the article

  • Notification framework for object lifecycle

    - by rlandster
    I am looking for an application, framework, or library that would help us with "object life-cycle management". There are many things that are created for users, departments, and services that, all too often, are left unmanaged. Some examples: user accounts, groups, SSL certificates, access rights, databases, software license provisionings, storage, list-serve accounts. These objects are created and managed by a wide variety of applications and systems. Typically, a user (person) requests (either explicitly or implicitly) one of these objects. A centralized management tool would help us manage such administration chores as: What objects does user X currently own/manage? Move the ownership of object P to user X; move all objects owned by user X (who has just been fired) to user Y. For all objects of type T that have expired, be sure the objects have been disabled or deleted by their provider. How many active (expired, about-to-expire) objects of type P are there? Send periodic notifications to all users who own active objects of type P, reminding them of what they own. There is a security alert for objects of type P; send a notification to all users who own these types of objects to take a specific remedial action. Delete or disable a set of objects based on expiration (or some other criteria). These objects are directly managed through their own applications (Active Directory, MySQL, file systems, etc.) and may even have their own notification systems, but I want to centralize this into an "object management system". The OMS should allow: association with an external identity provider that defines who the users and groups are (e.g., LDAP, Active Directory); creation of objects; association of an object with a specific user and/or group; association with an expiration date; flexible reporting, including letting users know what objects they currently own and their expiration dates; and integration with an external object "provider" via a plug-in. We could write something from scratch, but I am hoping there is something already out there that will help, either an entire application or a set of libraries that provide much of what is needed. Any ideas?

    Read the article

  • What are my choices for server side sandboxed scripting?

    - by alfa64
    I'm building a public website where users share data and scripts to run over some data. The scripts are run server-side in some sort of sandbox, without other interaction, in this cycle: my Perl program reads a user-made script from a database, adds the data to be processed into the script (i.e. a JSON document), then calls the interpreter; the interpreter returns the response (a JSON document or plain text), and I save it to the database with my Perl script. The scripts should be able to have some access to built-in functions added to the scripting language by myself, but nothing more. So I've stumbled upon node.js as a JavaScript interpreter, and an hour or so ago Google's V8 (does V8 make sense for this kind of thing?). CoffeeScript also came to my mind, since it looks nice and it's still JavaScript. I think JavaScript is widespread enough and more "sandboxeable", since it doesn't have OS calls or anything remotely insecure (I think). By the way, I'm writing the system in Perl, with PHP for the front end. To improve the question: I'm choosing JavaScript because I think it is secure and simple enough to implement with node.js, but what other alternatives are there for achieving this kind of task? Lua? Python? I just can't find information on how to run a sandboxed interpreter in a proper way.
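
    For the node.js route, the built-in vm module is the usual starting point; a sketch (the script and data are stand-ins, and note that vm on its own is not a hard security boundary, so an OS-level sandbox around the process is still advisable):

      const vm = require('vm');

      const userScript = 'output = input.map(function (n) { return n * 2; });'; // user-supplied code
      const sandbox = { input: [1, 2, 3], output: null };  // expose only data: no require, no process

      vm.createContext(sandbox);
      vm.runInContext(userScript, sandbox, { timeout: 1000 }); // kill runaway scripts

      console.log(JSON.stringify(sandbox.output)); // -> [2,4,6]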

    Read the article
