Search Results

Search found 36871 results on 1475 pages for 'installed applications'.

Page 34/1475 | < Previous Page | 30 31 32 33 34 35 36 37 38 39 40 41  | Next Page >

  • Office Web Applications officially announced by Microsoft, the free online version of Office

    Update, May 17, 2010: The Office Web Applications have been officially announced by Microsoft. The free online version of Office 2010 will compete with Google Docs. On June 15, the free online versions of Word, Excel, PowerPoint and OneNote will become available to everyone. The announcement was made officially by Chris Capossela, a Microsoft executive, at a gala in New York last week. That date also corresponds to the launch of the full desktop version of the Microsoft Office 2010 suite. To access the Office Web Apps, users will need a Windows Live account and will have to accept advertising....

    Read the article

  • Cognizant: committed to Oracle Fusion Applications and Oracle Cloud

    - by Richard Lefebvre
    Cognizant is a Global System Integrator strongly committed to Oracle Fusion Applications and Oracle Cloud, offering fixed-scope implementations. In this short video, you can learn more about Cognizant's strategy, experience and offerings. Cognizant is a Platinum Partner specialized in several Oracle Fusion Cloud Service areas.

    Read the article

  • Track updated/inserted entities in LINQ to SQL applications

    - by nikolaosk
    In this post I would like to discuss in further detail the change tracking of entities in LINQ to SQL applications. I would like to show you how the DataContext object keeps track of all the items that are updated, deleted or inserted in the underlying data store. If you want to have a look at my other post about LINQ to SQL and transactions, click here. I am going to demonstrate this with a hands-on example. I assume that you have access to a version of SQL Server and the Northwind database....(read more)
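
    LINQ to SQL itself is C# territory, but the pattern the post describes is the classic unit-of-work. As a hedged, language-neutral illustration (written in Java here, and not the actual DataContext API), the tracker below records inserts, updates and deletes and replays them on submit:

        import java.util.ArrayList;
        import java.util.List;

        // Illustration only: a unit-of-work tracker in the spirit of
        // what DataContext does internally for changed entities.
        public class UnitOfWorkSketch<T> {
            private final List<T> inserted = new ArrayList<>();
            private final List<T> updated  = new ArrayList<>();
            private final List<T> deleted  = new ArrayList<>();

            public void registerInsert(T entity) { inserted.add(entity); }
            public void registerUpdate(T entity) { updated.add(entity); }
            public void registerDelete(T entity) { deleted.add(entity); }

            // Analogous to DataContext.SubmitChanges(): replay the tracked
            // changes against the underlying data store in one batch.
            public void submitChanges() {
                inserted.forEach(e -> System.out.println("INSERT " + e));
                updated.forEach(e -> System.out.println("UPDATE " + e));
                deleted.forEach(e -> System.out.println("DELETE " + e));
                inserted.clear(); updated.clear(); deleted.clear();
            }
        }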

    Read the article

  • Developing applications for iPhone, BlackBerry and J2ME in ASP.Net

    Hi, I would like to introduce a new mobile application framework, iPFaces, for developing native, form-oriented applications. The aim of the solution is to screen the programmer completely from the mobile platform itself and to transfer the entire application logic to the central application server level. Developers with experience in one of the supported Web technologies (ASP.Net, Java, and PHP) may start working with iPFaces virtually immediately. For more information, see the project...

    Read the article

  • Microsoft launches new initiatives to help developers build their applications and their businesses

    Microsoft is launching new initiatives to help developers build their applications and their businesses. At its back-to-school event, Microsoft France reviewed all the events it offers, and will be offering, around technological innovation and development. These initiatives target everyone from the very young, with a digital classroom, to seasoned professionals, with Master Classes (Dev Camps, accelerators, etc.). Since there are so many of them, a recap seemed worthwhile. Let's start with the youngest. This year, Microsoft...

    Read the article

  • Installing Silverlight applications without the browser involved

    One of the features we are introducing in Silverlight 4 is a silent install mechanism for out-of-browser applications. Currently every out-of-browser application (trusted or not) starts from an in-browser mechanism. In some instances, where you want to deploy the app via managed desktop software or perhaps via CD-ROM, you don't want to have to tell the user to start on an HTML page first. Now I'm not going to write here about the merits of why you might want to do this other than to point out what...

    Read the article

  • Quickly And Easily Download And Install Applications With InstallPad

    InstallPad is one of those products that I ran across some time back and had forgotten about until recently. If you have not heard of it yourself, stay tuned, because it can greatly assist you when it comes to the area of software installations. If you stop and think about it, how many [...]

    Read the article

  • Oracle Solaris Preflight Applications Checker 11.2 now available

    - by CarylTakvorian-Oracle
    ISV Engineering is happy to announce the release of the latest version of our Solaris Preflight Checker tool supporting Solaris 11.2, which is now available for download. The Solaris Preflight Checker enables a developer to determine the Oracle Solaris 11.2 readiness of an application by analyzing a working application on Oracle Solaris 10. A successful check with this tool will be a strong indicator that an application will run unmodified on the latest Oracle Solaris 11. This release includes: an updated symbol database, which will help migration from Solaris 10 to Solaris 11.2; kernel binary and source scanners that now detect usage of data structures changed between Solaris 10 and Solaris 11.2; an application analyzer, which looks for usage of specific Solaris features and recommends better ways of implementing the same on Solaris 11.2 (e.g. suitability of high-performance libraries shipped with Solaris, crypto offload for Java and C based applications, etc.); and bug fixes.

    Read the article

  • Ubuntu 14.04 LTS AMD64 randomly uninstalls packages / applications

    - by Jacob Lindeen
    I am having a strange issue. Occasionally when installing new packages, I will get a system crash notification, which I report. The environment then becomes unstable: Unity crashes, all open windows lose their title bars, and the main launcher shuts down. I usually have to run the command unity --replace to restore functionality, but ultimately end up having to reboot. Upon booting back up, I find all of my user-installed packages are gone and I have to reinstall them via apt. Please tell me someone else is having this issue, because this is the second system and third install I have had this issue on.

    Read the article

  • SOLVED: Breaking parent web.config dependencies in sub applications

    This article explains how to implement a sub application, such as a blog, in your website without experiencing dependency issues. A common problem developers experience is their sub applications accidentally inheriting the requirements of the parent website. This is actually by design, but read on if it is causing problems in your site. Scenario: This problem has caught me out a couple of times so far, but usually with enough of a gap between occurrences that it had become just a fuzzy memory....

    Read the article

  • How to programmatically get the list of installed programs

    - by rajivpradeep
    Hi, I am creating a program which first checks whether a particular program has been installed or not. If it's installed, the program continues to execute other code; if it's not installed, it installs the application and then proceeds to execute the other code. How do I check programmatically in VC++ whether the application has been installed or not?
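
    The snippet above ends without an answer. A common approach on Windows is to look for the application's entry under the registry key HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall; in VC++ this is typically done with RegOpenKeyEx/RegEnumKeyEx. Below is a hedged Java sketch of the same idea that shells out to the stock reg query tool; the program name and the string matching are illustrative assumptions, not a tested recipe:

        import java.io.BufferedReader;
        import java.io.InputStreamReader;

        // Sketch only: scans the DisplayName values under the Uninstall
        // registry key via "reg query" to see if a program is installed.
        public class InstalledProgramCheck {

            static boolean isInstalled(String programName) throws Exception {
                Process p = new ProcessBuilder(
                        "reg", "query",
                        "HKLM\\SOFTWARE\\Microsoft\\Windows\\CurrentVersion\\Uninstall",
                        "/s", "/v", "DisplayName").start();
                try (BufferedReader r = new BufferedReader(
                        new InputStreamReader(p.getInputStream()))) {
                    String line;
                    while ((line = r.readLine()) != null) {
                        // matching lines look like: DisplayName REG_SZ <name>
                        if (line.contains("DisplayName") && line.contains(programName)) {
                            return true;
                        }
                    }
                }
                return false;
            }

            public static void main(String[] args) throws Exception {
                System.out.println(isInstalled("Mozilla Firefox")); // placeholder name
            }
        }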

    Read the article

  • apt-get install fuse - MAKEDEV not installed, skipping device node creation

    - by holms
    This happened with the command apt-get dist-upgrade to upgrade to Debian jessie, after which I tried to remove fuse and install it again. Same error:

    root@msgapp:/dev# apt-get install fuse
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    The following NEW packages will be installed:
      fuse
    0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
    Need to get 0 B/69.9 kB of archives.
    After this operation, 191 kB of additional disk space will be used.
    Selecting previously unselected package fuse.
    (Reading database ... 39354 files and directories currently installed.)
    Preparing to unpack .../fuse_2.9.3-10_amd64.deb ...
    Unpacking fuse (2.9.3-10) ...
    Processing triggers for man-db (2.6.7.1-1) ...
    Setting up fuse (2.9.3-10) ...
    MAKEDEV not installed, skipping device node creation.
    device node not found
    dpkg: error processing package fuse (--configure):
     subprocess installed post-installation script returned error exit status 2
    Errors were encountered while processing:
     fuse
    E: Sub-process /usr/bin/dpkg returned an error code (1)

    UPDATE: Reinstalling makedev gives another problem:

    root@msgapp:/dev# apt-get install makedev
    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    The following NEW packages will be installed:
      makedev
    0 upgraded, 1 newly installed, 0 to remove and 0 not upgraded.
    Need to get 0 B/42.6 kB of archives.
    After this operation, 129 kB of additional disk space will be used.
    Selecting previously unselected package makedev.
    (Reading database ... 39347 files and directories currently installed.)
    Preparing to unpack .../makedev_2.3.1-93_all.deb ...
    Unpacking makedev (2.3.1-93) ...
    Processing triggers for man-db (2.6.7.1-1) ...
    Setting up makedev (2.3.1-93) ...
    /run/udev or .udevdb or .udev presence implies active udev. Aborting MAKEDEV invocation.
    /run/udev or .udevdb or .udev presence implies active udev. Aborting MAKEDEV invocation.
    /run/udev or .udevdb or .udev presence implies active udev. Aborting MAKEDEV invocation.

    There's a ticket raised, and their fix doesn't give any result:

    root@msgapp:/dev# cd /dev && ./MAKEDEV fuse
    /run/udev or .udevdb or .udev presence implies active udev. Aborting MAKEDEV invocation.

    Read the article

  • Teeing Off With Chris Leone at OpenWorld 2012

    - by Kathryn Perry
    A guest post by Chris Leone, Senior Vice President, Oracle Applications Development. Monday morning in downtown San Francisco - lots of sunshine, plenty of traffic, and sidewalks chock-full of people with fresh faces and blister-free feet. Let the week of Oracle OpenWorld begin! For a great Applications start, Chris Leone packed the house with his Fusion Applications overview session - he covered strategy, scope, roadmaps, and customer successes. Fusion Apps, the world's best SaaS suite, is built on 100 percent standards. Chris talked about its information-driven user experience, its innovative design, and the choice of deployment. People can run Fusion in the cloud, in a managed/hosted environment, or on premise - or they can use a combination of these three models. About seventy percent of our customers go with SaaS. Release 5 of Fusion Apps will become available soon. The cadence of releases will be three times a year. The key drivers are to accelerate business success (no rip and replace) and to simplify business processes. Chris told the audience that organic Fusion is the centerpiece of our cloud solutions, rounded out with acquired offerings such as Taleo Recruiting and RightNow Customer Service. From the cloud solutions, customers can expect real-time and predictive BI, social capabilities, choice of deployment, and more productivity because of a next-generation UX called FUSE. Chris's demo showed a super-easy new UI that touts self-service navigation. We'll blog about FUSE in the very near future. Chris said the next 365 days of Fusion Apps would include more localization, more industries, more power, more mobile, and more configurability. The audience was challenged to think hard about how Fusion could be part of their three-to-five year plans. Chris set up a great opportunity for you to follow up with your customers as they explore the possibilities.

    Read the article

  • Kubuntu muon package manager stops working

    - by aseed
    I have Kubuntu. Today, after updating, the muon package manager got stuck at 64%, so I closed it. After that, when I try to update, reinstall or install software, the manager gets stuck. So how can I reinstall the muon package manager from the terminal? I tried sudo apt-get install muon and I get this message:

    Reading package lists... Done
    Building dependency tree
    Reading state information... Done
    muon is already the newest version.
    You might want to run 'apt-get -f install' to correct these:
    The following packages have unmet dependencies:
     libopencv-dev : Depends: libopencv-core-dev (= 2.3.1-4ppa1) but it is not going to be installed
                     Depends: libopencv-ml-dev (= 2.3.1-4ppa1) but it is not going to be installed
                     Depends: libopencv-imgproc-dev (= 2.3.1-4ppa1) but it is not going to be installed
                     Depends: libopencv-video-dev (= 2.3.1-4ppa1) but it is not going to be installed
                     Depends: libopencv-objdetect-dev (= 2.3.1-4ppa1) but it is not going to be installed
                     Depends: libopencv-gpu-dev (= 2.3.1-4ppa1) but it is not going to be installed
                     Depends: libopencv-highgui-dev (= 2.3.1-4ppa1) but it is not going to be installed
                     Depends: libopencv-calib3d-dev (= 2.3.1-4ppa1) but it is not going to be installed
                     Depends: libopencv-flann-dev (= 2.3.1-4ppa1) but it is not going to be installed
                     Depends: libopencv-features2d-dev (= 2.3.1-4ppa1) but it is not going to be installed
                     Depends: libopencv-legacy-dev (= 2.3.1-4ppa1) but it is not going to be installed
                     Depends: libopencv-contrib-dev (= 2.3.1-4ppa1) but it is not going to be installed
    E: Unmet dependencies. Try 'apt-get -f install' with no packages (or specify a solution).

    So what to do? I need to reinstall it because it is not working.

    ~$ sudo dpkg --configure -a
    dpkg: dependency problems prevent configuration of libopencv-dev:
     libopencv-dev depends on libopencv-core-dev (= 2.3.1-4ppa1); however:
      Package libopencv-core-dev is not installed.
     libopencv-dev depends on libopencv-ml-dev (= 2.3.1-4ppa1); however:
      Package libopencv-ml-dev is not installed.
     libopencv-dev depends on libopencv-imgproc-dev (= 2.3.1-4ppa1); however:
      Package libopencv-imgproc-dev is not installed.
     libopencv-dev depends on libopencv-video-dev (= 2.3.1-4ppa1); however:
      Package libopencv-video-dev is not installed.
     libopencv-dev depends on libopencv-objdetect-dev (= 2.3.1-4ppa1); however:
      Package libopencv-objdetect-dev is not installed.
     libopencv-dev depends on libopencv-gpu-dev (= 2.3.1-4ppa1); however:
      Package libopencv-gpu-dev is not installed.
     libopencv-dev depends on libopencv-highgui-dev (= 2.3.1-4ppa1); however:
      Package libopencv-highgui-dev is not installed.
     libopencv-dev depends on libopencv-calib3d-dev (= 2.3.1-4ppa1); however:
      Package libopencv-calib3d-dev is not installed.
     libopencv-dev depends on libopencv-flann-dev (= 2.3.1-4ppa1); however:
      Package libopencv-flann-dev is not installed.
     libopencv-dev depends on libopencv-features2d-dev (= 2.3.1-4ppa1); however:
      Package libopencv-features2d-dev is not installed.
     libopencv-dev depends on libopencv-legacy-dev (= 2.3.1-4ppa1); however:
      Package libopencv-legacy-dev is not installed.
     libopencv-dev depends on libopencv-contrib-dev (= 2.3.1-4ppa1); however:
      Package libopencv-contrib-dev is not installed.
    dpkg: error processing libopencv-dev (--configure):
     dependency problems - leaving unconfigured
    Errors were encountered while processing:
     libopencv-dev

    I also tried:

    sudo apt-get install -f
    sudo dpkg --configure -a

    and still the same problem... I think I'm getting this problem because of updating Kubuntu today.

    Read the article

  • Why Your ERP System Isn't Ready for the Next Evolution of the Enterprise

    - by ken.pulverman
    ERP has been the backbone of enterprise software. The data held in your ERP system is the core of most companies. Efficiencies gained through accounting and resource allocation via ERP software have literally saved companies trillions of dollars. Not only does everything seem to be fine with your ERP system, you haven't had to touch it in years. Why aren't you ready for what comes next? Well, judging by the growth rates in the space (Oracle posted only a 3% growth rate, while SAP showed a 12% decline), there hasn't been much modernization going on, just a little replacement activity. If you are like most companies, your ERP system is connected to a proprietary middleware solution that only effectively talks with a handful of other systems you might have acquired from the same vendor. Connecting your legacy system through proprietary middleware is expensive and brittle, and if you are like most companies, you were only willing to pay an SI so much before you said "enough." So your ERP is working. It's humming along. You might not be able to get Order to Promise information when you take orders in your call center, but there are workarounds that work just fine. So what's the problem? The problem is that you built your business around your ERP core, and now there is such pressure to innovate your business processes to keep up that you need a whole new slew of modern apps, and you need ERP data to be accessible from everywhere. Every time you change a sales territory, a comp plan, or a benefits provider, your ERP system - literally the economic brain of your business - needs to know what's going on. And this giant need to access and provide information to your ERP is only growing. What makes matters even more challenging is that apps today come in every flavor under the Sun™: SaaS, cloud, managed, hybrid, outsourced, composite... and they all have different integration protocols. The only easy way to get ahead of all this is to modernize the way you connect and run your applications. Unlike the middleware solutions of yesteryear, modern middleware is effectively the operating system of the enterprise. In the same way that you rely on Apple, Microsoft, and Google to find a video driver for your 23" monitor or to ensure that Word or Keynote runs, modern middleware takes care of intra-application connectivity and process execution. It effectively allows you to take ERP out of the middle while ensuring connectivity to your vital data for anything you want to do. The diagram below reflects that change. In this model, the hegemony of ERP is over. It too has to become a stealthy modern app to help you quickly adapt to business changes while managing vital information. And through modern middleware it will connect to everything. So yes, ERP as we've known it is dead - but long live ERP as a connected application member of the modern enterprise. I want to thank Andrew Zoldan, Group Vice President, Oracle Manufacturing Industries Business Unit, for introducing me to how some of his biggest customers have benefited by modernizing their applications infrastructure and making ERP a connected application. by John Burke, Group Vice President, Applications Business Unit

    Read the article

  • SQL SERVER – Faster SQL Server Databases and Applications – Power and Control with SafePeak Caching Options

    - by Pinal Dave
    Update: This blog post is written based on SafePeak, which is available for free download. Today, I'd like to examine more closely one of my preferred technologies for accelerating SQL Server databases, SafePeak. SafePeak's software provides a variety of advanced data caching options, techniques and tools to accelerate the performance and scalability of SQL Server databases and applications. I'd like to look more closely at some of these options, as some of these capabilities could help you address lagging database and application performance on your systems.

    To better understand the available options, it is best to start by understanding the difference between the usual "Basic Caching" and SafePeak's "Dynamic Caching".

    Basic Caching: Basic Caching (or the stale and static cache) is the ability to put the results from a query into cache for a certain period of time. It is based on TTL, or Time-to-Live, and is designed to stay in cache no matter what happens to the data. For example, although the actual data can be modified by DML commands (update/insert/delete), the cache will still hold the same obsolete query data. In other words, Basic Caching is really a static/stale cache. As you can tell, this approach has its limitations.

    Dynamic Caching: Dynamic Caching (or the non-stale cache) is the ability to put the results from a query into cache while maintaining transaction awareness in the cache, looking for possible data modifications. The modifications can come as a result of DML commands (update/insert/delete), indirect modifications due to triggers on other tables, executions of stored procedures with internal DML commands, or complex cases of stored procedures with multiple levels of internal stored-procedure logic. When data modification commands arrive, the caching system identifies the related cache items and evicts them from cache immediately. In the dynamic caching option the TTL setting still exists, although its importance is reduced, since the main factor for cache invalidation (or cache eviction) becomes the actual data update commands.

    Now that we have a basic understanding of the differences between "basic" and "dynamic" caching, let's dive in deeper.

    SafePeak: A comprehensive and versatile caching platform. SafePeak comes with a wide range of caching options. Some of SafePeak's caching options are automated, while others require manual configuration. Together they provide a complete solution for IT and data managers to reach excellent performance acceleration and application scalability for a wide range of business cases and applications:

    - Automated caching of SQL queries: Fully or semi-automated caching of all "read" SQL queries, containing any types of data, including Blobs, XMLs and Texts as well as all other standard data types. SafePeak automatically analyzes the incoming queries and categorizes them into SQL Patterns, identifying directly and indirectly accessed tables, views, functions and stored procedures;
    - Automated caching of stored procedures: Fully or semi-automated caching of all "read" stored procedures, including procedures with complex sub-procedure logic as well as procedures with complex dynamic SQL code. All procedures are analyzed in advance by SafePeak's Metadata-Learning process, and their SQL schemas are parsed - resulting in a full understanding of the underlying code and object dependencies (tables, views, functions, sub-procedures), enabling automated or semi-automated (manually reviewed and activated by a mouse-click) cache activation, with full understanding of the transaction logic for real-time cache invalidation;
    - Transaction-aware cache: Automated cache awareness for SQL transactions (SQL and in-procs);
    - Dynamic SQL caching: Procedures with dynamic SQL are pre-parsed, enabling easy cache configuration, eliminating SQL Server parsing load and delivering high response-time value even in the most complicated use-cases;
    - Fully automated caching: SQL Patterns (including SQL queries and stored procedures) that are categorized by SafePeak as "read and deterministic" are automatically activated for caching;
    - Semi-automated caching: SQL Patterns categorized as "read and non-deterministic" are patterns of SQL queries and stored procedures that contain references to non-deterministic functions, like getdate(). Such SQL Patterns are reviewed by the SafePeak administrator, and usually most of them are activated manually for caching (point-and-click activation);
    - Fully dynamic caching: Automated detection of all dependent tables in each SQL Pattern, with automated real-time eviction of the relevant cache items in the event of "write" commands (a DML or a stored procedure) to one of the relevant tables. This is the default setting;
    - Semi-dynamic caching: A manual cache configuration option that reduces the sensitivity of specific SQL Patterns to "write" commands to certain tables/views. An optimization technique relevant for cases when the query data is either known to be static (like archived order details), or when the application's sensitivity to fresh data is not critical and the data can be stale for a short period of time (gaining better performance and reduced load);
    - Scheduled cache eviction: A manual cache configuration option enabling scheduled SQL Pattern cache eviction at certain time(s) during the day. A very useful optimization technique when (for example) certain SQL Patterns can be cached but are time-sensitive. Example: "select customers whose birthday is today", an SQL with the getdate() function, which can and should be cached, but whose data stays relevant only until 00:00 (midnight);
    - Parsing exceptions management: Stored procedures that were not fully parsed by SafePeak (due to overly complex dynamic SQL or unfamiliar syntax) are marked as "Dynamic Objects" with the highest transaction-safety settings (such as full global cache eviction, DDL Check = lock cache and check for schema changes, and more). The SafePeak solution points the user to the Dynamic Objects that are important for cache effectiveness and provides an easy configuration interface, allowing you to improve cache hits and reduce global cache evictions. Usually this is the first configuration done in a deployment;
    - Overriding settings of stored procedures: Override the settings of stored procedures (or other object types) for cache optimization. For example, if a stored procedure SP1 has an "insert" into table T1, it will not be allowed to be cached. However, it is possible that T1 is just a "logging or instrumentation" table left by developers. By overriding the settings, a user can allow caching of the problematic stored procedure;
    - Advanced cache warm-up: Creating an XML-based list of queries and stored procedures (with lists of parameters) for periodic automated pre-fetching and caching. An advanced tool allowing you to handle rarer but very performance-sensitive queries and pre-fetch them into cache, delivering high performance for users' data access;
    - Configuration driven by deep SQL analytics: All SQL queries are continuously logged and analyzed, providing users with deep SQL analytics and performance monitoring. Reduce troubleshooting from days to minutes with a heat-map of database objects and SQL Patterns. The performance-driven configuration helps you focus on the most important settings, the ones that bring you the highest performance gains. SafePeak SQL Analytics allows continuous performance monitoring and analysis, and easy identification of bottlenecks in both real-time and historical data;
    - Cloud ready: Available for instant deployment on Amazon Web Services (AWS).

    As you can see, there are many ways to configure SafePeak's SQL Server database and application acceleration caching technology to best fit a lot of situations. If you're not familiar with their technology, they offer free-trial software you can download that comes with a free "help session" to help get you started. You can access the free trial here. Also, SafePeak is available for use on the Amazon cloud. Reference: Pinal Dave (http://blog.sqlauthority.com). Filed under: PostADay, SQL, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
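
    To make the basic-vs-dynamic distinction above concrete, here is a minimal Java sketch of the two eviction policies. This is an illustration only, not SafePeak's actual API (SafePeak works as a transparent proxy and needs no application code):

        import java.util.Map;
        import java.util.Set;
        import java.util.concurrent.ConcurrentHashMap;

        // Illustration only: contrasts TTL-based ("basic") caching with
        // write-invalidated ("dynamic") caching as described above.
        public class QueryCacheSketch {
            private static class Entry {
                final Object result;
                final long expiresAt;
                final Set<String> tables; // tables the cached query depends on
                Entry(Object result, long ttlMillis, Set<String> tables) {
                    this.result = result;
                    this.expiresAt = System.currentTimeMillis() + ttlMillis;
                    this.tables = tables;
                }
            }

            private final Map<String, Entry> cache = new ConcurrentHashMap<>();

            // Basic caching: an entry lives until its TTL passes, even if stale.
            public Object get(String sql) {
                Entry e = cache.get(sql);
                if (e == null || System.currentTimeMillis() > e.expiresAt) {
                    cache.remove(sql);
                    return null; // caller runs the real query and calls put()
                }
                return e.result;
            }

            public void put(String sql, Object result, long ttlMillis, Set<String> tables) {
                cache.put(sql, new Entry(result, ttlMillis, tables));
            }

            // Dynamic caching: a DML command evicts every entry that depends
            // on the modified table, so readers never see stale data.
            public void onWrite(String modifiedTable) {
                cache.values().removeIf(e -> e.tables.contains(modifiedTable));
            }
        }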

    Read the article

  • Integrating Coherence & Java EE 6 Applications using ActiveCache

    - by Ricardo Ferreira
    OK, so you are a developer and are starting a new Java EE 6 application using the most wonderful features of the Java EE platform, like Enterprise JavaBeans, JavaServer Faces, CDI, JPA and other cool technologies. And your architecture needs to hold pieces of data in distributed caches to improve the application's performance, scalability and reliability? If this is the scenario you are currently facing, maybe you should look closely at the solutions provided by Oracle WebLogic Server. Oracle has integrated WebLogic Server and its champion data caching technology, Oracle Coherence. This seamless integration between these two products provides a comprehensive environment for developing applications without the complexity of extra Java code to manage the cache as a dependency, since Oracle provides a DI ("Dependency Injection") mechanism for Coherence - the same DI mechanism available in standard Java EE applications. This feature is called ActiveCache. In this article, I will show you how to configure ActiveCache in WebLogic and in your Java EE application.

    Configuring WebLogic to manage Coherence

    Before you start changing your application to use Coherence, you need to configure your Coherence distributed cache. The good news is, you can manage all this stuff without writing a single line of XML or even Java. This configuration can be done entirely in the WebLogic administration console. The first thing to do is set up a Coherence cluster. A Coherence cluster is a set of Coherence JVMs configured to form one single view of the cache. This means that you can add or remove members of the cluster without the client application (the application that produces or consumes data from the cache) knowing about the changes. This concept allows your solution to scale out without changing the application server JVMs: you can grow your application in the data grid layer only.

    To start the configuration, you need to configure a machine that points to the server on which you want to execute the Coherence JVMs. WebLogic Server allows you to do this very easily in the Administration Console. In this example, I will call the machine "coherence-server". Remember that for the machine concept to work, you need to ensure that the NodeManager is running on the target server that the machine points to. The NodeManager executable can be found in <WLS_HOME>/server/bin/startNodeManager.sh.

    The next thing to do is to configure a Coherence cluster. In the WebLogic administration console, go to Environment > Coherence Clusters and click "New". Call this Coherence cluster "my-coherence-cluster". Click Next. Specify a valid cluster address and port; the Coherence members will communicate with each other through this address and port. Our Coherence cluster is now configured.

    Now it is time to configure the Coherence members and add them to this cluster. In the WebLogic administration console, go to Environment > Coherence Servers and click "New". In the "Name" field, enter "coh-server-1". In the "Machine" field, associate this Coherence server with the machine "coherence-server". In the "Cluster" field, associate this Coherence server with the cluster named "my-coherence-cluster". Click "Finish". Start the Coherence server using the "Control" tab of the WebLogic administration console. This will instruct WebLogic to start a new Coherence JVM on the target machine, which should join the pre-defined Coherence cluster.

    Configuring your Java EE Application to Access Coherence

    Now let's move on to the fun part of the configuration. The first thing to do is to tell your Java EE application which Coherence cluster to join. Oracle has updated the WebLogic Server deployment descriptors, so you will not have to change your code or the containers' deployment descriptors like application.xml, ejb-jar.xml or web.xml. In this example, I will show you how to enable DI ("Dependency Injection") of a Coherence cache in a Servlet 3.0 component. In the WEB-INF/weblogic.xml deployment descriptor, put the following metadata:

        <?xml version="1.0" encoding="UTF-8"?>
        <wls:weblogic-web-app
            xmlns:wls="http://xmlns.oracle.com/weblogic/weblogic-web-app"
            xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
            xsi:schemaLocation="http://java.sun.com/xml/ns/javaee
                http://java.sun.com/xml/ns/javaee/web-app_2_5.xsd
                http://xmlns.oracle.com/weblogic/weblogic-web-app
                http://xmlns.oracle.com/weblogic/weblogic-web-app/1.4/weblogic-web-app.xsd">
            <wls:context-root>myWebApp</wls:context-root>
            <wls:coherence-cluster-ref>
                <wls:coherence-cluster-name>my-coherence-cluster</wls:coherence-cluster-name>
            </wls:coherence-cluster-ref>
        </wls:weblogic-web-app>

    As you can see, using the "coherence-cluster-name" tag, we are informing our Java EE application that it should join "my-coherence-cluster" when it loads in the web container. Without this information, the application will not be able to access the predefined Coherence cluster; it would form its own Coherence cluster without any members. So never forget to put this information in.

    Now put the coherence.jar and active-cache-1.0.jar dependencies in your WEB-INF/lib application classpath. You need to deploy these dependencies so ActiveCache can automatically take care of the Coherence cluster join phase. These dependencies can be found in the following locations:

    - <WLS_HOME>/common/deployable-libraries/active-cache-1.0.jar
    - <COHERENCE_HOME>/lib/coherence.jar

    Finally, you need to write the access code for the Coherence cache in your Servlet. In the following example, we have a Servlet 3.0 component that accesses a Coherence cache named "transactions" and prints to the browser output the content (the ammount property) of one specific transaction:

        package com.oracle.coherence.demo.activecache;

        import java.io.IOException;
        import javax.annotation.Resource;
        import javax.servlet.ServletException;
        import javax.servlet.annotation.WebServlet;
        import javax.servlet.http.HttpServlet;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;
        import com.tangosol.net.NamedCache;

        @WebServlet("/demo/specificTransaction")
        public class TransactionServletExample extends HttpServlet {

            @Resource(mappedName = "transactions")
            NamedCache transactions;

            protected void doGet(HttpServletRequest request, HttpServletResponse response)
                    throws ServletException, IOException {
                int transId = Integer.parseInt(request.getParameter("transId"));
                Transaction transaction = (Transaction) transactions.get(transId);
                response.getWriter().println("<center>" + transaction.getAmmount() + "</center>");
            }
        }

    That's it! No more configuration is necessary, and you are all set to start producing data to, and getting data from, Coherence. As you can see in the example code, the Coherence cache is treated as a normal dependency in the Java EE container. The magic happens behind the scenes when ActiveCache allows your application to join the defined Coherence cluster.

    The most interesting thing about this approach is that no matter which type of Coherence cache you are using (Distributed, Partitioned, Replicated, WAN-Remote), for the client application it is just a simple attribute member of type com.tangosol.net.NamedCache, and it is all managed by the Java EE container as a dependency. This means that if you inject the same dependency (the Coherence cache named "transactions") into another Java EE component (a JSF managed bean, a stateless EJB), the cache will be the same. Cool, isn't it? Thanks to the CDI technology, we can extend the same support to components that are not Java EE standards, like simple POJOs. This means that you are not forced to use only Servlets, EJBs or JSF in order to inject Coherence caches: you can take the same approach for regular POJOs created by you and managed by lightweight containers like Spring or Seam.
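
    To illustrate that last point, here is a hedged sketch (not from the original article) that injects the same "transactions" cache into a stateless EJB. The Transaction class and its setter are assumed from the servlet example above:

        import javax.annotation.Resource;
        import javax.ejb.Stateless;
        import com.tangosol.net.NamedCache;

        // Sketch only: the same cache resource as the servlet, injected into an EJB.
        @Stateless
        public class TransactionAuditBean {

            // ActiveCache resolves this to the same "transactions" NamedCache
            // used by TransactionServletExample above.
            @Resource(mappedName = "transactions")
            NamedCache transactions;

            public void adjustAmmount(int transId, double newAmmount) {
                // Transaction and its accessors are assumed from the article's example
                Transaction t = (Transaction) transactions.get(transId);
                t.setAmmount(newAmmount);
                transactions.put(transId, t); // write the updated entry back to the grid
            }
        }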

    Read the article

  • WebSocket Applications using Java: JSR 356 Early Draft Now Available (TOTD #183)

    - by arungupta
    WebSocket provides a full-duplex and bi-directional communication protocol over a single TCP connection. JSR 356 is defining a standard API for creating WebSocket applications in the Java EE 7 platform. This Tip Of The Day (TOTD) provides an introduction to WebSocket and shows how the JSR is evolving to support the programming model.

    First, a little primer on WebSocket! WebSocket is a combination of the IETF RFC 6455 protocol and the W3C JavaScript API (still a Candidate Recommendation). The protocol defines an opening handshake and data transfer. The API enables Web pages to use the WebSocket protocol for two-way communication with the remote host. Unlike HTTP, there is no need to create a new TCP connection and send a chock-full of headers for every message exchange between client and server. The WebSocket protocol defines basic message framing, layered over TCP. Once the initial handshake happens using HTTP Upgrade, the client and server can send messages to each other, each independently of the other. There are no pre-defined message exchange patterns of request/response or one-way between client and server; these need to be explicitly defined over the basic protocol. The communication between client and server is pretty symmetric, but there are two differences: a client initiates a connection to a server that is listening for a WebSocket request, and a client connects to one server using a URI, whereas a server may listen to requests from multiple clients on the same URI. Other than these two differences, the client and server behave symmetrically after the opening handshake. In that sense, they are considered "peers". After a successful handshake, clients and servers transfer data back and forth in conceptual units referred to as "messages". On the wire, a message is composed of one or more frames. Application frames carry payload intended for the application and can be text or binary data. Control frames carry data intended for protocol-level signaling.

    Now let's talk about the JSR! The Java API for WebSocket is being worked on as JSR 356 in the Java Community Process. It will define a standard API for building WebSocket applications and will provide support for: creating WebSocket Java components to handle bi-directional WebSocket conversations; initiating and intercepting WebSocket events; creation and consumption of WebSocket text and binary messages; the ability to define WebSocket protocols and content models for an application; configuration and management of WebSocket sessions, like timeouts, retries, cookies, and connection pooling; and specification of how WebSocket applications will work within the Java EE security model. Tyrus is the Reference Implementation for JSR 356 and is already integrated in GlassFish 4.0 promoted builds.

    And finally, some code! The API allows you to create WebSocket endpoints using annotations and interfaces. This TOTD shows a simple sample using annotations; a subsequent blog will show more advanced samples. A POJO can be converted to a WebSocket endpoint by specifying @WebSocketEndpoint and @WebSocketMessage:

        @WebSocketEndpoint(path="/hello")
        public class HelloBean {

            @WebSocketMessage
            public String sayHello(String name) {
                return "Hello " + name + "!";
            }
        }

    @WebSocketEndpoint marks this class as a WebSocket endpoint listening at the URI defined by the path attribute. @WebSocketMessage identifies the method that will receive the incoming WebSocket message. The first method parameter is injected with the payload of the incoming message. In this case it is assumed that the payload is text-based; it can also be of type byte[] when the payload is binary. A custom object may be used if the decoders attribute is specified in @WebSocketEndpoint; this attribute provides a list of classes that define how a custom object can be decoded. The method can also take an optional Session parameter, which is injected by the runtime and captures a conversation between two endpoints. The return type of the method can be String, byte[] or a custom object; the encoders attribute on @WebSocketEndpoint needs to define how a custom object can be encoded.

    The client side is an index.jsp with embedded JavaScript. The JSP body looks like:

        <div style="text-align: center;">
            <form action="">
                <input onclick="say_hello()" value="Say Hello" type="button">
                <input id="nameField" name="name" value="WebSocket" type="text"><br>
            </form>
        </div>
        <div id="output"></div>

    The code is relatively straightforward. It has an HTML form with a button that invokes the say_hello() method, and a text field named nameField. A div placeholder is available for displaying the output. Now, let's take a look at some JavaScript code:

        <script language="javascript" type="text/javascript">
            var wsUri = "ws://localhost:8080/HelloWebSocket/hello";
            var websocket = new WebSocket(wsUri);
            websocket.onopen = function(evt) { onOpen(evt) };
            websocket.onmessage = function(evt) { onMessage(evt) };
            websocket.onerror = function(evt) { onError(evt) };

            function init() {
                output = document.getElementById("output");
            }

            function say_hello() {
                websocket.send(nameField.value);
                writeToScreen("SENT: " + nameField.value);
            }

    This application is deployed as "HelloWebSocket.war" (download here) on GlassFish 4.0 promoted build 57, so the WebSocket endpoint is listening at "ws://localhost:8080/HelloWebSocket/hello". A new WebSocket connection is initiated by specifying the URI to connect to. The JavaScript API defines callback methods that are invoked when the connection is opened (onOpen), closed (onClose), an error is received (onError), or a message from the endpoint arrives (onMessage). The client API has several send methods that transmit data over the connection. This particular script sends text data in the say_hello method using nameField's value from the HTML shown earlier. Each click on the button sends the textbox content to the endpoint over a WebSocket connection and receives a response based on the implementation of the sayHello method shown above.

    How to test this out? Download the entire source project here, or just the WAR file. Download GlassFish 4.0 build 57 or later and unzip it. Start GlassFish with "asadmin start-domain". Deploy the WAR file with "asadmin deploy HelloWebSocket.war". Access the application at http://localhost:8080/HelloWebSocket/index.jsp. After clicking the "Say Hello" button, the response appears in the output div.

    Here are some references for you: WebSocket - Protocol and JavaScript API; JSR 356: Java API for WebSocket - Specification (Early Draft) and Implementation (already integrated in GlassFish 4 promoted builds). Subsequent blogs will discuss the following topics (not necessarily in that order): binary data as payload, custom payloads using encoder/decoder, error handling, interface-driven WebSocket endpoints, the Java client API, client and server configuration, security, subprotocols, extensions, other topics from the API, and capturing WebSocket on-the-wire messages.
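
    As a quick illustration of the byte[] support described above, here is a hedged sketch of a binary echo endpoint. It follows the early-draft annotations used in this post; the JSR 356 API, including its package names (omitted here), was still in flux at the time, so treat this as a sketch rather than a definitive API reference:

        // Sketch only: early-draft JSR 356 annotations; imports omitted
        // because the draft package names were still changing.
        @WebSocketEndpoint(path = "/echoBinary")
        public class BinaryEchoBean {

            // A byte[] parameter tells the runtime to inject binary
            // payloads instead of text, as described in the post.
            @WebSocketMessage
            public byte[] echo(byte[] payload) {
                return payload; // send the same binary frame back to the peer
            }
        }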

    Read the article

  • JSP Precompilation for ADF Applications

    - by Duncan Mills
    A question that comes up from time to time, particularly in relation to build automation, is how best to pre-compile the .jspx and .jsff files in an ADF application, thus ensuring that the app is ready to run as soon as it's installed into WebLogic. In the normal run of things, the first poor soul to hit a page pays the price and has to wait a little whilst the JSP is compiled into a servlet. Everyone else subsequently gets a free lunch. So it's a reasonable thing to want to do...

    Let Me List the Ways

    So forth to Google (other search engines are available)... which led me to a fairly old article on WLDJ - Removing Performance Bottlenecks Through JSP Precompilation. Technology-wise it's somewhat out of date, but the one good point it made is that it's really not very useful to try and use the precompile option in the weblogic.xml file. That's a really good observation - particularly if you're trying to integrate a pre-compile step into a Hudson Continuous Integration process. That same article mentioned an alternative approach for programmatic pre-compilation using weblogic.jspc. This seemed like a much more useful approach for a CI environment. However, weblogic.jspc is now obsoleted by weblogic.appc, so we'll use that instead. Thanks to Steve for the pointer there.

    And So To APPC

    APPC has documentation - always a great place to start - and supports usage both from Ant, via the wlappc task, and from the command line using the weblogic.appc command. In my testing I took the latter approach. Usage, as the documentation will show you, is superficially pretty simple. The nice thing here is that you can pass an existing EAR file (generated of course using OJDeploy) and that EAR will be updated in place with the freshly compiled servlet classes created from the JSPs. Appc takes care of all the unpacking, compiling and re-packing of the EAR for you. Neat. So we're done, right...? Not quite.

    The Devil is in the Detail

    OK, so I'm being overly dramatic, but it's not all plain sailing, so here's a short guide to using weblogic.appc to compile a simple ADF application without pain.

    Information You'll Need

    The following is based on the assumption that you have a stand-alone WLS install with the Application Development Runtime installed and a suitable ADF-enabled domain created. This could of course all be run off of a JDeveloper install as well.

    1. Your WebLogic home directory. Everything you need is relative to this, so make a note. In my case it's c:\builds\wls_ps4.

    2. Next, deploy your EAR as normal and have a peek inside it using your favourite zip management tool. First of all, look at the weblogic-application.xml inside the EAR's /META-INF directory. Have a look for any library references. Something like this:

        <library-ref>
            <library-name>adf.oracle.domain</library-name>
        </library-ref>

    Make a note of the library ref (adf.oracle.domain in this case); you'll need that in a second.

    3. Next, open the nested WAR file within the EAR and have a peek inside the weblogic.xml file in the /WEB-INF directory. Again, make a note of the library references.

    4. Now start WebLogic as per normal and run the WebLogic console app (e.g. http://localhost:7001/console). In the Domain Structure navigator, select Deployments.

    5. For each of the libraries you noted down, drill into the library definition and make a note of the .war, .ear or .jar that defines the library. For example, in my case adf.oracle.domain maps to "C:\ builds\ WLS_PS4\ oracle_common\ modules\ oracle. adf. model_11. 1. 1\ adf. oracle. domain. ear". Note the extra spaces that are salted throughout this string as it is displayed in the console - just to make it annoying, you'll have to strip these out.

    6. Finally, you'll need the location of the adfsharebean.jar. We need to pass this on the classpath for APPC so that the ADFConfigLifeCycleCallBack listener can be found. In a more complex app of your own you may need additional classpath entries as well.

    Now we're ready to go, and it's a simple matter of plugging the information we have gathered into the relevant command line arguments for the utility.

    A Simple CMD File to Run APPC

    Here's the stub .cmd file I'm using on Windows to run this:

        @echo off
        REM Stub weblogic.appc Runner
        setlocal
        set WLS_HOME=C:\builds\WLS_PS4
        set ADF_LIB_ROOT=%WLS_HOME%\oracle_common\modules
        set COMMON_LIB_ROOT=%WLS_HOME%\wlserver_10.3\common\deployable-libraries
        set ADF_WEBAPP=%ADF_LIB_ROOT%\oracle.adf.view_11.1.1\adf.oracle.domain.webapp.war
        set ADF_DOMAIN=%ADF_LIB_ROOT%\oracle.adf.model_11.1.1\adf.oracle.domain.ear
        set JSTL=%COMMON_LIB_ROOT%\jstl-1.2.war
        set JSF=%COMMON_LIB_ROOT%\jsf-1.2.war
        set ADF_SHARE=%ADF_LIB_ROOT%\oracle.adf.share_11.1.1\adfsharembean.jar

        REM Set up the WebLogic Environment so appc can be found
        call %WLS_HOME%\wlserver_10.3\server\bin\setWLSEnv.cmd
        CLS

        REM Now compile away!
        java weblogic.appc -verbose -library %ADF_WEBAPP%,%ADF_DOMAIN%,%JSTL%,%JSF% -classpath %ADF_SHARE% %1
        endlocal

    Running the above on a target ADF .ear file will zip through and create all of the relevant compiled classes inside your nested .war file in the \WEB-INF\classes\jsp_servlet\ directory (but don't take my word for it - run it and take a look!).

    And So...

    In the immortal words of the Pet Shop Boys, Was It Worth It? Well, here's where you'll have to do your own testing. In my case here, with a simple ADF application, pre-compilation shaved a non-scientific "3 Elephants" off of the initial page load time for the first access of each page. That's a pretty significant payback for such a simple step to add into your CI process, so why not give it a go.
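
    For a CI process that prefers invoking tools from code rather than a .cmd file, the same utility can in principle be driven from Java, since running "java weblogic.appc" implies a standard main entry point on the weblogic.appc class. A hedged sketch, assuming weblogic.jar is on the classpath (via setWLSEnv or equivalent) and using illustrative placeholder paths:

        // Sketch only: drives weblogic.appc programmatically. All paths are
        // placeholders; mirror the -library/-classpath values from the .cmd
        // file above for a real ADF application.
        public class PrecompileEar {
            public static void main(String[] args) throws Exception {
                weblogic.appc.main(new String[] {
                    "-verbose",
                    "-library", "path\\to\\adf.oracle.domain.webapp.war,path\\to\\adf.oracle.domain.ear",
                    "-classpath", "path\\to\\adfsharembean.jar",
                    "MyApp.ear"   // hypothetical target EAR
                });
            }
        }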

    Read the article

  • Booting from integrated RAID controller when another RAID controller is installed in a PCIe slot

    - by Antony Scott
    I have a GA-MA785GT-UD3H motherboard with Windows Server 2008 R2 installed on a RAID 1 using the on-board RAID controller. I have now installed a RocketRAID 2680 controller and set up a RAID 5 for all my data to be stored on. Unfortunately, I now cannot boot from the RAID 1 anymore; the PC is trying to boot from the RAID 5! Does anyone have any experience with this motherboard / RAID controller combination?

    Read the article

  • Debian keyring error: "No keyring installed"

    - by donatello
    I have a Debian Squeeze EC2 AMI. On booting up an instance with it and trying to install packages with apt-get, I get errors saying there is no keyring installed. Here is the error with apt-get update:

    root@ip:~# apt-get update
    Get:1 http://ftp.us.debian.org squeeze Release.gpg [1672 B]
    Ign http://ftp.us.debian.org/debian/ squeeze/contrib Translation-en
    Ign http://ftp.us.debian.org/debian/ squeeze/main Translation-en
    Ign http://ftp.us.debian.org/debian/ squeeze/non-free Translation-en
    Get:2 http://security.debian.org squeeze/updates Release.gpg [836 B]
    Ign http://security.debian.org/ squeeze/updates/contrib Translation-en
    Ign http://security.debian.org/ squeeze/updates/main Translation-en
    Hit http://ftp.us.debian.org squeeze Release
    Ign http://ftp.us.debian.org squeeze Release
    Ign http://security.debian.org/ squeeze/updates/non-free Translation-en
    Ign http://ftp.us.debian.org squeeze/main Sources/DiffIndex
    Get:3 http://security.debian.org squeeze/updates Release [86.9 kB]
    Ign http://security.debian.org squeeze/updates Release
    Ign http://ftp.us.debian.org squeeze/contrib Sources/DiffIndex
    Ign http://ftp.us.debian.org squeeze/non-free Sources/DiffIndex
    Ign http://ftp.us.debian.org squeeze/main amd64 Packages/DiffIndex
    Ign http://ftp.us.debian.org squeeze/contrib amd64 Packages/DiffIndex
    Ign http://ftp.us.debian.org squeeze/non-free amd64 Packages/DiffIndex
    Ign http://security.debian.org squeeze/updates/main Sources/DiffIndex
    Hit http://ftp.us.debian.org squeeze/main Sources
    Hit http://ftp.us.debian.org squeeze/contrib Sources
    Hit http://ftp.us.debian.org squeeze/non-free Sources
    Hit http://ftp.us.debian.org squeeze/main amd64 Packages
    Hit http://ftp.us.debian.org squeeze/contrib amd64 Packages
    Ign http://security.debian.org squeeze/updates/contrib Sources/DiffIndex
    Ign http://security.debian.org squeeze/updates/non-free Sources/DiffIndex
    Ign http://security.debian.org squeeze/updates/main amd64 Packages/DiffIndex
    Ign http://security.debian.org squeeze/updates/contrib amd64 Packages/DiffIndex
    Ign http://security.debian.org squeeze/updates/non-free amd64 Packages/DiffIndex
    Hit http://ftp.us.debian.org squeeze/non-free amd64 Packages
    Get:4 http://backports.debian.org squeeze-backports Release.gpg [836 B]
    Ign http://backports.debian.org/debian-backports/ squeeze-backports/main Translation-en
    Hit http://security.debian.org squeeze/updates/main Sources
    Hit http://security.debian.org squeeze/updates/contrib Sources
    Hit http://security.debian.org squeeze/updates/non-free Sources
    Hit http://security.debian.org squeeze/updates/main amd64 Packages
    Hit http://security.debian.org squeeze/updates/contrib amd64 Packages
    Hit http://security.debian.org squeeze/updates/non-free amd64 Packages
    Get:5 http://backports.debian.org squeeze-backports Release [77.6 kB]
    Ign http://backports.debian.org squeeze-backports Release
    Hit http://backports.debian.org squeeze-backports/main amd64 Packages/DiffIndex
    Hit http://backports.debian.org squeeze-backports/main amd64 Packages
    Fetched 3346 B in 0s (5298 B/s)
    Reading package lists... Done
    W: GPG error: http://ftp.us.debian.org squeeze Release: No keyring installed in /etc/apt/trusted.gpg.d/.
    W: GPG error: http://security.debian.org squeeze/updates Release: No keyring installed in /etc/apt/trusted.gpg.d/.
    W: GPG error: http://backports.debian.org squeeze-backports Release: No keyring installed in /etc/apt/trusted.gpg.d/.

    Googling around didn't really help me fix this problem. I tried installing the packages "debian-keyring" and "debian-archive-keyring", but the error does not go away. I'd like to avoid installing untrusted packages. Any help is appreciated! Why does this error happen, and where can I learn more?

    Read the article

  • No network connectivity for my CentOS 6 installed in VMware Fusion on Mac OS X Lion

    - by gilzero
    I installed CentOS 6.2 (CentOS-6.2-x86_64-minimal.iso) with VMware Fusion (version 4.1.2), not with Easy Install. I am using Mac OS X Lion and connect to the internet via WiFi. The installed CentOS does not have network connectivity. How may I configure it to connect to WiFi? Thanks for any help. Below is a screenshot of ifconfig (not reproduced here). In the Network Adapter settings, I tried both NAT and Bridged WiFi.

    Read the article

< Previous Page | 30 31 32 33 34 35 36 37 38 39 40 41  | Next Page >