Search Results

Search found 11130 results on 446 pages for 'solaris 11'.


  • How to access files on a drive from an older system, mounted in a new system?

    - by David Thomas
    I've recently built a new system, after a rather large physical injury was sustained by my previous system (a precarious balance, and gravity, were not a happy mix). Surprisingly, the /home drive of that system appears to have more or less survived the trauma. However... I decided to use a fresh drive for the / (and swap) partitions, and another fresh drive for the new /home. Now that that's working, I decided to install the old /home drive (which until now I had assumed would be entirely dead and without capacity for use) into the new system to recover the files and data, so far as is possible.

    At this point I've run into a snag: I have no idea how to go about this (with Windows it was relatively easy: the new drive would get the next letter of the alphabet, and you'd go from there). With Disk Utility (System - Administration - Disk Utility) I've worked out which drive it is (/dev/sda), but clicking on 'mount' produces an error:

        1: helper failed with: mount: according to mtab, /dev/sdb1 is already mounted on /
        mount failed

    ...if it is mounted on / I can't see it. I'm also moderately confused by the error referring to /dev/sdb1 when the disk I selected is /dev/sda. Any and all insights would be incredibly welcome (I've already voted for Idea #9063: New internal hard drives default automount at Brainstorm).

    Edited in response to Roland's request for a screenshot of Disk Utility. Details (so far as I know them): the 40 GB disk is / and swap, the 1.0 TB Samsung is /home, and the 1.0 TB Hitachi is from the old system (and was the old /home drive). Output from sudo fdisk -l pasted below:

        Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
        255 heads, 63 sectors/track, 121601 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x000bef00

           Device Boot      Start         End      Blocks   Id  System
        /dev/sda1               1      121601   976760001   83  Linux

        Disk /dev/sdb: 40.0 GB, 40018599936 bytes
        255 heads, 63 sectors/track, 4865 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x00037652

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdb1   *           1        4742    38084608   83  Linux
        /dev/sdb2            4742        4866      993281    5  Extended
        /dev/sdb5            4742        4866      993280   82  Linux swap / Solaris

        Disk /dev/sdc: 1000.2 GB, 1000204886016 bytes
        255 heads, 63 sectors/track, 121601 cylinders
        Units = cylinders of 16065 * 512 = 8225280 bytes
        Sector size (logical/physical): 512 bytes / 512 bytes
        I/O size (minimum/optimal): 512 bytes / 512 bytes
        Disk identifier: 0x000e8d46

           Device Boot      Start         End      Blocks   Id  System
        /dev/sdc1               1      121602   976760832   83  Linux
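    For reference, a minimal command-line sketch of mounting the old partition by hand. The mount point name /mnt/oldhome is an arbitrary choice, and /dev/sda1 is taken from the fdisk output above; double-check the device name on your own system before running anything:

        # create a mount point and mount the old /home read-only to start
        sudo mkdir -p /mnt/oldhome
        sudo mount -o ro /dev/sda1 /mnt/oldhome

        # browse and copy what you need, then unmount
        ls -l /mnt/oldhome
        sudo umount /mnt/oldhome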

    Read the article

  • XBRL - Moving from Production to Consumption

    - by jmorourke
    Here's an update on what's new with XBRL and how it can actually benefit your organization rather than adding extra time and cost to financial reporting. On February 29th (leap day) of 2012 I attended the XBRL and Financial Analysis Technology Conference at Baruch College in NYC. The event, which attracted over 300 XBRL gurus and fans, was presented by XBRL US, The New York Society of Security Analysts' Improved Corporate Reporting Committee, and Baruch College's Robert Zicklin Center for Corporate Integrity. It featured keynotes from the U.S. Securities and Exchange Commission (SEC) and the CFA Institute, as well as panels covering alternative research tools and data, corporate reporting to stakeholders, and a demonstration of XBRL analysis tools. The program culminated in a presentation of the finalists and the winner of the $20,000 XBRL Challenge.

    Some of the key points made in the sessions:
    - The focus of XBRL tools is moving from production to consumption.
    - As of February 2012, over 9,000 companies are reporting in XBRL, with over 10 million facts filed to date.
    - XBRL taxonomy extensions have dropped from 27% to 11%, making comparisons easier.
    - The SEC reports that XBRL makes it easier to analyze disclosures and focus on accounting issues.
    - XBRL is helping standards-setters like the FASB speed their analysis of the impact of proposed accounting rule changes.
    - Companies like Thomson Reuters report that XBRL is helping speed the delivery of data to clients.

    The most interesting part of the program, though, was the session highlighting the 5 finalists in the XBRL Challenge competition and the winning solution. The XBRL Challenge was launched in 2011 as a means of spurring the development of more end-user tools to help with the consumption of XBRL-based financial information. Over an 8-month process handled by 5 judges, there were 84 registrants, 15 completed submissions, 5 finalists and one winner. All of the solutions are open-source tools and most of them focus on consuming XBRL-based data. The 5 finalists:
    - Advanced XBRL Processing from Oxide Solutions: XBRL viewer for taxonomies, filings and company data, with peer-comparison capabilities.
    - Arrelle: API for XBRL processes; supports SEC validations, RSS feeds to access filings, etc.
    - Calcbench: XBRL data-analysis tool that can be embedded in other web applications and can combine XBRL filings with real-time market data.
    - XBRL to XL: allows importing XBRL data into Microsoft Excel for analysis and comparisons; users start on the web and populate Excel with XBRL data.
    - XBurble: lets users search and view XBRL filings, export to Excel, and merge filings for comparison; includes a workflow interface.

    The winner of the $20,000 XBRL Challenge prize was Calcbench. More information about the XBRL Challenge and the finalists can be found at www.XBRLUS.org/challenge.

    XBRL for Sustainability Reporting: other recent news on the XBRL front was the announcement by the Global Reporting Initiative (GRI) of an XBRL taxonomy for sustainability reporting. This taxonomy was co-developed by the GRI and Deloitte and is designed to make the consumption of data found in sustainability reports much easier. Although there is no government mandate to file sustainability reports in XBRL format, organizations that use the GRI guidelines for sustainability reporting are encouraged to tag and submit their data voluntarily to the GRI, which will populate a database with sustainability reporting data and make it available to the public. For more information about this initiative, see the GRI web site: www.globalreporting.org.

    So how does all of this benefit corporate filers and investors? Since its introduction, the consensus in the market has been that XBRL mainly benefits the regulators and investment analysts who need to consume and analyze large volumes of financial data. But the emergence of more end-user tools for consuming and analyzing XBRL-based data, and the ability to perform quick comparisons of one company against its peers and competitors in an industry group, will soon extend those benefits to corporate finance staff as well as individual investors. This applies to financial results tagged in XBRL as well as to non-financial information such as sustainability reporting, which over the long term will likely be integrated with financial reporting. And as multiple regulators and agencies in a country adopt the XBRL standard for corporate filings, further benefits will accrue as companies are able to leverage one set of XBRL-based financial data for multiple regulatory filings.

    For more information about the latest developments in XBRL, check out the XBRL US or XBRL International web sites: www.xbrl.org, www.xbrlus.org. For more information about what Oracle is doing to support XBRL, here are some links:
    http://www.oracle.com/us/solutions/ent-performance-bi/disclosure-management-065892.html
    http://www.oracle.com/technetwork/database/features/xmldb/index-087631.html
    Feel free to contact me if you have any questions or need more information: [email protected]

    Read the article

  • How-to populate different select list content per table row

    - by frank.nimphius
    A frequent requirement posted on the OTN forum is to render the cells of a table column using instances of af:selectOneChoice, with each instance showing different list values. To implement this use case, the select list of the table column is populated dynamically from a managed bean for each row. The table's currently rendered row object is accessible in the managed bean through the #{row} expression, where "row" is the value of the table's var property.

        <af:table var="row">
          ...
          <af:column ...>
            <af:selectOneChoice ...>
              <f:selectItems value="#{browseBean.items}"/>
            </af:selectOneChoice>
          </af:column>
        </af:table>

    The browseBean managed bean referenced in the code snippet above has setItems and getItems methods defined, accessible from EL using the #{browseBean.items} expression. When the table renders, the var property variable (the #{row} reference) is filled with the data object displayed in the currently rendered table row. The managed bean's getItems method returns a List<SelectItem>, which is the model format expected by the f:selectItems tag to populate the af:selectOneChoice list.

        public void setItems(ArrayList<SelectItem> items) {}

        // this method is executed for each table row
        public ArrayList<SelectItem> getItems() {
            FacesContext fctx = FacesContext.getCurrentInstance();
            ELContext elctx = fctx.getELContext();
            ExpressionFactory efactory = fctx.getApplication().getExpressionFactory();
            ValueExpression ve = efactory.createValueExpression(elctx, "#{row}", Object.class);
            Row rw = (Row) ve.getValue(elctx);

            // use one of the row attributes to determine which list to query and
            // show in the current af:selectOneChoice list
            // ...
            ArrayList<SelectItem> alsi = new ArrayList<SelectItem>();
            for ( ... ) {
                SelectItem item = new SelectItem();
                item.setLabel(...);
                item.setValue(...);
                alsi.add(item);
            }
            return alsi;
        }

    For better performance, the ADF Faces table stamps its data rows. Stamping means that the cell renderer component (af:selectOneChoice in this example) is instantiated once for the column and then repeatedly reused to display the cell data of individual table rows. This, however, means you cannot refresh a single select-one-choice component in a table to change its list values; instead, the whole table needs to be refreshed, rerunning the managed bean's list query. Be aware that having individual list values per table row is an expensive operation that should be used only on small tables, with business services offering low-latency data fetching (e.g. ADF Business Components and EJB) and with server-side caching strategies for the queried data (e.g. storing queried list data in a managed bean in session scope).

    Read the article

  • Maximize Performance and Availability with Oracle Data Integration

    - by Tanu Sood
    Alert: Oracle is hosting the 12c Launch Webcast for Oracle Data Integration and Oracle GoldenGate on Tuesday, November 12 (tomorrow) to discuss the new capabilities in detail and share customer perspectives. Hear directly from customer experts and executives from SolarWorld Industries America, British Telecom and Rittman Mead, and get your questions answered live by product experts. Register for this complimentary webcast today and join in the discussion tomorrow.

    Author: Irem Radzik, Senior Principal Product Director, Oracle

    Organizations that want to use IT as a strategic point of differentiation prefer Oracle's complete application offering to drive better business performance and optimize their IT investments. These enterprise applications sit at the center of business operations, and they contain critical data that needs to be accessed continuously, as well as analyzed and acted upon in a timely manner. These systems also need to operate with high performance and availability, which means analytical functions should not degrade application performance, and even system maintenance and upgrades should not interrupt availability.

    Oracle's data integration products (Oracle Data Integrator, Oracle GoldenGate, and Oracle Enterprise Data Quality) provide the core foundation for bringing data from various business-critical systems together to gain a broader, unified view. As a more advanced offering than 3rd-party products, they facilitate real-time reporting for Oracle Applications without impacting application performance, and provide the ability to upgrade and maintain the system without taking downtime.

    Oracle GoldenGate is certified for Oracle Applications, including E-Business Suite, Siebel CRM, PeopleSoft, and JD Edwards, for moving transactional data in real time to a dedicated operational reporting environment. This solution allows application users to offload resource-heavy queries to the reporting instance(s), reducing CPU utilization, improving OLTP performance, and extending the lifetime of existing IT assets. In addition, having a dedicated reporting instance with up-to-the-second transactional data makes it possible to optimize the reporting environment and even decrease costs, as GoldenGate can move only the required data from expensive mainframe environments to cost-efficient open-system platforms.

    With its real-time data replication capabilities, GoldenGate is also certified to enable application upgrades and database/hardware/OS migrations without impacting business operations. GoldenGate is certified for Siebel CRM, Communications Billing and Revenue Management, and JD Edwards for supporting zero-downtime upgrades to the latest application version: it synchronizes a parallel, upgraded system with the old version in real time, enabling continuous operations during the process. Oracle GoldenGate is also certified for minimal-downtime database migrations for Oracle E-Business Suite and other key applications, and the solution minimizes risk by offering a failback option after the switchover to the new environment. Furthermore, Oracle GoldenGate's bidirectional active-active data replication is certified for Oracle ATG Web Commerce to enable geographic load balancing and high availability for ATG customers.

    For enabling better business insight, Oracle Data Integration products power Oracle BI Applications with high-performance bulk and real-time data integration. Oracle Data Integrator (ODI) is embedded in Oracle BI Applications version 11.1.1.7.1 and helps integrate data end-to-end across the full BI Applications architecture, supporting capabilities such as data lineage, which helps business users trace reports back to their sources. ODI is integrated with Oracle GoldenGate and gives Oracle BI Applications customers the option to use real-time transactional data in analytics, non-intrusively. By using Oracle GoldenGate with the latest release of Oracle BI Applications, organizations not only leverage fresh data in analytics but also eliminate the need for an ETL batch window and minimize the impact on OLTP systems. You can learn more about the latest 12c version of the Oracle Data Integration products in our upcoming launch webcast, and access the application-specific free resources in the new Data Integration for Oracle Applications Resource Center.

    Read the article

  • Rkhunter 122 suspect files; do I have a problem?

    - by user276166
    I am new to Ubuntu. I am using Xfce Ubuntu 14.04 LTS. I ran rkhunter a few weeks ago and only got a few warnings, which the forum said were normal. But this time rkhunter reported 122 warnings. Please advise.

        casey@Shaman:~$ sudo rkhunter -c
        [ Rootkit Hunter version 1.4.0 ]

        Checking system commands...
        Performing 'strings' command checks
          Checking 'strings' command [ OK ]
        Performing 'shared libraries' checks
          Checking for preloading variables [ None found ]
          Checking for preloaded libraries [ None found ]
          Checking LD_LIBRARY_PATH variable [ Not found ]
        Performing file properties checks
          Checking for prerequisites [ Warning ]
          /usr/sbin/adduser [ Warning ] /usr/sbin/chroot [ Warning ] /usr/sbin/cron [ OK ]
          /usr/sbin/groupadd [ Warning ] /usr/sbin/groupdel [ Warning ] /usr/sbin/groupmod [ Warning ]
          /usr/sbin/grpck [ Warning ] /usr/sbin/nologin [ Warning ] /usr/sbin/pwck [ Warning ]
          /usr/sbin/rsyslogd [ Warning ] /usr/sbin/useradd [ Warning ] /usr/sbin/userdel [ Warning ]
          /usr/sbin/usermod [ Warning ] /usr/sbin/vipw [ Warning ] /usr/bin/awk [ Warning ]
          /usr/bin/basename [ Warning ] /usr/bin/chattr [ Warning ] /usr/bin/cut [ Warning ]
          /usr/bin/diff [ Warning ] /usr/bin/dirname [ Warning ] /usr/bin/dpkg [ Warning ]
          /usr/bin/dpkg-query [ Warning ] /usr/bin/du [ Warning ] /usr/bin/env [ Warning ]
          /usr/bin/file [ Warning ] /usr/bin/find [ Warning ] /usr/bin/GET [ Warning ]
          /usr/bin/groups [ Warning ] /usr/bin/head [ Warning ] /usr/bin/id [ Warning ]
          /usr/bin/killall [ OK ] /usr/bin/last [ Warning ] /usr/bin/lastlog [ Warning ]
          /usr/bin/ldd [ Warning ] /usr/bin/less [ OK ] /usr/bin/locate [ OK ]
          /usr/bin/logger [ Warning ] /usr/bin/lsattr [ Warning ] /usr/bin/lsof [ OK ]
          /usr/bin/mail [ OK ] /usr/bin/md5sum [ Warning ] /usr/bin/mlocate [ OK ]
          /usr/bin/newgrp [ Warning ] /usr/bin/passwd [ Warning ] /usr/bin/perl [ Warning ]
          /usr/bin/pgrep [ Warning ] /usr/bin/pkill [ Warning ] /usr/bin/pstree [ OK ]
          /usr/bin/rkhunter [ OK ] /usr/bin/rpm [ Warning ] /usr/bin/runcon [ Warning ]
          /usr/bin/sha1sum [ Warning ] /usr/bin/sha224sum [ Warning ] /usr/bin/sha256sum [ Warning ]
          /usr/bin/sha384sum [ Warning ] /usr/bin/sha512sum [ Warning ] /usr/bin/size [ Warning ]
          /usr/bin/sort [ Warning ] /usr/bin/stat [ Warning ] /usr/bin/strace [ Warning ]
          /usr/bin/strings [ Warning ] /usr/bin/sudo [ Warning ] /usr/bin/tail [ Warning ]
          /usr/bin/test [ Warning ] /usr/bin/top [ Warning ] /usr/bin/touch [ Warning ]
          /usr/bin/tr [ Warning ] /usr/bin/uniq [ Warning ] /usr/bin/users [ Warning ]
          /usr/bin/vmstat [ Warning ] /usr/bin/w [ Warning ] /usr/bin/watch [ Warning ]
          /usr/bin/wc [ Warning ] /usr/bin/wget [ Warning ] /usr/bin/whatis [ Warning ]
          /usr/bin/whereis [ Warning ] /usr/bin/which [ OK ] /usr/bin/who [ Warning ]
          /usr/bin/whoami [ Warning ] /usr/bin/unhide.rb [ Warning ] /usr/bin/mawk [ Warning ]
          /usr/bin/lwp-request [ Warning ] /usr/bin/heirloom-mailx [ OK ] /usr/bin/w.procps [ Warning ]
          /sbin/depmod [ Warning ] /sbin/fsck [ Warning ] /sbin/ifconfig [ Warning ]
          /sbin/ifdown [ Warning ] /sbin/ifup [ Warning ] /sbin/init [ Warning ]
          /sbin/insmod [ Warning ] /sbin/ip [ Warning ] /sbin/lsmod [ Warning ]
          /sbin/modinfo [ Warning ] /sbin/modprobe [ Warning ] /sbin/rmmod [ Warning ]
          /sbin/route [ Warning ] /sbin/runlevel [ Warning ] /sbin/sulogin [ Warning ]
          /sbin/sysctl [ Warning ] /bin/bash [ Warning ] /bin/cat [ Warning ]
          /bin/chmod [ Warning ] /bin/chown [ Warning ] /bin/cp [ Warning ]
          /bin/date [ Warning ] /bin/df [ Warning ] /bin/dmesg [ Warning ]
          /bin/echo [ Warning ] /bin/ed [ OK ] /bin/egrep [ Warning ]
          /bin/fgrep [ Warning ] /bin/fuser [ OK ] /bin/grep [ Warning ]
          /bin/ip [ Warning ] /bin/kill [ Warning ] /bin/less [ OK ]
          /bin/login [ Warning ] /bin/ls [ Warning ] /bin/lsmod [ Warning ]
          /bin/mktemp [ Warning ] /bin/more [ Warning ] /bin/mount [ Warning ]
          /bin/mv [ Warning ] /bin/netstat [ Warning ] /bin/ping [ Warning ]
          /bin/ps [ Warning ] /bin/pwd [ Warning ] /bin/readlink [ Warning ]
          /bin/sed [ Warning ] /bin/sh [ Warning ] /bin/su [ Warning ]
          /bin/touch [ Warning ] /bin/uname [ Warning ] /bin/which [ OK ]
          /bin/kmod [ Warning ] /bin/dash [ Warning ]

        Checking for rootkits...
        Performing check of known rootkit files and directories (all [ Not found ]):
          55808 Trojan - Variant A, ADM Worm, AjaKit Rootkit, Adore Rootkit, aPa Kit,
          Apache Worm, Ambient (ark) Rootkit, Balaur Rootkit, BeastKit Rootkit, beX2 Rootkit,
          BOBKit Rootkit, cb Rootkit, CiNIK Worm (Slapper.B variant), Danny-Boy's Abuse Kit,
          Devil RootKit, Dica-Kit Rootkit, Dreams Rootkit, Duarawkz Rootkit, Enye LKM,
          Flea Linux Rootkit, Fu Rootkit, Fuck`it Rootkit, GasKit Rootkit, Heroin LKM,
          HjC Kit, ignoKit Rootkit, IntoXonia-NG Rootkit, Irix Rootkit, Jynx Rootkit,
          KBeast Rootkit, Kitko Rootkit, Knark Rootkit, ld-linuxv.so Rootkit, Li0n Worm,
          Lockit / LJK2 Rootkit, Mood-NT Rootkit, MRK Rootkit, Ni0 Rootkit, Ohhara Rootkit,
          Optic Kit (Tux) Worm, Oz Rootkit, Phalanx Rootkit, Phalanx2 Rootkit,
          Phalanx2 Rootkit (extended tests), Portacelo Rootkit, R3dstorm Toolkit,
          RH-Sharpe's Rootkit, RSHA's Rootkit, Scalper Worm, Sebek LKM, Shutdown Rootkit,
          SHV4 Rootkit, SHV5 Rootkit, Sin Rootkit, Slapper Worm, Sneakin Rootkit,
          'Spanish' Rootkit, Suckit Rootkit, Superkit Rootkit, TBD (Telnet BackDoor),
          TeLeKiT Rootkit, T0rn Rootkit, trNkit Rootkit, Trojanit Kit, Tuxtendo Rootkit,
          URK Rootkit, Vampire Rootkit, VcKit Rootkit, Volc Rootkit, Xzibit Rootkit,
          zaRwT.KiT Rootkit, ZK Rootkit

        Performing additional rootkit checks
          Suckit Rootkit additional checks [ OK ]
          Checking for possible rootkit files and directories [ None found ]
          Checking for possible rootkit strings [ None found ]
        Performing malware checks
          Checking running processes for suspicious files [ None found ]
          Checking for login backdoors [ None found ]
          Checking for suspicious directories [ None found ]
          Checking for sniffer log files [ None found ]
        Performing Linux specific checks
          Checking loaded kernel modules [ OK ]
          Checking kernel module names [ OK ]

        Checking the network...
        Performing checks on the network ports
          Checking for backdoor ports [ None found ]
          Checking for hidden ports [ Skipped ]
        Performing checks on the network interfaces
          Checking for promiscuous interfaces [ None found ]

        Checking the local host...
        Performing system boot checks
          Checking for local host name [ Found ]
          Checking for system startup files [ Found ]
          Checking system startup files for malware [ None found ]
        Performing group and account checks
          Checking for passwd file [ Found ]
          Checking for root equivalent (UID 0) accounts [ None found ]
          Checking for passwordless accounts [ None found ]
          Checking for passwd file changes [ Warning ]
          Checking for group file changes [ Warning ]
          Checking root account shell history files [ None found ]
        Performing system configuration file checks
          Checking for SSH configuration file [ Not found ]
          Checking for running syslog daemon [ Found ]
          Checking for syslog configuration file [ Found ]
          Checking if syslog remote logging is allowed [ Not allowed ]
        Performing filesystem checks
          Checking /dev for suspicious file types [ Warning ]
          Checking for hidden files and directories [ Warning ]

        System checks summary
        =====================
        File properties checks...
          Required commands check failed
          Files checked: 137
          Suspect files: 122
        Rootkit checks...
          Rootkits checked : 291
          Possible rootkits: 0
        Applications checks...
          All checks skipped

        The system checks took: 5 minutes and 11 seconds
        All results have been written to the log file (/var/log/rkhunter.log)
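    A hedged starting point for triage: file-properties warnings like these commonly appear after an apt upgrade replaces system binaries, so the usual advice is to verify the binaries against the package database first, and only then refresh rkhunter's baseline. A minimal sketch using standard Debian/Ubuntu and rkhunter commands (the package name coreutils is just an example target):

        # compare installed files against dpkg's recorded checksums
        sudo dpkg --verify coreutils

        # or check every installed package (slow but thorough)
        sudo dpkg --verify

        # if the binaries check out, update rkhunter's file-properties
        # baseline and re-run the scan
        sudo rkhunter --propupd
        sudo rkhunter -c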

    Read the article

  • Consolidating Oracle E-Business Suite R12 on Oracle's SPARC SuperCluster

    - by Giri Mandalika
    An Optimized Solution for Oracle E-Business Suite (EBS) R12 12.1.3 is now available on oracle.com: The Oracle Optimized Solution for Oracle E-Business Suite. This solution is centered around the engineered system SPARC SuperCluster T4-4. Check the business and technical white papers, along with a number of relevant and useful resources, on the optimized solution page for EBS linked above.

    What is an Optimized Solution? Oracle's Optimized Solutions are designed, tested and fully documented architectures that are tuned for optimal performance and availability. Optimized Solutions are NOT pre-packaged, fully tuned, ready-to-install software bundles that can be downloaded and installed. An Optimized Solution is usually a well-documented architecture that was thoroughly tested on a target platform. The technical white paper details the deployed application architecture, along with various observations ranging from installing the application on the target platform to its behavior and performance in highly available and scalable configurations.

    Oracle E-Business Suite R12 use case: multiple E-Business Suite R12 12.1.3 application modules were tested in this optimized solution: Financials (online: Oracle Forms and web requests), Order Management (online: Oracle Forms and web requests) and HRMS (online: web requests, plus payroll batch). The solution will be updated with additional application modules when they are available. Oracle Solaris Cluster is responsible for the high-availability portion of the solution.

    Performance data: for the sake of completeness, test results are also documented in the optimized solution white paper. Those results are mainly for educational purposes; they give a good sense of application behavior under the circumstances in which the application was tested. Since the major focus of the optimized solution is highly available and scalable configurations, the application was configured to meet those criteria. Hence the documented test results are not directly comparable to any other E-Business Suite performance test results published by any vendor, including Oracle; such an attempt may lead to skewed, incorrect conclusions.

    Questions and requests: feel free to direct your questions to the author of the white papers. If you are a potential customer who would like to test a specific E-Business Suite application module on a non-engineered system such as SPARC T4-X, or on an engineered system such as SPARC SuperCluster, contact the Oracle Solution Center.

    Read the article

  • Update: TFS Power Tools March 2011

    - by Enrique Lima
    There is an update available for the TFS Power Tools and the TFS Build Power Tools. Among the updates to the tools:
    - Changes to the Team Foundation Server Backups add-in for the TFS Admin Console.
    - Added functionality in the Windows Shell Extension.
    - Changes to the tfpt command-line tool that let you script build management commands.
    For full details of the changes, read Brian Harry's post: http://blogs.msdn.com/b/bharry/archive/2011/03/03/mar-11-team-foundation-server-power-tools-are-available.aspx
    To download the Power Tools: Team Foundation Server Power Tools, and Team Foundation Server Build Extensions Power Tool.

    Read the article

  • Silverlight Relay Commands

    - by George Evjen
    I am fairly new at Silverlight development, and I usually have an issue that needs research every day. Which I enjoy: I like the idea of going into a day knowing that I am going to learn something new. The issue I am currently working on centers around relay commands. I have a pretty good handle on relay commands and how we use them within our applications.

        <Button Command="{Binding ButtonCommand}" CommandParameter="NewRecruit" Content="New Recruit" />

    Here in our XAML we have a button. The button has a Command and a CommandParameter. The Command binds to the ButtonCommand that we have in our ViewModel:

        RelayCommand _buttonCommand;

        /// <summary>
        /// Gets the button command.
        /// </summary>
        /// <value>The button command.</value>
        public RelayCommand ButtonCommand
        {
            get
            {
                if (_buttonCommand == null)
                {
                    _buttonCommand = new RelayCommand(
                        x => x != null && x.ToString().Length > 0 && CheckCommandAvailable(x.ToString()),
                        x => ExecuteCommand(x.ToString()));
                }
                return _buttonCommand;
            }
        }

    In our relay command we do some checks with a lambda expression: we check whether the command parameter is null, whether its length is greater than 0, and whether CheckCommandAvailable says the button is even enabled. After these three checks we pass the command parameter to an action method. This is all pretty straightforward. The issue we solved a few days ago centered around a nested control that needed to use a relay command while using a different DataContext. The example below illustrates how we handled this scenario. In our XAML user control we had to give the control a name:

        <Controls3:RadTileViewItem x:Class="RecruitStatusTileView"
            xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
            xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
            xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
            xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
            xmlns:Controls1="clr-namespace:Telerik.Windows.Controls;assembly=Telerik.Windows.Controls"
            xmlns:Controls2="clr-namespace:Telerik.Windows.Controls;assembly=Telerik.Windows.Controls.Input"
            xmlns:Controls3="clr-namespace:Telerik.Windows.Controls;assembly=Telerik.Windows.Controls.Navigation"
            mc:Ignorable="d" d:DesignHeight="400" d:DesignWidth="800"
            Header="{Binding Title, Mode=TwoWay}" MinimizedHeight="100"
            x:Name="StatusView">

    Here we are using a Telerik RadTileViewItem and set the name of this control to "StatusView". In our button control we set the command and command parameter differently than in the example above:

        <HyperlinkButton Content="{Binding BigBoardButtonText, Mode=TwoWay}"
            CommandParameter="{Binding 'Position.PositionName'}"
            Command="{Binding ElementName=StatusView, Path=DataContext.BigBoardCommand, Mode=TwoWay}" />

    This hyperlink button lives in a ListBox control whose ItemsSource is PositionSelectors. The CommandParameter binds to the Position.PositionName property of that PositionSelectors object. This again is pretty straightforward. What gets a bit tricky is the Command property of the hyperlink: it binds to the element name we gave the user control (StatusView). Because this hyperlink is in a ListBox item template, it doesn't have a direct handle on the DataContext that the RadTileViewItem has, so we have to make sure it does. We do that by binding to the element name StatusView and setting the path to DataContext.BigBoardCommand. BigBoardCommand is the name of the RelayCommand in the view model:

        private RelayCommand _bigBoardCommand = null;

        /// <summary>
        /// Gets the big board command.
        /// </summary>
        /// <value>The big board command.</value>
        public RelayCommand BigBoardCommand
        {
            get
            {
                if (_bigBoardCommand == null)
                {
                    _bigBoardCommand = new RelayCommand(x => true, x => AddToBigBoard(x.ToString()));
                }
                return _bigBoardCommand;
            }
        }

    From there we check for true again and then call the action, passing in the parameter that we had as the command parameter. What we are working on now is a bit trickier than this second example. In the example above we create the TileViewItem with the name "StatusView" only once; in another part of our application we generate multiple TileViewItems, so we cannot set the name in the control, as we can't have multiple controls with the same name. When we run the application we get an error reading that the value is out of the expected range. My searching has led me to think we cannot have multiple controls with the same name. This is today's problem, and I'll post the solution once it is found.

    Read the article

  • Stepping outside Visual Studio IDE [Part 1 of 2] with Eclipse

    - by mbcrump
    “If you're walking down the right path and you're willing to keep walking, eventually you'll make progress." – Barack Obama

    In my quest to become a better programmer, I've decided to start the process of learning Java. I will be primarily using the Eclipse IDE. I will not bore you with the history, just what a .NET developer needs to get up and running. I will provide links, screenshots and a few brief code tutorials.

    Links to documentation: The Official Eclipse FAQs.
    Links to binaries: Eclipse IDE for Java EE Developers, the Galileo Package (based on Eclipse 3.5 SR2); Sun Developer Network – Java. Eclipse officially recommends Java version 5 (also known as 1.5), although many Eclipse users use the newer version 6 (1.6). That's it; nothing more is required to compile and run Java.

    Installation: unzip the Eclipse IDE for Java EE Developers package and double-click the file named eclipse.exe. You will probably want to create a link for it on your desktop. Once it's installed and launched, you will have to select a workspace; just accept the defaults and you will see the workbench.

    Let's go ahead and write a simple program. To write a "Hello World" program, follow these steps:
    1. Start Eclipse.
    2. Create a new Java project: File -> New -> Project, select "Java" in the category list, select "Java Project" in the project list, and click "Next". Enter a project name into the Project name field, for example "HW Project", and click "Finish". Allow it to open the Java perspective.
    3. Create a new Java class: click the "Create a Java Class" button in the toolbar (this is the icon below "Run" and "Window" with a tooltip that says "New Java Class"). Enter "HW" into the Name field, check the box indicating that you would like Eclipse to create a "public static void main(String[] args)" method, and click "Finish". A Java editor for HW.java will open.
    4. In the main method, enter the following line:

        System.out.println("This is my first java program and btw Hello World");

    5. Save using Ctrl-S. This automatically compiles HW.java.
    6. Click the "Run" button in the toolbar (it looks like a VCR play button). You will be prompted to create a launch configuration: select "Java Application", click "New", then click "Run". The console will open and display "This is my first java program and btw Hello World".

    You now have your first Java program; let's go ahead and make an applet. Since you already have HW.java open, click inside the window and remove all the code, then copy and paste the following snippet (the class is named HW so that it matches the file name and compiles):

        import java.applet.Applet;
        import java.awt.Graphics;
        import java.awt.Color;

        @SuppressWarnings("serial")
        public class HW extends Applet {

            String text = "I'm a simple applet";

            public void init() {
                text = "I'm a simple applet";
                setBackground(Color.GREEN);
            }

            public void start() {
                System.out.println("starting...");
            }

            public void stop() {
                System.out.println("stopping...");
            }

            public void destroy() {
                System.out.println("preparing to unload...");
            }

            public void paint(Graphics g) {
                System.out.println("Paint");
                g.setColor(Color.blue);
                g.drawRect(0, 0, getSize().width - 1, getSize().height - 1);
                g.setColor(Color.black);
                g.drawString(text, 15, 25);
            }
        }

    Click "Run" to run the Hello World applet. Now, let's test our new Java applet. Navigate over to your workspace, for example "C:\Users\mbcrump\workspace\HW Project\bin", and you should see 2 files: HW.class and java.policy.applet. Create an HTML page with the following code:

        <HTML>
        <BODY>
        <APPLET CODE=HW.class WIDTH=200 HEIGHT=100>
        </APPLET>
        </BODY>
        </HTML>

    Open the HTML page in Firefox or IE and you will see your applet running. I hope this brief look at the Eclipse IDE helps someone get acquainted with Java development. Even if your full-time gig is with .NET, it will not hurt to have another language in your tool belt. As always, I welcome any suggestions or comments.
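    If you want to double-check the toolchain outside Eclipse, here is a minimal command-line sketch. It assumes the JDK's bin directory is on your PATH, and the page name HW.html is a hypothetical choice standing in for whatever name you gave the HTML file above:

        # compile the class, then run the console program
        javac HW.java
        java HW

        # render the applet page with the JDK's applet viewer
        appletviewer HW.html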

    Read the article

  • Using Stored Procedures in SSIS

    - by dataintegration
    The SSIS Data Flow components, the source task and the destination task, are the easiest way to transfer data in SSIS. Some data transactions do not fit this model; they are procedural tasks modeled as stored procedures. In this article we show how you can call stored procedures available in RSSBus ADO.NET Providers from SSIS. We will use the CreateJob and CreateBatch stored procedures available in the RSSBus ADO.NET Provider for Salesforce, but the same steps can be used to call a stored procedure in any of our data providers.

    Step 1: Open Visual Studio and create a new Integration Services project.
    Step 2: Add a new Data Flow Task to the Control Flow window.
    Step 3: Open the Data Flow Task and add a Script Component to the data flow pane. A dialog box will pop up allowing you to select the Script Component type: pick the source type, as we will be outputting columns from our stored procedure.
    Step 4: Double-click the Script Component to open the editor.
    Step 5: In the "Inputs and Outputs" settings, enter all the columns you want to output to the data flow. Ensure the correct data type has been set for each output; you can check the data type by selecting the output and then changing the "DataType" property in the property editor. In our example, we'll add the column JobID of type String.
    Step 6: Select the "Script" option in the left-hand pane and click the "Edit Script" button. This will open a new Visual Studio window with some boilerplate code in it.
    Step 7: In the CreateOutputRows() function you can add code that executes the stored procedures included with the Salesforce component. In this example we use the CreateJob and CreateBatch stored procedures. You can find a list of the available stored procedures, along with their inputs and outputs, in the product help.

        // Configure the connection string with your credentials
        String connectionString = "Offline=False;user=myusername;password=mypassword;access token=mytoken;";
        using (SalesforceConnection conn = new SalesforceConnection(connectionString))
        {
            // Create the command to call the stored procedure CreateJob
            SalesforceCommand cmd = new SalesforceCommand("CreateJob", conn);
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.Parameters.Add(new SalesforceParameter("ObjectName", "Contact"));
            cmd.Parameters.Add(new SalesforceParameter("Action", "insert"));

            // Execute CreateJob.
            // CreateBatch requires JobID as input, so we store this value for later.
            SalesforceDataReader rdr = cmd.ExecuteReader();
            String JobID = "";
            while (rdr.Read())
            {
                JobID = (String)rdr["JobID"];
            }

            // Create the command for CreateBatch; for this example we are adding two new rows
            SalesforceCommand batCmd = new SalesforceCommand("CreateBatch", conn);
            batCmd.CommandType = CommandType.StoredProcedure;
            batCmd.Parameters.Add(new SalesforceParameter("JobID", JobID));
            batCmd.Parameters.Add(new SalesforceParameter("Aggregate",
                "<Contact><Row><FirstName>Bill</FirstName>" +
                "<LastName>White</LastName></Row><Row><FirstName>Bob</FirstName>" +
                "<LastName>Black</LastName></Row></Contact>"));

            // Execute CreateBatch
            SalesforceDataReader batRdr = batCmd.ExecuteReader();
        }

    Step 7b: If you had specified output columns earlier, you can now add data to them using the UserComponent Output0Buffer. For example, we set an output column called JobID of type String, so we can now give it a value by modifying the DataReader loop that consumes the output of CreateJob:

        while (rdr.Read())
        {
            Output0Buffer.AddRow();
            JobID = (String)rdr["JobID"];
            Output0Buffer.JobID = JobID;
        }

    Step 8: Note: you will need to modify the connection string to include your credentials. Also ensure that the System.Data.RSSBus.Salesforce assembly is referenced, and add the following using statements to the top of the class:

        using System.Data;
        using System.Data.RSSBus.Salesforce;

    Step 9: Once you are done editing your script, save it and close the window. Click OK in the Script Transformation window to go back to the main pane.
    Step 10: If you had any outputs from the Script Component, you can use them in your data flow. For example, we will use a Flat File Destination: configure it to output the results to a file, and you should see the JobID in the file.
    Step 11: Your project should be ready to run.

    Read the article

  • Computer Networks UNISA - Chap 8 &ndash; Wireless Networking

    - by MarkPearl
    After reading this section you should be able to:
    - Explain how nodes exchange wireless signals
    - Identify potential obstacles to successful transmission and their repercussions, such as interference and reflection
    - Understand WLAN architecture
    - Specify the characteristics of popular WLAN transmission methods, including 802.11 a/b/g/n
    - Install and configure wireless access points and their clients
    - Describe wireless MAN and WAN technologies, including 802.16 and satellite communications

    The Wireless Spectrum. All wireless signals are carried through the air by electromagnetic waves. The wireless spectrum is the continuum of electromagnetic waves used for data and voice communication; it falls between 9 kHz and 300 GHz.

    Characteristics of Wireless Transmission. Antennas: each type of wireless service requires an antenna specifically designed for that service. The service's specification determines the antenna's power output, frequency, and radiation pattern. A directional antenna issues wireless signals along a single direction, while an omnidirectional antenna issues and receives wireless signals with equal strength and clarity in all directions. The geographical area that an antenna or wireless system can reach is known as its range.

    Signal propagation: LOS (line of sight) uses the least amount of energy and results in the reception of the clearest possible signal. When there is an obstacle in the way, the signal may pass through the object, be absorbed by the object, or be subject to reflection, diffraction or scattering. Reflection: waves encounter an object and bounce off it. Diffraction: a signal splits into secondary waves when it encounters an obstruction. Scattering: the diffusion, or reflection in multiple different directions, of a signal.

    Signal degradation: fading occurs as a signal hits various objects; because of fading, the strength of the signal that reaches the receiver is lower than the transmitted signal strength. The farther a signal moves from its source, the weaker it gets (this is called attenuation). Signals are also affected by noise (electromagnetic interference); interference can distort and weaken a wireless signal in the same way that noise distorts and weakens a wired signal.

    Frequency ranges: older wireless devices used the 2.4 GHz band to send and receive signals, which has 11 unlicensed communication channels. Newer wireless devices can also use the 5 GHz band, which has 24 unlicensed channels.

    Narrowband, broadband, and spread spectrum signals: with narrowband, a transmitter concentrates the signal energy at a single frequency or in a very small range of frequencies. Broadband uses a relatively wide band of the wireless spectrum and offers higher throughput than narrowband technologies. The use of multiple frequencies to transmit a signal is known as spread-spectrum technology; in other words, the signal never stays continuously within one frequency range during its transmission. One specific implementation of spread spectrum is FHSS (frequency hopping spread spectrum); another is DSSS (direct sequence spread spectrum).

    Fixed vs. mobile: each type of wireless communication falls into one of two categories. Fixed: the locations of the transmitter and receiver do not move (this saves energy, because a weaker signal strength is possible with directional antennas). Mobile: the location can change.

    WLAN (Wireless LAN) architecture: there are two main types of arrangements. Ad hoc: data is sent directly between devices; good for small local networks. Infrastructure mode: a wireless access point is placed centrally, and all devices connect to it.

    802.11 WLANs: the most popular wireless standards used on contemporary LANs are those developed by IEEE's 802.11 committee. Over the years several distinct standards related to wireless networking have been released. The four best-known standards, also referred to as Wi-Fi, are 802.11b, 802.11a, 802.11g and 802.11n. These four standards share many characteristics: for example, all four use half-duplex signalling and follow the same access method.

    Access method: the 802.11 standards specify the use of CSMA/CA (Carrier Sense Multiple Access with Collision Avoidance) to access a shared medium. Using CSMA/CA, before a station begins to send data on an 802.11 network, it checks for existing wireless transmissions. If the source node detects no transmission activity on the network, it waits a brief period of time and then sends its transmission; if it does detect activity, it waits a brief period of time before checking again. The destination node receives the transmission and, after verifying its accuracy, issues an acknowledgement (ACK) packet to the source. If the source receives the ACK, it assumes the transmission was successful; if it does not receive an ACK, it assumes the transmission failed and sends it again.

    Association: there are two types of scanning. Active: the station transmits a special frame, known as a probe, on all available channels within its frequency range; when an access point finds the probe frame, it issues a probe response. Passive: the wireless station listens on all channels within its frequency range for a special signal, known as a beacon frame, issued from an access point; the beacon frame contains the information necessary to connect to the access point. Re-association occurs when a mobile user moves out of one access point's range and into the range of another.

    Frames: read pages 378-381 about frames and specific 802.11 protocols.

    Bluetooth networks: Ericsson originally invented the Bluetooth technology in the early 1990s. In 1998 other manufacturers joined Ericsson in the Special Interest Group (SIG), whose aim was to refine and standardize the technology. Bluetooth was designed to be used on small networks composed of personal communications devices, and it has become a popular wireless technology for communicating among cellular telephones, phone headsets, etc.

    Wireless WANs and Internet access: refer to pages 396-402 of the textbook for details.
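    As a practical aside, you can observe the association concepts above (beacon frames, probe responses, SSIDs, signal strength) from a Linux client. A minimal sketch, assuming the interface is named wlan0 and the iw utility is installed:

        # scan for access points; the output lists each BSS with its SSID,
        # signal strength, and channel data taken from beacon/probe frames
        sudo iw dev wlan0 scan | grep -E "BSS|SSID|signal"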

    Read the article

  • Add SQL Azure database to Azure Web Role and persist data with entity framework code first.

    - by MagnusKarlsson
    In my last post I went for a warts-and-all approach to setting up a web role on Azure. In this post I'll describe how to add a SQL Azure database to the project, with as minimal an amount of code and screen dumps as possible. All questions are welcome in the comments area. Please don't email, since questions answered in the comments field are made available to other visitors. As an example we will add a comments section to the site we used in the previous post (link here).

    Steps:
    1. Create a Comment entity, then use scaffolding to set up the controller and view, and add a connection string to web.config.
    2. Create the SQL Azure database in the Management Portal and link the new database.
    3. Test it online!

    1. Right-click the Models folder, choose Add, then "Class...". Name the class Comment.

    1.1 Replace the code in the class with the following:

        using System.Data.Entity;

        namespace MvcWebRole1.Models
        {
            public class Comment
            {
                public int CommentId { get; set; }
                public string Name { get; set; }
                public string Content { get; set; }
            }

            public class CommentsDb : DbContext
            {
                public DbSet<Comment> CommentEntries { get; set; }
            }
        }

    Now Entity Framework can create a database and a table named Comment. Build your project to assert there are no build errors.

    1.2 Right-click the Controllers folder, choose Add, then "Class...". Name the class CommentController and fill out the values as in the example below.

    1.3 Click Add. Visual Studio now creates default views for CRUD operations and a controller adhering to them, and opens them.

    1.4 Open web.config and add the following connection string in the <connectionStrings> node:

        <add name="CommentsDb"
             connectionString="data source=(LocalDB)\v11.0;Integrated Security=SSPI;AttachDbFileName=|DataDirectory|\CommentsDb.mdf;Initial Catalog=CommentsDb;MultipleActiveResultSets=True"
             providerName="System.Data.SqlClient" />

    1.5 Save all and press F5 to start the application.

    1.6 Go to http://127.0.0.1:81/Comments, which will redirect you through CommentController to the Index view. Click Create New; in the Create view, add a name and content and press Create.

        //
        // POST: /Comments/Create

        [HttpPost]
        public ActionResult Create(Comment comment)
        {
            if (ModelState.IsValid)
            {
                db.CommentEntries.Add(comment);
                db.SaveChanges();
                return RedirectToAction("Index");
            }

            return View(comment);
        }

    The default view is Index, so that is the view you will come to:

        //
        // GET: /Comments/

        public ActionResult Index()
        {
            return View(db.CommentEntries.ToList());
        }

    Resulting in the following screen dump (success!).

    2. Now go to the Management Portal and create a new database.

    2.1 With the new database created, click the DB icon in the leftmost menu, then click the newly created database. Click DASHBOARD in the top menu, and finally click "Connection strings" in the right menu to get the connection string we need to add to our web.debug.config file.

    2.2 Take a copy of the connection string added earlier to web.config and paste it into the connectionStrings node of web.debug.config. Replace everything within the quotes of the copied connection string with what you got from SQL Azure.

    2.3 Rebuild the application, right-click the cloud project and choose "Package..." (we haven't set up a publishing profile yet, which we will do in the next blog post). Remember to choose the right config file: use debug for staging and release for production so your databases won't collide.

    2.4 Go to the Management Portal, click the Web Services menu, choose your service and click Update in the bottom menu.

    2.5 Link the newly created database to your application: click LINKED RESOURCES in the top menu and then click "Link" in the bottom menu.

    3. All right then. Under the Dashboard you can find the link to your application. Click it to open the application in a browser, then go to ~/Comments to try it out just the way we did locally. Success, and end of this story!

    Read the article

  • useFastClick in JQuery Mobile

    - by Yousef_Jadallah
    For those who want to convert an application from jQM Alpha to jQM Beta 1: you need to bind your click events to the new vclick event. The plain click event works in desktop browsers, but vclick (touch + mouse click) is what's needed for iOS and Android. Alternatively, if you have this event a lot in your project, you can turn useFastClick off in the mobileinit event:

        $(document).bind("mobileinit", function () {
            $.mobile.useFastClick = false;
        });

    The vclick event is needed to support touch events, to make page changes happen faster, and to perform the URL hiding. So you need to change something like this:

        $('btnShow').live("click", function (evt) {

    to:

        $('btnShow').live("vclick", function (evt) {

    For more information: http://jquerymobile.com/test/docs/api/globalconfig.html

    Here you can find a full example for this case:

        <!DOCTYPE >
        <html xmlns="http://www.w3.org/1999/xhtml">
        <head>
            <link rel="stylesheet" href="http://code.jquery.com/mobile/1.0b1/jquery.mobile-1.0b1.min.css" />
            <script src="http://code.jquery.com/jquery-1.6.1.min.js"></script>
            <script src="http://code.jquery.com/mobile/1.0b1/jquery.mobile-1.0b1.min.js"></script>
            <script type="text/javascript">
                // Here you need to use vclick instead of the click event
                $('ul[id="MylistView"] a').live("vclick", function (evt) {
                    alert('list click');
                });
            </script>
            <title></title>
        </head>
        <body>
            <div id="FirstPage" data-role="page" data-theme="b">
                <div data-role="header">
                    <h1>Page Title</h1>
                </div>
                <div data-role="content">
                    <ul id="MylistView" data-role="listview" data-theme="g">
                        <li><a href="#SecondPage">Acura</a></li>
                        <li><a href="#SecondPage">Audi</a></li>
                        <li><a href="#SecondPage">BMW</a></li>
                    </ul>
                </div>
                <div data-role="footer">
                    <h4>Page Footer</h4>
                </div>
            </div>
            <div id="SecondPage" data-role="page" data-theme="b">
                <div data-role="header">
                    <h1>Page Title</h1>
                </div>
                Second Page
                <div data-role="footer">
                    <h4>Page Footer</h4>
                </div>
            </div>
        </body>
        </html>

    Hope that helps.

    Read the article

  • T-SQL Tuesday #34: Help! I Need Somebody!

    - by Most Valuable Yak (Rob Volk)
Welcome everyone to T-SQL Tuesday Episode 34!  When last we tuned in, Mike Fal (b|t) hosted Trick Shots.  These highlighted techniques or tricks that you figured out on your own which helped you understand SQL Server better.

This month, I'm asking you to look back this past week, year, century, or hour... to a time when you COULDN'T figure it out.  When you were stuck on a SQL Server problem and you had to seek help.

In the beginning...

SQL Server has changed a lot since I started with it.  <Cranky Old Guy> Back in my day, Books Online was neither.  There were no blogs. Google was the third-place search site. There were perhaps two or three community forums where you could ask questions.  (Besides the Microsoft newsgroups... which you had to access with Usenet.  And endure the wrath of... Celko.)  Your "training" was reading a book, made from real dead trees, that you bought from your choice of brick-and-mortar bookstore. And except for your local user groups, there were no conferences, seminars, SQL Saturdays, or any online video hookups where you could interact with a person. You'd have to call Microsoft Support... on the phone... a LANDLINE phone.  And none of this "SQL Family" business!</Cranky Old Guy>

Even now, with all these excellent resources available, it's still daunting for a beginner to seek help for SQL Server.  The product is roughly 1247.4523 times larger than it was 15 years ago, and it's simply impossible to know everything about it.*  So whether you are a beginner or a seasoned pro with over a decade's experience, what do you do when you need help on SQL Server?

That's so meta...

In the spirit of offering help, here are some suggestions for your topic:

- Tell us about a person or SQL Server community that has been helpful to you.  It can be about a technical problem, or not, e.g. someone who volunteered for your local SQL Saturday.  Sing their praises!  Let the world know who they are!
- Do you have any tricks for using Books Online?  Do you use the locally installed product, or are you completely online with BOL/MSDN/Technet, and why?
- If you've been using SQL Server for over 10 years, how has your help-seeking changed? Are you using Twitter, StackOverflow, MSDN Forums, or another resource that didn't exist when you started? What made you switch?
- Do you spend more time helping others than seeking help? What motivates you to help, and how do you contribute?
- Structure your post along the lyrics to The Beatles song Help! Audio or video renditions are particularly welcome! Lyrics must include reference to SQL Server terminology or community, and performances must be in your voice or include you playing an instrument.

These are just suggestions; you are free to write whatever you like.  Bonus points if you can incorporate ALL of these into a single post.  (Or you can do multiple posts, we're flexible like that.)  Help us help others by showing how others helped you!

Legalese, Your Rights, Yada yada...

If you would like to participate in T-SQL Tuesday please be sure to follow the rules below:

- Your blog post must be published between Tuesday, September 11, 2012 00:00:00 GMT and Wednesday, September 12, 2012 00:00:00 GMT.
- Include the T-SQL Tuesday logo (above) and hyperlink it back to this post.
- If you don't see your post in trackbacks, add the link to the comments below.
- If you are on Twitter please tweet your blog using the #TSQL2sDay hashtag.  I can be contacted there as @sql_r, in case you have questions or problems with comments/trackback.
I'll have a follow-up post listing all the contributions as soon as I can. Thank you all for participating, and special thanks to Adam Machanic (b|t) for all his help and for continuing this series!

    Read the article

  • Cuda GeForce GT 555m (on Ubuntu 12.04 (Virtual Box))

    - by bobby
I am running Ubuntu 12.04 in a VirtualBox as the guest OS. Today I tried to enable all functions of my graphics card under Ubuntu, but all my efforts were off the mark, because my experience with Linux is fairly limited. I tried the procedure described at http://sn0v.wordpress.com/2012/05/11/installing-cuda-on-ubuntu-12-04/. Running the CUDA filepack always led to an error like "installing driver canceled". Then I tried to install the Bumblebee project, but I always get the error that the bumblebee daemon is not started. After some internet research I found a test command (the brackets need to be escaped; PCI class 0300 is a VGA controller and 0302 is a 3D controller):

    lspci -nn | grep '\[030[02]\]'

which results in:

    00:02.0 VGA compatible controller [0300]: InnoTek ... [80ee:beef]

I've been trying to activate the card for hours without success. Could you please give me some hints on how to activate CUDA? Kind regards

    Read the article

  • Remote Workers...We're Not That Bad!

    - by user12601034
I work from home a lot – my team is located in different cities and countries, my manager is in a different city, and most of our work is done via conference calls, email and collaboration through Oracle Social Network. We've figured out how to be effective and involve team members, regardless of where we are all located.

When I mention that I work from home, a lot of my friends will laugh, roll their eyes or use their fingers to make quotation marks around "work from home." Their belief is that I'm sitting at home, eating bon-bons and watching television. The attempts at humor only multiply when they know that my husband also mostly works from home.

So, it was with great joy that I read the Lifehacker article Why Remote Workers Are More (Yes, More) Engaged. I'm not going to re-write the article for you, but four highlights from the article include:

- Proximity breeds complacency – because communicating with employees sitting next to you is so easy, you may not do it well.
- Absence makes people try harder to connect – because you have to make an effort to connect to your team, you tend to pay better attention when you do connect.
- Leaders of virtual teams make better use of tools – when working remotely, you will use technology (many different forms of it) to connect with your team. This daily use of the tools makes you more proficient with those tools.
- Leaders of far-flung teams maximize the time spent together – getting together takes effort, time and money, so leaders tend to filter out distractions when teams do get together.

These points made me happy because I've seen the same things play out in my team located around the world. And I'm not saying that a virtual team is more effective than a co-located team – but my virtual team doesn't have the option of filing into a conference room for a face-to-face meeting whenever we want. Instead, we have to figure out how to work effectively without meeting face-to-face.

Am I more engaged as a remote worker? I'd like to think that I am. I've been on calls with colleagues at 3am – this would never happen if my only option was to be in the office. I can leave my "office" to pick up my kids from school... and I'm willingly back online after the kids are in bed to finish up anything I need to. Oracle Social Network lets me use my iPad to engage with my teammates when I'm waiting at music lessons, the doctor's office or any place else with a network connection. I feel like I'm more connected with my team, and I feel like I'm more connected with my family life.

So yes, I am a remote worker, and I am engaged. If you lead a virtual team, I challenge you to increase the ways that you communicate to effectively engage your team. If you are on a virtual team, I challenge you to think about how you might interact with team members to keep both them and yourself engaged in your work. And if you have some great ideas on how to make virtual teams (and workers) effective and engaged, please share those ideas in the comments!

Now, if you'll excuse me, I need to go get a bon-bon...
:)

    Read the article

  • Centered Content using panelGridLayout

    - by Duncan Mills
A classic layout conundrum, which I think pretty much every ADF developer may have faced at some time or other, is that of truly centered (centred) layout. Typically this requirement comes up in relation to, say, displaying a login type screen or similar. Superficially the problem seems easy, but as my buddy Eduardo explained when discussing this subject a couple of years ago, it's actually a little more complex than you might have thought. In fact, even the "solution" provided in that posting is not perfect and suffers from several issues (not Eduardo's fault, just limitations of panelStretch!):

- The top, bottom, end and start facets all need something in them.
- The percentages you apply to the topHeight, startWidth etc. are calculated as part of the whole width. This means that you have to guestimate the correct percentage based on your typical screen size and the sizing of the centered content. So, at best, you will in fact only get approximate centering, and the more you tune that centering for a particular browser size, the more it will fail if the user resizes.
- You can't attach styles to the panelStretchLayout facets, so to provide things like background color or fixed sizing you need to embed another container that you can apply styles to, typically a panelGroupLayout.

For reference, here's the code to print a simple 100px x 100px red centered square using the panelStretchLayout solution, approximately tuned to a 1980 x 1080 maximized browser (IDs omitted for brevity):

    <af:panelStretchLayout startWidth="45%" endWidth="45%"
                           topHeight="45%" bottomHeight="45%">
      <f:facet name="center">
        <af:panelGroupLayout inlineStyle="height:100px;width:100px;background-color:red;"
                             layout="vertical"/>
      </f:facet>
      <f:facet name="top">
        <af:spacer height="1" width="1"/>
      </f:facet>
      <f:facet name="bottom">
        <af:spacer height="1" width="1"/>
      </f:facet>
      <f:facet name="start">
        <af:spacer height="1" width="1"/>
      </f:facet>
      <f:facet name="end">
        <af:spacer height="1" width="1"/>
      </f:facet>
    </af:panelStretchLayout>

And so to panelGridLayout

So here's the good news: panelGridLayout makes this really easy, and it works without the caveats above. The key point is that percentages used in the grid definition are evaluated after the fixed sizes are taken into account, so rather than having to guestimate what percentage will "more, or less" center the content, you can just say "allocate half of what's left" to the flexible content and you're done. Here's the same example using panelGridLayout:

    <af:panelGridLayout>
      <af:gridRow height="50%"/>
      <af:gridRow height="100px">
        <af:gridCell width="50%"/>
        <af:gridCell width="100px" halign="stretch" valign="stretch"
                     inlineStyle="background-color:red;">
          <af:spacer width="1" height="1"/>
        </af:gridCell>
        <af:gridCell width="50%"/>
      </af:gridRow>
      <af:gridRow height="50%"/>
    </af:panelGridLayout>

So you can see that the amount of markup is somewhat smaller (as is, I should mention, the generated DOM structure in the browser), mainly because we don't need to introduce artificial components to ensure that facets are actually observed in the final result. But the key thing here is that the centering is no longer approximate, and it will work as expected as the user resizes the browser screen. By far this is a more satisfactory solution, and although it's only a simple example, it will hopefully open your eyes to the potential of panelGridLayout as your number one, go-to layout container.
Just a reminder though, right now, panelGridLayout is only available in 11.1.2.2 and above.

    Read the article

  • Data management in unexpected places

    - by Ashok_Ora
Data management in unexpected places

When you think of network switches, routers, firewall appliances, etc., it may not be obvious that at the heart of these kinds of solutions is an engine that can manage huge amounts of data at very high throughput with low latencies and high availability.

Consider a network router that is processing tens (or hundreds) of thousands of network packets per second. So what really happens inside a router? Packets are streaming in at the rate of tens of thousands per second. Each packet has multiple attributes, for example, a destination and associated SLAs. For each packet, the router has to determine the address of the next "hop" to the destination, and it has to determine how to prioritize this packet. If it's a high priority packet, then it has to be sent on its way before lower priority packets. As a consequence of prioritizing high priority packets, lower priority data packets may need to be temporarily stored (held back), but addressed fairly. If there are security or privacy requirements associated with the data packet, those have to be enforced. You probably need to keep track of statistics related to the packets processed (someone's sure to ask). You have to do all this (and more) while preserving high availability, i.e. if one of the processors in the router goes down, you have to have a way to continue processing without interruption (the customer won't be happy with a "choppy" VoIP conversation, right?). And all this has to be achieved without ANY intervention from a human operator – the router is most likely to be in a remote location – it must JUST CONTINUE TO WORK CORRECTLY, even when bad things happen.

How is this implemented? As soon as a packet arrives, it is interpreted by the receiving software. The software decodes the packet headers in order to determine the destination, the kind of packet (e.g. voice vs. data), the SLAs associated with the "owner" of the packet, etc. It looks up the internal database of "rules" for how to process this packet and handles the packet accordingly. The software might choose to hold on to the packet safely for some period of time, if it's a low priority packet.

Ah – this sounds very much like a database problem. For each packet, you have to minimally:

- Look up the most efficient next "hop" towards the destination. The "most efficient" next hop can change, depending on latency, availability etc.
- Look up the SLA and determine the priority of this packet (e.g. voice calls get priority over data ftp).
- Look up security information associated with this data packet. It may be necessary to retrieve the context for this network packet, since a network packet is a small "slice" of a session. The context for the "header" packet needs to be stored in the router in order to make this work.
- If the priority of the packet is low, then "store" the packet temporarily in the router until it is time to forward the packet to the next hop.
- Update various statistics about the packet.

In most cases, you have to do all this in the context of a single transaction. For example, you want to look up the forwarding address and perform the "send" in a single transaction so that the forwarding address doesn't change while you're sending the packet. So, how do you do all this? Berkeley DB is a proven, reliable, high performance, highly available embeddable database, designed for exactly these kinds of usage scenarios.
Berkeley DB is a robust, reliable, proven solution that is currently being used in these scenarios. First and foremost, Berkeley DB (or BDB for short) is very, very fast. It can process tens or hundreds of thousands of transactions per second. It can be used as a pure in-memory database, or as a disk-persistent database. BDB provides high availability – if one board in the router fails, the system can automatically fail over to another board, with no manual intervention required. BDB is self-administering – there's no need for manual intervention in order to maintain a BDB application, and no need to send a technician to a remote site in the middle of nowhere on a freezing winter day to perform maintenance operations. BDB has been used in over 200 million deployments worldwide over the past two decades for mission-critical applications such as the one described here. You have a choice of spending valuable resources to implement similar functionality, or you could simply embed BDB in your application and off you go! I know what I'd do – choose BDB, so I can focus on my business problem. What will you do?
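As a rough sketch of what that transactional lookup-and-forward could look like with the Berkeley DB C API (the database name, record layout and the send_to() step here are hypothetical, not from the article), consider:

    #include <string.h>
    #include <db.h>

    /* Minimal sketch: look up the next hop for a destination inside a
     * transaction, so the route cannot change while the packet is sent. */
    int forward_packet(DB_ENV *env, DB *routes, const char *dest)
    {
        DB_TXN *txn;
        DBT key, data;
        char next_hop[64];
        int ret;

        memset(&key, 0, sizeof(key));
        memset(&data, 0, sizeof(data));
        key.data = (void *)dest;
        key.size = strlen(dest) + 1;
        data.data = next_hop;             /* read the route into our buffer */
        data.ulen = sizeof(next_hop);
        data.flags = DB_DBT_USERMEM;

        if ((ret = env->txn_begin(env, NULL, &txn, 0)) != 0)
            return ret;

        if ((ret = routes->get(routes, txn, &key, &data, 0)) == 0) {
            /* send_to(next_hop, ...) and statistics updates would go here */
            ret = txn->commit(txn, 0);
        } else {
            txn->abort(txn);              /* e.g. DB_NOTFOUND: no route */
        }
        return ret;
    }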

    Read the article

  • C#/.NET Little Wonders: Getting Caller Information

    - by James Michael Hare
Originally posted on: http://geekswithblogs.net/BlackRabbitCoder/archive/2013/07/25/c.net-little-wonders-getting-caller-information.aspx

Once again, in this series of posts I look at the parts of the .NET Framework that may seem trivial, but can help improve your code by making it easier to write and maintain. The index of all my past little wonders posts can be found here.

There are times when it is desirable to know who called the method or property you are currently executing. Some applications of this could include logging libraries, or possibly even something more advanced that may serve up different objects depending on who called the method. In the past, we mostly relied on the System.Diagnostics namespace and its classes such as StackTrace and StackFrame to see who our caller was, but now in C# 5, we can also get much of this data at compile-time.

Determining the caller using the stack

One of the ways of doing this is to examine the call stack. The classes that allow you to examine the call stack have been around for a long time and can give you a very deep view of the calling chain all the way back to the beginning for the thread that has called you.

You can get caller information by either instantiating the StackTrace class (which will give you the complete stack trace, much like you see when an exception is generated), or by using StackFrame which gets a single frame of the stack trace. Both involve examining the call stack, which is a non-trivial task, so care should be taken not to do this in a performance-intensive situation.

For our simple example, let's say we are going to recreate the wheel and construct our own logging framework. Perhaps we wish to create a simple method Log which will log the string-ified form of an object and some information about the caller. We could easily do this as follows:

    static void Log(object message)
    {
        // frame 1, true for source info
        StackFrame frame = new StackFrame(1, true);
        var method = frame.GetMethod();
        var fileName = frame.GetFileName();
        var lineNumber = frame.GetFileLineNumber();

        // we'll just use a simple Console write for now
        Console.WriteLine("{0}({1}):{2} - {3}",
            fileName, lineNumber, method.Name, message);
    }

So, what we are doing here is grabbing the 2nd stack frame (the 1st is our current method) using a 2nd argument of true to specify we want source information (if available) and then taking the information from the frame. This works fine, and if we tested it out by calling from a file such as this:

    // File c:\projects\test\CallerInfo\CallerInfo.cs

    public class CallerInfo
    {
        public static void Main()
        {
            Log("Hello Logger!");
        }
    }

We'd see this:

    c:\projects\test\CallerInfo\CallerInfo.cs(5):Main - Hello Logger!

This works well, and in fact CallStack and StackFrame are still the best ways to examine deeper into the call stack. But if you only want to get information on the caller of your method, there is another option…

Determining the caller at compile-time

In C# 5 (.NET 4.5) they added some attributes that can be supplied to optional parameters on a method to receive caller information. These attributes can only be applied to methods with optional parameters with explicit defaults. Then, as the compiler determines who is calling your method with these attributes, it will fill in the values at compile-time.
These are the currently supported attributes, available in the System.Runtime.CompilerServices namespace:

- CallerFilePathAttribute – The path and name of the file that is calling your method.
- CallerLineNumberAttribute – The line number in the file where your method is being called.
- CallerMemberName – The member that is calling your method.

So let's take a look at how our Log method would look using these attributes instead:

    static void Log(object message,
        [CallerMemberName] string memberName = "",
        [CallerFilePath] string fileName = "",
        [CallerLineNumber] int lineNumber = 0)
    {
        // we'll just use a simple Console write for now
        Console.WriteLine("{0}({1}):{2} - {3}",
            fileName, lineNumber, memberName, message);
    }

Again, calling this from our sample Main would give us the same result:

    c:\projects\test\CallerInfo\CallerInfo.cs(5):Main - Hello Logger!

However, though this seems the same, there are a few key differences. First of all, there are only 3 supported attributes (at this time) that give you the file path, line number, and calling member. Thus, it does not give you as rich a set of details as a StackFrame (which can give you the calling type as well and deeper frames, for example). Also, these are supported through optional parameters, which means we could call our new Log method like this:

    // They're defaults, why not fill 'em in
    Log("My message.", "Some member", "Some file", -13);

In addition, since these attributes require optional parameters, they cannot be used in properties, only in methods.

These caveats aside, they do let you get similar information inside of methods at a much greater speed! How much greater? Well, let's crank through 1,000,000 iterations of each. Instead of logging to the console, I'll return the formatted string length of each. Doing this, we get:

    Time for 1,000,000 iterations with StackTrace: 5096 ms
    Time for 1,000,000 iterations with Attributes: 196 ms

So you see, using the attributes is much, much faster! Nearly 25x faster, in fact.

Summary

There are a few ways to get caller information for a method. The StackFrame allows you to get a comprehensive set of information spanning the whole call stack, but at a heavier cost. On the other hand, the attributes allow you to quickly get at caller information baked in at compile-time, but to do so you need to create optional parameters in your methods to support it.

Technorati Tags: Little Wonders,CSharp,C#,.NET,StackFrame,CallStack,CallerFilePathAttribute,CallerLineNumberAttribute,CallerMemberName

    Read the article

  • EM CLI, diving in and beyond!

    - by Maureen Byrne
Doing more in less time… isn't that what we all strive to do? With this in mind, I put together two screen watches on the Oracle Enterprise Manager 12c command line interface, or EM CLI as it is also known. There is a wealth of information on any topic that you choose to read about, from manual pages to coding documents… might I even say blog posts? In our busy lives it is so nice to just sit back with a short video, watch and learn enough to dive in. Doing more in less time is the essence of EM CLI. It enables you to script fundamental and complex administrative tasks in an elegant way, thanks to the Jython scripting language. Repetitive tasks can be scripted and reused again and again. Sure, a Graphical User Interface provides a more intuitive step by step approach to tasks, and it provides a way of quickly becoming familiar with a product and its many features, and it is definitely the way to go when viewing performance data and historical trending… but for repetitive and complex tasks, scripting is the way to go!

Let us take the everyday task of creating an administrator. Using EM CLI in interactive mode, the command could look like this:

    emcli> create_user(name='jan.doe', type='EXTERNAL_USER')

This command creates an administrator called jan.doe which is an externally authenticated user, possibly LDAP or SSO, defined by the EXTERNAL_USER tag. The create_user procedure takes many arguments; see the documentation for more information.

Now, where EM CLI really shines and shows its power is in creating multiple users. Regardless of the number, tens or thousands, the effort is the same. With the use of a standard programming construct, a loop, you can place your create_user() procedure within it. Using a loop allows you to iterate through a previously created list, creating new users until the list is complete. Using EM CLI in script mode, your Jython loop would look something like this:

    for user in list_of_users:
        create_user(name=user, expire='true', password='welcome123')

This Jython code snippet iterates through a previously defined list of names, list_of_users, takes each name (user in this case) and creates an administrator, setting the password to welcome123 but forcing the user to reset it when they first log in.

This is only one of over four hundred procedures created to expose Oracle Enterprise Manager 12c functionality in a powerful and programmatic way. It has been a few months since we released EM CLI with the scripting option, and we are seeing many users adapt to this fun and powerful way of using Oracle Enterprise Manager 12c.

What are the first steps? Watch these screen watches, and dive in. The first screen watch steps you through where and how to download and install, and how to run your first few commands. The second screen watch steps you through a few scripts. Next time, I am going to show you the basic building blocks for writing a Jython script to perform Oracle Enterprise Manager 12c administrative tasks. Join this growing group of EM CLI users…. Dive in!
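As a small sketch of what such a script might look like end to end (the file name, user names and password below are placeholders, not from the post), you could save the loop to a file and pass it to EM CLI in script mode:

    # create_users.py - a rough sketch; run with: emcli @create_users.py
    # hypothetical list of externally authenticated administrators to create
    list_of_users = ['jan.doe', 'sam.smith', 'pat.jones']

    for user in list_of_users:
        # expire='true' forces a password reset on first login
        create_user(name=user, expire='true', password='welcome123')

    print('Created %d administrators' % len(list_of_users))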

    Read the article

  • Cannot connect to Open WiFi hotspot created by Android

    - by Bibhas
I'm trying to share my 3G data connection via WiFi hotspot. I have an open hotspot running on my phone (Xperia Neo V - MT11i - Android 2.3.4), but I cannot connect to it from my Ubuntu system. Here is the syslog while I try to connect to it:

    NetworkManager[1077]: <info> Activation (wlan0) starting connection 'TheNeo'
    NetworkManager[1077]: <info> (wlan0): device state change: disconnected -> prepare (reason 'none') [30 40 0]
    NetworkManager[1077]: <info> Activation (wlan0) Stage 1 of 5 (Device Prepare) scheduled...
    NetworkManager[1077]: <info> Activation (wlan0) Stage 1 of 5 (Device Prepare) started...
    NetworkManager[1077]: <info> Activation (wlan0) Stage 2 of 5 (Device Configure) scheduled...
    NetworkManager[1077]: <info> Activation (wlan0) Stage 1 of 5 (Device Prepare) complete.
    NetworkManager[1077]: <info> Activation (wlan0) Stage 2 of 5 (Device Configure) starting...
    NetworkManager[1077]: <info> (wlan0): device state change: prepare -> config (reason 'none') [40 50 0]
    NetworkManager[1077]: <info> Activation (wlan0/wireless): connection 'TheNeo' requires no security. No secrets needed.
    NetworkManager[1077]: <info> Config: added 'ssid' value 'TheNeo'
    NetworkManager[1077]: <info> Config: added 'scan_ssid' value '1'
    NetworkManager[1077]: <info> Config: added 'key_mgmt' value 'NONE'
    NetworkManager[1077]: <info> Activation (wlan0) Stage 2 of 5 (Device Configure) complete.
    NetworkManager[1077]: <info> Config: set interface ap_scan to 1
    NetworkManager[1077]: <info> (wlan0): supplicant interface state: inactive -> scanning
    wpa_supplicant[29352]: Trying to authenticate with 5c:b5:24:2f:d1:2f (SSID='TheNeo' freq=2462 MHz)
    NetworkManager[1077]: <info> (wlan0): supplicant interface state: scanning -> authenticating
    kernel: [17498.113553] wlan0: direct probe to 5c:b5:24:2f:d1:2f (try 1/3)
    kernel: [17498.310138] wlan0: direct probe to 5c:b5:24:2f:d1:2f (try 2/3)
    kernel: [17498.510069] wlan0: direct probe to 5c:b5:24:2f:d1:2f (try 3/3)
    kernel: [17498.710083] wlan0: direct probe to 5c:b5:24:2f:d1:2f timed out
    wpa_supplicant[29352]: Trying to authenticate with 5c:b5:24:2f:d1:2f (SSID='TheNeo' freq=2462 MHz)
    kernel: [17504.779927] wlan0: direct probe to 5c:b5:24:2f:d1:2f (try 1/3)
    kernel: [17504.976420] wlan0: direct probe to 5c:b5:24:2f:d1:2f (try 2/3)
    kernel: [17505.176379] wlan0: direct probe to 5c:b5:24:2f:d1:2f (try 3/3)
    kernel: [17505.376314] wlan0: direct probe to 5c:b5:24:2f:d1:2f timed out
    wpa_supplicant[29352]: Trying to authenticate with 5c:b5:24:2f:d1:2f (SSID='TheNeo' freq=2462 MHz)
    kernel: [17511.478385] wlan0: direct probe to 5c:b5:24:2f:d1:2f (try 1/3)
    kernel: [17511.674738] wlan0: direct probe to 5c:b5:24:2f:d1:2f (try 2/3)
    kernel: [17511.874655] wlan0: direct probe to 5c:b5:24:2f:d1:2f (try 3/3)
    kernel: [17512.074659] wlan0: direct probe to 5c:b5:24:2f:d1:2f timed out
    wpa_supplicant[29352]: Trying to authenticate with 5c:b5:24:2f:d1:2f (SSID='TheNeo' freq=2462 MHz)
    kernel: [17518.152643] wlan0: direct probe to 5c:b5:24:2f:d1:2f (try 1/3)
    kernel: [17518.349064] wlan0: direct probe to 5c:b5:24:2f:d1:2f (try 2/3)
    kernel: [17518.549051] wlan0: direct probe to 5c:b5:24:2f:d1:2f (try 3/3)
    kernel: [17518.748999] wlan0: direct probe to 5c:b5:24:2f:d1:2f timed out
    wpa_supplicant[29352]: Trying to authenticate with 5c:b5:24:2f:d1:2f (SSID='TheNeo' freq=2462 MHz)
    kernel: [17524.858896] wlan0: direct probe to 5c:b5:24:2f:d1:2f (try 1/3)
    kernel: [17525.055404] wlan0: direct probe to 5c:b5:24:2f:d1:2f (try 2/3)
    kernel: [17525.255387] wlan0: direct probe to 5c:b5:24:2f:d1:2f (try 3/3)
    kernel: [17525.455254] wlan0: direct probe to 5c:b5:24:2f:d1:2f timed out
    wpa_supplicant[29352]: Trying to authenticate with 5c:b5:24:2f:d1:2f (SSID='TheNeo' freq=2462 MHz)
    kernel: [17531.589176] wlan0: direct probe to 5c:b5:24:2f:d1:2f (try 1/3)
    kernel: [17531.785747] wlan0: direct probe to 5c:b5:24:2f:d1:2f (try 2/3)
    kernel: [17531.985724] wlan0: direct probe to 5c:b5:24:2f:d1:2f (try 3/3)
    kernel: [17532.185610] wlan0: direct probe to 5c:b5:24:2f:d1:2f timed out
    wpa_supplicant[29352]: Trying to authenticate with 5c:b5:24:2f:d1:2f (SSID='TheNeo' freq=2462 MHz)
    kernel: [17538.329257] wlan0: direct probe to 5c:b5:24:2f:d1:2f (try 1/3)
    kernel: [17538.528003] wlan0: direct probe to 5c:b5:24:2f:d1:2f (try 2/3)
    kernel: [17538.728024] wlan0: direct probe to 5c:b5:24:2f:d1:2f (try 3/3)
    kernel: [17538.927922] wlan0: direct probe to 5c:b5:24:2f:d1:2f timed out
    wpa_supplicant[29352]: Trying to authenticate with 5c:b5:24:2f:d1:2f (SSID='TheNeo' freq=2462 MHz)
    kernel: [17545.022036] wlan0: direct probe to 5c:b5:24:2f:d1:2f (try 1/3)
    kernel: [17545.218339] wlan0: direct probe to 5c:b5:24:2f:d1:2f (try 2/3)
    kernel: [17545.418319] wlan0: direct probe to 5c:b5:24:2f:d1:2f (try 3/3)
    kernel: [17545.618206] wlan0: direct probe to 5c:b5:24:2f:d1:2f timed out
    wpa_supplicant[29352]: Trying to authenticate with 5c:b5:24:2f:d1:2f (SSID='TheNeo' freq=2462 MHz)
    kernel: [17551.724177] wlan0: direct probe to 5c:b5:24:2f:d1:2f (try 1/3)
    kernel: [17551.920685] wlan0: direct probe to 5c:b5:24:2f:d1:2f (try 2/3)
    kernel: [17552.120597] wlan0: direct probe to 5c:b5:24:2f:d1:2f (try 3/3)
    kernel: [17552.320526] wlan0: direct probe to 5c:b5:24:2f:d1:2f timed out
    NetworkManager[1077]: <warn> Activation (wlan0/wireless): association took too long, failing activation.
    NetworkManager[1077]: <info> (wlan0): device state change: config -> failed (reason 'supplicant-timeout') [50 120 11]
    NetworkManager[1077]: <warn> Activation (wlan0) failed for access point (TheNeo)
    NetworkManager[1077]: <warn> Activation (wlan0) failed.
    NetworkManager[1077]: <info> (wlan0): device state change: failed -> disconnected (reason 'none') [120 30 0]
    NetworkManager[1077]: <info> (wlan0): deactivating device (reason 'none') [0]
    NetworkManager[1077]: <info> (wlan0): supplicant interface state: authenticating -> disconnected
    NetworkManager[1077]: <warn> Couldn't disconnect supplicant interface: This interface is not connected.

Why does the direct probe to 5c:b5:24:2f:d1:2f time out? Any idea?

    Read the article

  • No GRUB Screen or recovery mode on Boot after 12.04 Upgrade

    - by Nick
I tried the live boot CD and boot-repair, and also loaded the Desktop install CD, and it looks like all partitions check out OK. However, when I try to boot Linux (the only bootable partition on the computer) I get a blank screen. Every so often the screen gives me something akin to:

    Assuming write through cache
    Asking for cache data failed

It appears to start booting, then hangs. Ctrl+Alt+Delete shuts down the machine. The last message during boot is "Starting TiMidity++ ALSA midi emulation... [OK]".

I used boot-repair to generate a boot info report. One thing looks odd to me: it reports a missing core.img on /dev/sda1. Here is the full info:

    Boot Info Script 0.61.full + Boot-Repair extra info [Boot-Info August 2nd 2012]

    ============================= Boot Info Summary: ===============================

    => Grub2 (v1.99) is installed in the MBR of /dev/sda and looks at sector 1 of
       the same hard drive for core.img. core.img is at this location and looks
       for (,msdos1)/boot/grub on this drive.
    => Windows is installed in the MBR of /dev/sdb.

    sda1: __________________________________________
        File system:       ext4
        Boot sector type:  Grub2 (v1.99)
        Boot sector info:  Grub2 (v1.99) is installed in the boot sector of sda1
                           and looks at sector 18406911 of the same hard drive
                           for core.img, but core.img can not be found at this
                           location.
        Operating System:  Ubuntu 12.04.1 LTS
        Boot files:        /boot/grub/grub.cfg /etc/fstab
                           /boot/extlinux/extlinux.conf /boot/grub/core.img

    sda2: __________________________________________
        File system:       Extended Partition
        Boot sector type:  -
        Boot sector info:

    sda5: __________________________________________
        File system:       swap
        Boot sector type:  -
        Boot sector info:

    sdb1: __________________________________________
        File system:       ntfs
        Boot sector type:  Windows XP: NTFS
        Boot sector info:  No errors found in the Boot Parameter Block.
        Operating System:
        Boot files:

    ============================ Drive/Partition Info: =============================

    Drive: sda _______________________________________
    Disk /dev/sda: 160.0 GB, 160041885696 bytes
    255 heads, 63 sectors/track, 19457 cylinders, total 312581808 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes

    Partition   Boot   Start Sector   End Sector    # of Sectors   Id   System
    /dev/sda1   *      63             307,339,514   307,339,452    83   Linux
    /dev/sda2          307,339,515    312,576,704   5,237,190      5    Extended
    /dev/sda5          307,339,578    312,576,704   5,237,127      82   Linux swap / Solaris

    Drive: sdb _______________________________________
    Disk /dev/sdb: 320.1 GB, 320072933376 bytes
    255 heads, 63 sectors/track, 38913 cylinders, total 625142448 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes

    Partition   Boot   Start Sector   End Sector    # of Sectors   Id   System
    /dev/sdb1          2,048          625,142,447   625,140,400    7    NTFS / exFAT / HPFS

    "blkid" output: ____________________________________
    Device       UUID                                   TYPE       LABEL
    /dev/loop0                                          squashfs
    /dev/sda1    11b4d633-7863-40b2-a6ca-da5f82c3ad0b   ext4
    /dev/sda5    cb8d65f4-8cf9-4088-b804-e3dea2151033   swap
    /dev/sdb1    349E7C109E7BC8BE                       ntfs       Personal1

    ================================ Mount points: =================================

    Device       Mount_Point        Type      Options
    /dev/sdb1    /media/Personal1   fuseblk   (rw,nosuid,nodev,allow_other,blksize=4096,default_permissions)
    /dev/sr0     /live/image        iso9660   (ro,noatime)

...(a bunch of config file info - let me know if anyone wants to see it!)

But usually I just get "Cannot Display This Video Mode", which I know means the video output is not usable by the monitor.
I'm looking for a way to get into a recovery mode. I'd really like to avoid wiping the drive. Any thoughts?

    Read the article

  • Generation 4 Modular Data Center

    - by kaleidoscope
Microsoft launched its Generation 4 Modular Data Center design at PDC 09: a 20-foot container built on a container-based model. Microsoft says the use of server-packed containers – known as Pre-Assembled Components (PACs) – will allow it to slash the cost of building its new data centers. The design is completely optimized for outdoor use, relying on fresh air ("free cooling") rather than air conditioning. Its exterior is designed to draw fresh air into the cold aisle and expel hot air from the rear of the hot aisle. More details can be found at: http://www.datacenterknowledge.com/archives/2009/11/18/microsofts-windows-azure-cloud-container/

Rituraj, J

    Read the article

  • jQuery Templates, Data Link

    - by Renso
jQuery Templates, Data Link, and Globalization

I am sure you must have read Scott Guthrie's blog post about jQuery support and officially supporting jQuery's templating, data linking and globalization; if not, here it is: jQuery Templating.

Since we are an open source shop and use jQuery and jQuery plugins extensively, to say the least, I decided to look into the templating a bit and see what data linking is all about. For those not familiar with those terms, here is the summary. There is plenty of material out there on what it is, but here is what, in my experience, it means:

jQuery Templating: A templating engine that allows you to specify a client-side template in which you indicate which properties/tags you want dynamically updated. You in a sense specify which parts of the html are dynamic, and since it is pluggable you are able to use tools like jQuery data linking and others to sync up your template with data. What makes it more powerful is that you can easily work with rows of data, adding and removing rows. Once the template has been generated, which you do dynamically on a client-side event, you then append/inject the resulting template somewhere in your DOM. For example, you would get a JSON object from the database, map it to your template, the engine populates the template with your data in the indicated places, and then you append it to, say, a row in a table. I have not found it that useful for a single record of data, since you could easily just get a partial view from the server via an html-type ajax call. It really shines when you dynamically add/remove rows from a list in the DOM. I have not found an alternative that matches the functionality of the jQuery template, and it helps of course that Microsoft officially supports it. In future versions of the jQuery plug-in it may even ship as part of the standard jQuery library and with future versions of Visual Studio.

jQuery Data Linking: In short, I was initially fascinated by how, with one line of code, I can sync up my JSON object with my form elements. That's where my enthusiasm stopped. It is one line to let it deal with syncing up your form with your JSON object, but it is not bidirectional as they state, and I tried all the workarounds they suggested and none of them work. The problem is that when you update your JSON object it DOES NOT sync it up with your form. As an example, accounts are being edited client side by selecting the account from a list by clicking on the row; it then fetches the entire account JSON object via an ajax json-type call and refreshes the form with the account's details from the new JSON object. What is the use of syncing up my JSON with the form if I still have to programmatically sync up my new JSON object with each DOM property?! So you may ask: "what is the alternative?"
Good question, and the same one I was pondering: maybe I can just use it for keeping my form in sync with my JSON object so I can post that JSON object back to the server and update my database. That's when I discovered Knockout:

Knockout

It addresses the issues mentioned above and also supports event handling through the observer pattern. Not wanting to go into detail here, Steve Sanderson, the creator of Knockout, has already done a terrific job of that; thanks Steve for a great plug-in! Best of all, it integrates perfectly with the jQuery templating engine as well. I have not found an alternative to this plugin that supports the same depth and width of functionality, and I would recommend it to anyone.

The only drawback is the embedded html attributes (data-bind="") that you have to add to the HTML, in my opinion tying your behavior to your HTML, where I like to separate behavior from HTML as well as CSS, so the HTML is purely there to define content, not styling or behavior. But there are plusses to this as well, and also a nifty workaround that I will shortly mention here with an example.

Instead of data binding an html tag with Knockout event handling like so:

    <%=Html.TextBox("PrepayDiscount", String.Empty, new { @class = "number" })%>

Do:

    <%=Html.DataBoundTextBox("PrepayDiscount", String.Empty, new { @class = "number" })%>

The html extension above then takes care of the internals, and you could swap Knockout for something else inside the extension if you want, keeping the HTML plugin agnostic. Here is what the extension looks like; you can easily build a whole library to support all kinds of data binding options from this:

    public static class HtmlExtensions
    {
        public static MvcHtmlString DataBoundTextBox(this HtmlHelper helper, string name, object value, object htmlAttributes)
        {
            var dic = new RouteValueDictionary(htmlAttributes);
            dic.Add("data-bind", String.Format("value: {0}", name));
            return helper.TextBox(name, value, dic);
        }
    }

Hope this helps in making a decision on when and where to consider jQuery templating, data linking and Knockout.
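To make the moving parts concrete, here is a minimal sketch of the two pieces discussed above working together (all IDs, field names, data and the local script paths are hypothetical, not from the post): a jQuery template rendering a list of rows, and a Knockout observable keeping a form field in sync both ways:

    <script src="http://code.jquery.com/jquery-1.4.4.min.js"></script>
    <script src="jquery.tmpl.min.js"></script>    <!-- jQuery Templates plugin, local copy -->
    <script src="knockout-1.2.1.js"></script>     <!-- Knockout, local copy -->

    <script id="accountRowTmpl" type="text/x-jquery-tmpl">
        <tr><td>${Name}</td><td>${PrepayDiscount}</td></tr>
    </script>

    <table><tbody id="accountRows"></tbody></table>
    <input id="PrepayDiscount" data-bind="value: prepayDiscount" />

    <script type="text/javascript">
        // render one row per account with the jQuery template plugin
        var accounts = [{ Name: 'Acme', PrepayDiscount: 10 },
                        { Name: 'Contoso', PrepayDiscount: 15 }];
        $('#accountRowTmpl').tmpl(accounts).appendTo('#accountRows');

        // Knockout: two-way binding between the input and the observable
        var viewModel = { prepayDiscount: ko.observable(10) };
        ko.applyBindings(viewModel);

        // unlike jQuery data linking, updating the observable also updates the form
        viewModel.prepayDiscount(25);
    </script>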

    Read the article

  • What developer conferences are you going to this year?

    - by mbcrump
This short list is what I consider to be the "cream of the crop" in developer conferences. This is also the list of conferences that I plan on attending in 2011. If you feel your conference is just as good, then shoot me an email at michael[at]michaelcrump[dot]net, and if possible I will check it out.

In-Person Event
Las Vegas on April 18th-22nd, 2011
Redmond on October 17th-21st, 2011
Orlando on December 5th-9th, 2011

Visual Studio Live – I attended this event in November of last year and blogged about my experience. I am also planning on going back to the Orlando session in December of this year. So what did I like the most about this event? Being able to interact one-on-one with a majority of the speakers. If you read my blog post then you will see a list of the speakers that I met up with. I also made a lot of great connections with other professional developers all over the world. They are having an event in Las Vegas on April 18th-22nd. I noticed at this event that they have added a new track on mobile. Being a big fan of mobile, I feel that this is a great move. They also have a great selection for Silverlight developers, including Billy Hollis and Rocky Lhotka. For the full lineup of conference tracks, sessions and speakers visit http://bit.ly/VSLiveTrks. If you are interested in this then you can register here by February 16th. I must add that you can save $300 bucks by getting the early-bird special.

Virtual Conference

SSWUG (DBTechCon) – holds the largest virtual conference in the information technology industry. It is also special to me because they selected a majority of my Silverlight content for the April conference. There are no traveling fees, and all of the sessions are recorded so you can watch them on-demand for $189 bucks (early-bird special). For the entire speaker list, click here. The session list has also been published. If you are interested in this then you can register here.

In-Person Event
Knoxville, TN on June 3rd/4th, 2011

CodeStock.org – If you live in the South then you have heard of CodeStock. To my knowledge, they have only had 3 events so far, and they were a huge success. It was such a success that after the last event, everyone was telling me how good it was and how much they enjoyed it. They currently have a call for speakers going on right now, so if you have sessions then be sure to submit yours. So, what makes them stand out? Well, for starters, Michael Neal (organizer) developed an open API so conference attendees could build their own apps for the sessions. They also encouraged their speakers to go to other sessions instead of staying in a "speaker room". Another cool feature is that they are uploading videos from the conference so everyone can benefit. They are currently looking for sponsorship, so help out if you can.

In-Person Event
Redmond, WA on October 28/29, 2011 *NOT 100% SURE AT THIS POINT*

PDC 11 – OK, so the logo should be PDC11, but it's not out yet. This event is located on Microsoft's campus in Redmond, WA. It is probably one of the most well-known conferences for developers to attend. One of the big perks of this event is that you typically come away with free stuff. In 2010 they gave away Windows 7 Phones. I remember years earlier they gave away laptops. This of course isn't the only reason to go; you may get to tour the Microsoft campus. Since PDC is a huge event, you can view all the sessions for free. Mike Taulty created a nice Silverlight application that consumes the OData feed. You can download it here.
If everything goes as planned, I will be at all of these events. If you plan on going then send me a tweet and we will do lunch or dinner. I love meeting new developers and talking .net.  Subscribe to my feed

    Read the article

< Previous Page | 370 371 372 373 374 375 376 377 378 379 380 381  | Next Page >