Search Results

Search found 19446 results on 778 pages for 'network printer'.


  • CodePlex Daily Summary for Wednesday, February 24, 2010

    CodePlex Daily Summary for Wednesday, February 24, 2010New ProjectsADO.Net DataSets to ExtJs.data.Store: A JavaScript (and C#) based project to reduce the amount of client-side code necessary to consume ADO.Net / ASP.Net web services when using ExtJS.AMP.Net Wrapper: AMP is a platform to build on-line marketplaces (http://www.poweredbyamp.com). AMP.Net provided Object-Like interaction with AMP's restful service...ArkSwitch: ArkSwitch is an easy to use, finger-friendly task manager for Windows Mobile 6.5.3 (with a WM6.5 compatibility mode). It is developed mainly in C#,...Biffen: Cinema-booking project in Computer Science at University College Nordjylland, Denmark.Braintree Client Library: Client library for integrating with the Braintree Gateway.Business Framework: A framework which helps building business applications. It provides business rules, validation rules and a text-based language for writing rules. I...Camp Araminta: This project will be used to coordinate development efforts on the Camp Araminta website.ChoServiceHost: Simple and easy way to create and host Windows Service Applications in .NET 3.5/Visual Studio 2008Delta College Game Development Project: Project site for cs 16 game development classDotNetNuke® Labs: DotNetNuke Labs is a collection of "research & development" type projects for the DotNetNuke platform.Generic web part for hosting Silverlight content on SharePoint sites (WSS,MOSS): This is a generic web part for hosting Silverlight content on WSS 30 and MOSS 2007 sites. The objective of this web part was to make it easy for us...GpTiming: GpTiming is a simple "lab" application related to race events, based on a Domain Model.HTML Forms in Windows Forms: As the names suggests this code library is designed to introduce HTML code (primarily form code) into Windows Forms. It was created because standar...imgur uploader - .net open source uploader for image sharing site imgur: Imgur uploader strives to be an easy to use uploader for images you would like to share with friends and family. It is written in c#.kuuy static system: kuuy static system is a full static publish website system!LaTeX Grapher: The goal of this project is to make a tool that facilitates making high quality two dimensional vector graphic function plots with a minimal amount...LightREST: A .NET library to consume REST-based HTTP services.Machiavelli: Machiavelli is Stackoverflow inspired project that I am working on following Andrew Siemer's article on DotNetSlackers. Mover: Mover makes it easier for developers to create programmatic animations in Silverlight. It provides an expressive API to the platform's underlying S...MVC Presenter: ASP.NET MVC 2で作るプレゼンビューアーnHibernate Attribute mapping: How to use Attibute mapping with a ManyToMany Relationship with nHibernateNIPO Data Processing Component Framework: NIPO is a general purpose component framework for data processing applications (that follow the IPO-principle). Its plugin-based architecture makes...PowerShell Remote File Explorer: This project intends to develop a Windows forms based file explorer to browse/transfer files over PowerShell 2.0 remoting channel. 
The file transfe...Process Flow Tracking of Biomass Distribution Project (University of Mumbai): At Larsen & Toubro Infotech India Ltd., my team worked on a SCM (Supply Chain Management) based project titled 'Process Flow Tracking of Biomass Di...VS2010 Rc1 Fix: Illustrates a fix for working with the ASAP.NET Wizard control with VS2010 RC1Yicker: a microblog program devolep by c#.New ReleasesADO.Net DataSets to ExtJs.data.Store: Ext.net: This is the first version of Ext.net. This version contains a single class, Ext.net.Store which extends the Ext.data.Store class to consume ADO.Ne...AMP.Net Wrapper: AMP.Net v1.0: Provides abstraction for all the product search functionality offered by AMP.ArkSwitch: ArkSwitch legacy versions: Old versions - no need to download themArkSwitch: ArkSwitch v1.1.0: ArkSwitch v1.1.0Braintree Client Library: Braintree 1.0.0: Braintree .NET client library 1.0.0Business Framework: BusinessFramework preview: Early preview bits. See Rules for a sample.Business Framework: Samples: SamplesCC.Votd: CC.Votd 1.0.10.224: This is the initial release of CC.Votd. Marking as beta since I'm the only one who has used it up to this point.ChoServiceHost: ChoServiceHost.msi: Easy way to develop Windows Service applications in .NET 3.5/VS.NET 2008. (Installer)ChoServiceHost: ChoServiceHost-Src.zip: Easy way to develop Windows Service applications in .NET 3.5/VS.NET 2008. (Source Files)CHS Extranet: Beta 2.4: Beta 2.4 Release: Change Log: Added HTML preview options for XLS, XLSX, DOCX File Changes: ~/MyComputer.aspx ~/mycomputer.css ~/basestyle.css...Composure: AvalonDock-55751-VS2010.NET4: This is a "convenience build" of AvalonDock (drop 55751) for VIsual Studio 2010 and .NET 4.0. Nothing has been altered in the source code (which ...Data Access Component: Version 2.6: Add LINQ support.Desktop Google Reader: 1.3 Beta 1: New features: Read it Later included (see http://readitlaterlist.com/) Liking added (working: see number of liking users, see if liking yourself,...Explorer Plus: Explorer Plus v0.3: Amazon Locales AddedFree Silverlight & WPF Chart Control - Visifire: Visifire SL and WPF Charts 3.0.3 Released: Hi, Today we have released the final version of Visifire v3.0.3 which contains the following major features: * DataBinding. * IndicatorEn...Generic web part for hosting Silverlight content on SharePoint sites (WSS,MOSS): CTP: The objective of this release was to gather feedback from the wider community. I intend to pursue further development and make fixes wherever appro...HTML Forms in Windows Forms: HTMLForms 1.0: First Release.imgur uploader - .net open source uploader for image sharing site imgur: Release 2010-02-23-01: This is the first codeplex release! Let mayhem commence...Jeremi Stadler: Stick Tops 2.5: Sticktops is a very light program that makes it easy to paste stuff on small notes on the screen. All notes you have is saved on a server so you ca...kuuy static system: kss_v1.0beta sql: kss_v1.0beta sql scripts sourceMDownloader: MDownloader-0.15.2.55998: Fixed detecting uploading.com dead links; Added hiding rss entries without files;Mover: MoverLib for Silverlight 3: A first version of MoverLib for Silverlight 3.nHibernate Attribute mapping: 1.0: Source CodenHibernate Attribute mapping: Download 1: Zip fileNodeXL: Network Overview, Discovery and Exploration for Excel: NodeXL Class Libraries, version 1.0.1.113: The NodeXL class libraries can be used to display network graphs in .NET applications. 
To include a NodeXL network graph in a WPF desktop or Windo...NodeXL: Network Overview, Discovery and Exploration for Excel: NodeXL Excel 2007 Template, version 1.0.1.113: The NodeXL Excel 2007 template displays a network graph using edge and vertex lists stored in an Excel 2007 workbook. What's NewThis version inclu...OAuthLib: OAuthLib (1.6.0.0): Difference between previous version is as next. 7079 Make it possible to pass factory method of request in ObtainUnauthorizedRequestToken and Reque...patterns & practices SharePoint Guidance: SPG2010 Drop 5: SharePoint Guidance Drop Notes Microsoft patterns and practices ****************************************** ***************************************...PowerShell Remote File Explorer: PSRemoteExplorer 0.1: This release is the initial release of PowerShell remote file explorer. This enables the basic functionality of a remote file explorer. This also p...Reusable Library: v1.0.3: A collection of reusable abstractions for enterprise application developer.SharePoint Outlook Connector: Version 1.0.2.4: Version 1.0.2.4 Minor bugs have been fixed.Silverlight Server File Manager: First production release: This release is in production. Release on change set 37268.SIMD Detector: 2nd Release: Released C/CLI assembly project for use in CSharp and VB. Tested in CSharp console application. A Windows Form application coming soon. Projects ma...Source Analysis Policy: Source Analysis Policy v1.1 SP1: This release contains the compiled, and signed binaries in an installation package. This package also registers the policy with Microsoft Visual St...SpecExpress : A Fluent Validation Framework: SpecExpress 1.1: UpdatesAdded Validation Contexts feature Fixed bug with handling for Bool Types and Required MessageStore now allows for overriding individual ...VCC: Latest build, v2.1.30223.0: Automatic drop of latest buildVS2010 Rc1 Fix: RC1Fix01: This is a very simple project implementing a Microsoft Walkthrough at http://msdn.microsoft.com/en-us/library/wdb4eb30%28VS.100%29.aspx and the man...WPF AutoComplete TextBox Control: version 1.0: Initial releaseMost Popular ProjectsASP.NET Ajax LibraryManaged Extensibility FrameworkAccelerators for Microsoft Dynamics CRMWindows 7 USB/DVD Download ToolDotNetZip LibraryMDownloaderVirtual Router - Wifi Hot Spot for Windows 7 / 2008 R2MFCMAPIDroid ExplorerUseful Sharepoint Designer Custom Workflow ActivitiesMost Active ProjectsDinnerNow.netRawrBlogEngine.NETInfoServiceNB_Store - Free DotNetNuke Ecommerce Catalog ModuleRapid Entity Framework. (ORM). CTP 2SharpMap - Geospatial Application Framework for the CLRjQuery Library for SharePoint Web Servicespatterns & practices – Enterprise LibraryXcoordination Application Space

    Read the article

  • Oracle Linux 6 DVDs Now Available

    - by sergio.leunissen
    On Sunday 6 February 2011, Oracle Linux 6 was released on the Unbreakable Linux Network for customers with an Oracle Linux support subscription. Shortly after that, the Oracle Linux 6 RPMs were made available on our public yum server. Today we published the installation DVD images on edelivery.oracle.com/linux. Oracle Linux 6 is free to download, install and use. The full release notes are here, but similar to my recent post about Oracle Linux 5.6, I wanted to highlight a few items about this release.

    Unbreakable Enterprise Kernel
    As is the case with Oracle Linux 5.6, the default installed kernel on the x86_64 platform in Oracle Linux 6 is the Unbreakable Enterprise Kernel. If you haven't already, I highly recommend you watch the replay of this webcast by Chris Mason on the performance improvements made in this kernel.

        # uname -r
        2.6.32-100.28.5.el6.x86_64

    The Unbreakable Enterprise Kernel is delivered via the package kernel-uek:

        [root@localhost ~]# yum info kernel-uek
        ...
        Installed Packages
        Name        : kernel-uek
        Arch        : x86_64
        Version     : 2.6.32
        Release     : 100.28.5.el6
        Size        : 84 M
        Repo        : installed
        From repo   : anaconda-OracleLinuxServer-201102031546.x86_64
        Summary     : The Linux kernel
        URL         : http://www.kernel.org/
        License     : GPLv2
        Description : The kernel package contains the Linux kernel (vmlinuz), the core of
                    : any Linux operating system. The kernel handles the basic functions
                    : of the operating system: memory allocation, process allocation,
                    : device input and output, etc.

    ext4 file system
    The ext4 or fourth extended filesystem replaces ext3 as the default filesystem in Oracle Linux 6.

        # mount
        /dev/mapper/VolGroup-lv_root on / type ext4 (rw)
        proc on /proc type proc (rw)
        sysfs on /sys type sysfs (rw)
        devpts on /dev/pts type devpts (rw,gid=5,mode=620)
        tmpfs on /dev/shm type tmpfs (rw,rootcontext="system_u:object_r:tmpfs_t:s0")
        /dev/sda1 on /boot type ext4 (rw)
        none on /proc/sys/fs/binfmt_misc type binfmt_misc (rw)

    Red Hat compatible kernel
    Oracle Linux 6 also includes a Red Hat compatible kernel built directly from RHEL source. It's already installed, so booting it is a matter of editing /etc/grub.conf.

        # rpm -qa | grep kernel-2.6.32
        kernel-2.6.32-71.el6.x86_64

    Oracle Linux 6 no longer includes a Red Hat compatible kernel with Oracle bug fixes. The only Red Hat compatible kernel included is the one built directly from RHEL source.

    Yum-only access to Unbreakable Linux Network (ULN)
    Oracle Linux 6 uses yum exclusively for access to Unbreakable Linux Network. To register your system with ULN, use the following command:

        # uln_register

    No Itanium Support
    Oracle Linux 6 is not supported on the Itanium (ia64) platform.

    Next Steps
    - Read the release notes
    - Download Oracle Linux 6 for free
    - Discuss on the Oracle Linux forum

    Read the article

  • Enable Media Streaming in Windows Home Server to Windows Media Player

    - by Mysticgeek
    One of the cool features of Windows Home Server is the ability to stream photos, music, and video to other computers on your network. Today we take a look at how to enable streaming in WHS to Windows Media Player in Vista and Windows 7.

    Turn on Media Streaming on WHS
    To enable Media Streaming from Windows Home Server, open the Windows Home Server Console and click on Settings. Now in the Settings screen select Media Sharing, then in the right column under Media Library Sharing turn on Library Sharing for the folders you want to stream.

    If you have a Windows 7 machine on your network, make sure media streaming is enabled. You should then see the server under Other Libraries and can start streaming your media collection.

    Stream Video to Media Player 11
    Now let's say you want to stream videos to another member of your household who's using a Vista machine in another room through Windows Media Player 11. Open WMP and click on Library, then Media Sharing. Now click the box next to Find media that others are sharing, then click Ok. Now you should see the server listed under Library…where in this example it's geekserver. Since we only enabled Video streaming for this example, we need to click on the category icon and select Video. Now you can scroll through the available videos… And start enjoying your favorite videos streamed from the server through WMP 11 on Vista.

    Of course you can use this method to stream photos and music as well; you just need to enable what you want to stream from the Home Server Console. You can also stream your media to Windows Media Center and Xbox, which we will be covering soon.

    Read the article

  • Ubuntu 10.10 Mouse and Keyboard Freeze

    - by Kev
    I installed Ubuntu 10.10 today and have had mouse problems since. Symptoms: at some arbitrary point in time (frequency: 2-3 times per hour), the mouse and keyboard stop working, seemingly forever. When I start System Monitor, I can see that the network was shut down just before the mouse froze. Sometimes my keyboard keeps repeating one key, for example: 77777777777777777777777777777777777777777777777777777... (it keeps typing for about 20 seconds).

    I found a script that works around the freeze when I hit the power button:

        -----------------/etc/acpi/powerbtn.sh------------------------
        event=button[ /]power
        action=/usr/sbin/fix_mouse.sh

        -----------------/usr/sbin/fix_mouse.sh------------------------
        rmmod psmouse
        modprobe psmouse

    Yesterday I installed Ubuntu 10.04 and it FAILED with the same mouse problem. When I switch back to Windows XP, the network card is down: it keeps connecting and disconnecting once per second.

    CPU: i5
    Motherboard: ASUS P7P55D
    OS: Windows XP + Ubuntu 10.10
    Video Card: ATI 5770
    Mouse, Keyboard: PS/2

    Read the article

  • Migration from Exchange to BPOS - Microsoft Assessment and Planning (MAP) Toolkit Link

    - by Harish Pavithran
    The Microsoft Assessment and Planning (MAP) Toolkit is an agentless toolkit that finds computers on a network and performs a detailed inventory of the computers using Windows Management Instrumentation (WMI) and the Remote Registry Service. The data and analysis provided by this toolkit can significantly simplify the planning process for migrating to Windows® 7, Windows Vista®, Microsoft Office 2007, Windows Server® 2008 R2, Windows Server 2008, Hyper-V, Microsoft Application Virtualization, Microsoft SQL Server 2008, and Forefront® Client Security and Network Access Protection. Assessments for Windows Server 2008 R2, Windows Server 2008, Windows 7, and Windows Vista include device driver availability as well as recommendations for hardware upgrades. If you are interested in server virtualization planning, MAP provides the ability to gather performance metrics from computers you are considering for virtualization and a feature to model a library of potential host hardware and storage configurations. This information can be used to quickly perform "what-if" analysis using Hyper-V and Microsoft Virtual Server 2005 R2 as virtualization platforms. http://www.microsoft.com/downloads/details.aspx?displaylang=en&FamilyID=67240b76-3148-4e49-943d-4d9ea7f77730
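    MAP itself is a packaged assessment tool, but the WMI plumbing it relies on is available to any .NET application. As a rough illustration only (the target machine name and the property list below are hypothetical and are not anything MAP ships), an agentless inventory query over WMI can look like this:

        using System;
        using System.Management; // requires a reference to System.Management.dll

        class InventoryProbe
        {
            static void Main(string[] args)
            {
                string machine = args.Length > 0 ? args[0] : "localhost"; // hypothetical target

                // Connect to the remote (or local) WMI namespace. Remote access needs
                // appropriate credentials and firewall rules, just as MAP does.
                var scope = new ManagementScope(@"\\" + machine + @"\root\cimv2");
                scope.Connect();

                // Ask for a few of the properties an inventory tool typically collects.
                var query = new ObjectQuery(
                    "SELECT Caption, Version, OSArchitecture, TotalVisibleMemorySize FROM Win32_OperatingSystem");

                using (var searcher = new ManagementObjectSearcher(scope, query))
                {
                    foreach (ManagementObject os in searcher.Get())
                    {
                        Console.WriteLine("Machine : {0}", machine);
                        Console.WriteLine("OS      : {0} ({1})", os["Caption"], os["OSArchitecture"]);
                        Console.WriteLine("Version : {0}", os["Version"]);
                        Console.WriteLine("RAM (KB): {0}", os["TotalVisibleMemorySize"]);
                    }
                }
            }
        }

    Running a query like this against every machine discovered on the network, plus the Remote Registry lookups mentioned above, is essentially what an agentless inventory amounts to.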

    Read the article

  • Toorcon14

    - by danx
    Toorcon 2012 Information Security Conference
    San Diego, CA, http://www.toorcon.org/
    Dan Anderson, October 2012

    It's almost Halloween, and we all know what that means—yes, of course, it's time for another Toorcon Conference! Toorcon is an annual conference for people interested in computer security. This includes the whole range of hackers, computer hobbyists, professionals, security consultants, press, law enforcement, prosecutors, FBI, etc. We're at Toorcon 14—see earlier blogs for some of the previous Toorcons I've attended (back to 2003). This year's "con" was held at the Westin on Broadway in downtown San Diego, California. The following are not necessarily my views—I'm just the messenger—although I could have misquoted or misparaphrased the speakers. Also, I only reviewed some of the talks, below: the ones I attended and that interested me.

    - MalAndroid—the Crux of Android Infections, Aditya K. Sood
    - Programming Weird Machines with ELF Metadata, Rebecca "bx" Shapiro
    - Privacy at the Handset: New FCC Rules?, Valkyrie
    - Hacking Measured Boot and UEFI, Dan Griffin
    - You Can't Buy Security: Building the Open Source InfoSec Program, Boris Sverdlik
    - What Journalists Want: The Investigative Reporters' Perspective on Hacking, Dave Maas & Jason Leopold
    - Accessibility and Security, Anna Shubina
    - Stop Patching, for Stronger PCI Compliance, Adam Brand
    - McAfee Secure & Trustmarks — a Hacker's Best Friend, Jay James & Shane MacDougall

    MalAndroid—the Crux of Android Infections
    Aditya K. Sood, IOActive, Michigan State PhD candidate

    Aditya talked about Android smartphone malware. There's a lot of old Android software out there—over 50% Gingerbread (2.3.x)—and most have unpatched vulnerabilities. Of 9 Android vulnerabilities, 8 have known exploits (such as the old Gingerbread Global Object Table exploit). Android protection includes sandboxing, a security scanner, app permissions, and a screened Android app market. The Android permission checker has fine-grained resource control and policy enforcement. Android static analysis also includes a static analysis app checker (bouncer) and a vulnerability checker.

    What security problems does Android have?
    - User-centric security, which depends on the user to grant permission and make smart decisions. But users don't care or think about malware (they're not aware, not paranoid). All they want is functionality, extensibility, mobility.
    - Android had no "proper" encryption before Android 3.0.
    - No built-in protection against social engineering and web tricks.
    - Alternative Android app markets are unsafe. Simply visiting some markets can infect Android.

    Aditya classified Android malware types as:
    - Type A—Apps. These interact with the Android app framework. For example, a fake Netflix app, or Android Gold Dream (a game), which uploads user files in a stealthy manner to a remote location.
    - Type K—Kernel. Exploits underlying Linux libraries or the kernel.
    - Type H—Hybrid. These use multiple layers (app framework, libraries, kernel). These are most commonly used by Android botnets, which are popular with Chinese botnet authors.

    What are the threats from Android malware? These include leaked info (contacts), banking fraud, corporate network attacks, malware advertising, and malware "hacktivism" (the promotion of social causes—for example, promoting specific leaders of the Tunisian or Iranian revolutions).

    Android malware is frequently "masqueraded", that is, repackaged inside a legit app together with malware. To avoid detection, the hidden malware is not unwrapped until runtime. The malware payload can be hidden in, for example, PNG files.

    Less common are Android bootkits—there aren't many around. What they do is hijack the Android init framework—altering system programs and daemons—and then delete themselves. For example, the DKF Bootkit (China).

    Android App Problems:
    - no code signing! all self-signed
    - native code execution
    - permission sandbox — all or none
    - alternate market places
    - no robust Android malware detection at network level
    - delayed patch process

    Programming Weird Machines with ELF Metadata
    Rebecca "bx" Shapiro, Dartmouth College, NH
    https://github.com/bx/elf-bf-tools
    @bxsays on twitter

    Definitions. "ELF" is an executable file format used in linking and loading executables (on UNIX/Linux-class machines). A "weird machine" uses undocumented computation sources (I think of them as unintended virtual machines). Some examples of "weird machines" are those that return to a weird location, do SQL injection, or corrupt the heap.

    Bx then talked about using ELF metadata as (an unintended) "weird machine".

    Some ELF background: A compiler takes source code and generates an ELF object file (hello.o). A static linker makes an ELF executable from the object file. A runtime linker and loader takes the ELF executable and loads and relocates it in memory. The ELF file has symbols to relocate functions and variables. ELF has two relocation tables—one at link time and another one at loading time: .rela.dyn (link time) and .dynsym (dynamic table). GOT: Global Offset Table of addresses for dynamically-linked functions. PLT: Procedure Linkage Table—works with the GOT. The memory layout of a process (not the ELF file) is, in order: program (+ heap), dynamic libraries, libc, ld.so, stack (which includes the dynamic table loaded into memory).

    For ELF, the "weird machine" is found and exploited in the loader. ELF can be crafted for executing viruses by tricking the runtime into executing interpreted "code" in the ELF symbol table. One can inject parasitic "code" without modifying the actual ELF code portions.

    Think of the ELF symbol table as an "assembly language" interpreter. It has these elements:
    - instructions: add, move, jump if not 0 (jnz)
    - think of symbol table entries as "registers"; the symbol table value is the "contents"
    - immediate values are constants; direct values are addresses (e.g., 0xdeadbeef)
    - move instruction: a relocation table entry
    - add instruction: a relocation table "addend" entry
    - jnz instruction: takes multiple relocation table entries

    The ELF weird machine exploits the loader by relocating relocation table entries. The loader will go on forever until told to stop. It stores state on the stack at "end" and uses IFUNC table entries (containing a function pointer address). The ELF weird machine, called "Brainfu*k" (BF), has 8 instructions (pointer inc, dec, inc indirect, dec indirect, jump forward, jump backward, print) and 3 registers.

    Bx showed example BF source code that implemented a Turing machine printing "hello, world". More interesting was the next demo, where bx modified ping. Ping runs suid as root, but quickly drops privilege. BF modified the loader to disable the library function call that drops privilege, so it remained as root. Then BF modified the ping -t argument to execute the -t filename as root. It's best to show what this modified ping does with an example:

        $ whoami
        bx
        $ ping localhost -t backdoor.sh   # executes backdoor
        $ whoami
        root
        $

    The modified code increased from 285948 bytes to 290209 bytes.
    A BF tool compiles an "executable" by modifying the symbol table in an existing ELF executable. The tool modifies the .dynsym and .rela.dyn tables, but not code or data.

    Privacy at the Handset: New FCC Rules?
    "Valkyrie" (Christie Dudley, Santa Clara Law JD candidate)

    Valkyrie talked about mobile handset privacy. Some background: Senator Franken (also a comedian) became alarmed about CarrierIQ, where the carriers track their customers. Franken asked the FCC to find out what obligations carriers think they have to protect privacy. The carriers' response was that they are doing just fine with self-regulation—no worries!

    Carriers need to collect data, such as missed calls, to maintain network quality. But carriers also sell data for marketing. Verizon sells customer data and enables this with a narrow privacy policy (only 1 month to opt out, with difficulties). The data sold is not individually identifiable and is aggregated. But Verizon recommends, as an aggregation workaround, "recollating" the data with other databases to identify customers indirectly.

    The FCC has regulated telephone privacy since 1934 and mobile network privacy since 2007. Also, the carriers say mobile phone privacy is an FTC responsibility (not FCC). The FTC is trying to improve mobile app privacy, but the FTC has no authority over carrier / customer relationships. As a side note, Apple iPhones are unique in that carriers have extra control over iPhones that they don't have with other smartphones. As a result iPhones may be more regulated.

    Who are the consumer advocates? Everyone knows the EFF, but EPIC (Electronic Privacy Information Center), although more obscure, is more relevant.

    What to do? Carriers must be accountable. Opt-in and opt-out at any time. Carriers need an incentive to grant users control for those who want it, by holding them liable and responsible for breaches on their clock. Location information should be added to current CPNI privacy protection, and should require a "pen/trap" judicial order to obtain (which would still be a lower standard than the 4th Amendment). Politics are on a pro-privacy swing now, with many senators and the White House. There will probably be new regulation soon, and enforcement will be a problem, but consumers will still have some benefit.

    Hacking Measured Boot and UEFI
    Dan Griffin, JWSecure, Inc., Seattle, @JWSdan

    Dan talked about hacking measured UEFI boot. First some terms:
    - UEFI is a boot technology that is replacing BIOS (has whitelisting and blacklisting). UEFI protects devices against rootkits.
    - TPM - a hardware security device to store hashes and hardware-protected keys
    - "secure boot" can control at the firmware level what boot images can boot
    - "measured boot" - an OS feature that tracks hashes (from BIOS, boot loader, kernel, early drivers)
    - "remote attestation" allows remote validation and control based on policy on a remote attestation server

    Microsoft is pushing TPM (Windows 8 required), but Google is not. Intel TianoCore is the only open source implementation for UEFI. Dan has a Measured Boot Tool at http://mbt.codeplex.com/ with a demo where you can also view TPM data. TPM support is already on enterprise-class machines.

    UEFI Weaknesses. UEFI toolkits are evolving rapidly, but UEFI has weaknesses:
    - assumes the user is an ally
    - trusts the TPM implicitly, and that it is attached to the computer
    - the hibernate file is unprotected (disk encryption protects against this)
    - protection migrating from hardware to firmware
    - delays in patching and whitelist updates
    - will UEFI really be adopted by the mainstream? (smartphone hardware support, bank support, apathetic consumer support)

    You Can't Buy Security: Building the Open Source InfoSec Program
    Boris Sverdlik, ISDPodcast.com co-host

    Boris talked about problems typical of current security audits. "IT Security" is an oxymoron—IT exists to enable business, uptime, utilization, reporting, but doesn't care about security—IT has a conflict of interest. There's no Magic Bullet ("blinky box"), no one-size-fits-all solution (e.g., Intrusion Detection Systems (IDSs)). Regulations don't make you secure. The cloud is not secure (because of shared data and admin access). Defense and pen testing are not sexy. Auditors are not the solution (security is not a checklist)—what's needed is experience and adaptability—you need soft skills.

    Step 1: The first thing is to Google and learn the company end-to-end before you start. Get to know the management team (not the IT team), meet as many people as you can. Don't use arbitrary values such as CISSP scores. Quantitative risk assessment is a myth (e.g. AV*EF-SLE). Learn the different Business Units and legal/regulatory obligations, learn the business and where the money is made, verify the company is protected from script kiddies (easy), learn sensitive information (IP, internal use only), and start with low-hanging fruit (customer service reps and social engineering).

    Step 2: Policies. Keep policies short and relevant. Generic SANS "security" boilerplate policies don't make sense and are not followed. Focus on acceptable use, data usage, communications, physical security.

    Step 3: Implementation: keep it simple, stupid. Open source, although useful, is not free (implementation cost).
    - Access controls with authentication & authorization for local and remote access. MS Windows has it; otherwise use OpenLDAP, OpenIAM, etc.
    - Application security: Everyone tries to reinvent the wheel—use existing static analysis tools. Review high-risk apps and major revisions. Don't run different risk level apps on the same system. Assume the host/client is compromised and use app-level security controls.
    - Network security: VLAN != segregated, because there are too many workarounds. Use explicit firewall rules, active and passive network monitoring (snort is free), disallow end user access to the production environment, and have a proxy instead of direct Internet access. Also, SSL certificates are not good two-factor auth and SSL does not mean "safe."
    - Operational Controls: Have change, patch, asset, & vulnerability management (OSSI is free). For change management, always review code before pushing to production. For logging, have centralized security logging for business-critical systems, separate security logging from administrative/IT logging, and lock down the log (as it has everything). Monitor with OSSIM (open source). Use intrusion detection, but not just to fulfill a checkbox: build rules from a whitelist perspective (snort). OSSEC has 95% of what you need. Vulnerability management is a QA function when done right: OpenVas and Seccubus are free.
    - Security awareness: The reality is users will always click everything. Build real awareness, not a compliance-driven checkbox, and have it integrated into the culture.
    Pen test by crowd sourcing—test with logging. COSSP http://www.cossp.org/ - Comprehensive Open Source Security Project.

    What Journalists Want: The Investigative Reporters' Perspective on Hacking
    Dave Maas, San Diego CityBeat
    Jason Leopold, Truthout.org

    The difference between hackers and investigative journalists: For hackers, the motivation varies, but the method is the same, with technological specialties. For investigative journalists, it's about one thing—The Story—and they need broad info-gathering skills.

    J-School in 60 Seconds: Generic formula: a person or issue of public interest, new info, or an angle. Generic criteria: proximity, prominence, timeliness, human interest, oddity, or consequence.

    Media awareness of hackers and trends: journalists are becoming extremely aware of hackers through congressional debates (privacy, data breaches), demand for data-mining journalists, use of coding and web development by journalists, and journalists busted for hacking (Murdoch).

    Info gathering by investigative journalists includes public records laws. The federal Freedom of Information Act (FOIA) is good, but slow. The California Public Records Act is a lot stronger. FOIA takes forever because of foot-dragging—it helps to be specific. Often you need to sue (especially the FBI). CPRA is faster, and requests can be vague. There are also dumps and leaks (a la Wikileaks).

    Journalists want: leads, protecting themselves and their sources, and adapting tools for news gathering (Google hacking). Anonymity is important to whistleblowers. They want no digital footprint left behind (e.g., email, web log). They don't trust encryption; they want to feel safe and secure. Whistleblower laws are very weak—there's no upside for whistleblowers—they have to be very passionate to do it.

    Accessibility and Security or: How I Learned to Stop Worrying and Love the Halting Problem
    Anna Shubina, Dartmouth College

    Anna talked about how accessibility and security are related. Accessibility here means accessibility of digital content (not real-world accessibility), which for our purpose mostly refers to blind users and screen readers. Accessibility is about parsing documents, as are many security issues. "Rich" executable content causes accessibility to fail, and often causes security to fail. For example, MS Word has an executable format—it's not a document exchange format—more dangerous than PDF or HTML. Accessibility is often the first and maybe only sanity check with parsing. They have no choice, because someone may want to read what you write.

    Google, for example, is very particular about the web browser you use and is bad at supporting other browsers. It uses JavaScript instead of links, often requiring mouseover to display content.

    PDF is a security nightmare: an executable format, embedded Flash, JavaScript, etc.—15 million lines of code. Google Chrome doesn't handle PDF correctly, causing several security bugs. PDF has an accessibility checker and PDF tagging to help with accessibility. But no PDF checker checks for incorrect tags or untagged content, or validates lists or tables. None check executable content at all.

    The "Halting Problem" is: can one decide whether a program will ever stop? The answer, in general, is no (Rice's theorem). The same holds true for accessibility checkers. Language-theoretic Security says complicated data formats are hard to parse and cannot be solved, due to the Halting Problem.

    W3C Web Accessibility Guidelines: "Perceivable, Operable, Understandable, Robust". Not much help though, except for "Robust", but here are some gems:
    - all information should be parsable (paraphrasing)
    - if not parsable, it cannot be converted to alternate formats
    - maximize compatibility in new document formats

    Executable webpages are bad for security and accessibility. They say it's for a better web experience. But is it necessary to stuff web pages with JavaScript for a better experience? A good example is The Drudge Report—it has hand-written HTML with no JavaScript, yet drives a lot of web traffic due to good content. A bad example is Google News—hidden scrollbars, guessing user input.

    Solutions:
    - Accessibility and security problems come from the same source
    - Expose the "better user experience" myth
    - Keep your corner of the Internet parsable
    - Remember the "Halting Problem"—recognize false solutions (checking and verifying tools)

    Stop Patching, for Stronger PCI Compliance
    Adam Brand, Protiviti, @adamrbrand, http://www.picfun.com/

    Adam talked about PCI compliance for retail sales. Take an example: for PCI compliance, 50% of Brian's time (an IT guy), 960 hours/year, was spent patching POSs in 850 restaurants. Often applying some patches makes no sense (like fixing a browser vulnerability on a server). "Scanner worship" is overuse of vulnerability scanners—it gives a warm and fuzzy feeling and it's simple (red or green results—fix the reds). Scanners give a false sense of security. In reality, breaches from missing patches are uncommon—more common problems are: default passwords, cleartext authentication, misconfiguration (firewall ports open).

    Patching Myths:
    - Myth 1: install within 30 days of patch release (but PCI §6.1 allows a "risk-based approach" instead).
    - Myth 2: the vendor decides what's critical (also PCI §6.1). But §6.2 requires user ranking of vulnerabilities instead.
    - Myth 3: scan and rescan until it passes. But PCI §11.2.1b says this applies only to high-risk vulnerabilities.

    Adam says good recommendations come from NIST 800-40. Instead, use sane patching and focus on what's really important. From NIST 800-40:
    - Proactive: Use a proactive vulnerability management process: use change control, configuration management, monitor file integrity.
    - Monitor: start with NVD and other vulnerability alerts, not scanner results.
    - Evaluate: public-facing system? workstation? internal server? (risk rank)
    - Decide: on action and timeline
    - Test: pre-test patches (stability, functionality, rollback) for change control
    - Install: notify, change control, tickets

    McAfee Secure & Trustmarks — a Hacker's Best Friend
    Jay James, Shane MacDougall, Tactical Intelligence Inc., Canada

    "McAfee Secure Trustmark" is a website seal marketed by McAfee. A website gets this badge if it passes their remote scanning. The problem is that the removal of a trustmark acts as a flag that you're vulnerable. It is easy to view status changes by viewing the McAfee list on their website or on Google. "Secure TrustGuard" is similar to McAfee. Jay and Shane wrote Perl scripts to gather sites from McAfee and search engines. If a site's certification image changes to a 1x1 pixel image, then it is no longer certified. Their scripts take deltas of scans to see what changed daily. The bottom line is that a change in TrustGuard status is a flag for hackers to attack your site. The entire idea of seals is silly—you're raising a flag saying whether you're vulnerable.
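    The speakers' Perl scripts aren't reproduced in these notes, but the core idea is simple enough to sketch. The hypothetical C# snippet below (the seal URL is made up; real seal URLs differ per vendor and per site) just fetches a site's seal image and flags the tell-tale 1x1 placeholder:

        using System;
        using System.Drawing;   // reference System.Drawing.dll
        using System.IO;
        using System.Net;

        class SealWatcher
        {
            static void Main()
            {
                // Hypothetical seal image URL for a site being monitored.
                string sealUrl = "https://seal.example-trustmark.com/seal/www.example.com.gif";

                using (var web = new WebClient())
                {
                    byte[] data = web.DownloadData(sealUrl);
                    using (var ms = new MemoryStream(data))
                    using (var img = Image.FromStream(ms))
                    {
                        // Per the talk, a 1x1 image in place of the normal badge suggests
                        // the site is currently failing the vendor's scan.
                        bool flagged = img.Width == 1 && img.Height == 1;
                        Console.WriteLine("{0}: {1}x{2} -> {3}",
                            sealUrl, img.Width, img.Height,
                            flagged ? "seal pulled (site likely failing scans)" : "seal present");
                    }
                }
            }
        }

    Run daily and diffed against the previous day's output, a check like this reproduces the "delta" monitoring the speakers described.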

    Read the article

  • 8 Mac System Features You Can Access in Recovery Mode

    - by Chris Hoffman
    A Mac's Recovery Mode is for more than just reinstalling Mac OS X. You'll find many other useful troubleshooting utilities here — you can use these even if your Mac can't boot normally. To access Recovery Mode, restart your Mac and press and hold the Command + R keys during the boot-up process. This is one of several hidden startup options on a Mac.

    Reinstall Mac OS X
    Most people know Recovery Mode as the place you go to reinstall OS X on your Mac. Recovery Mode will download the OS X installer files from the Internet if you don't have them locally, so they don't take up space on your disk and you'll never have to hunt for an operating system disc. Better yet, it will download up-to-date installation files so you don't have to spend hours installing operating system updates later. Microsoft could learn a lot from Apple here.

    Restore From a Time Machine Backup
    Instead of reinstalling OS X, you can choose to restore your Mac from a Time Machine backup. This is like restoring a system image on another operating system. You'll need an external disk containing a backup image created on the current computer to do this.

    Browse the Web
    The Get Help Online link opens the Safari web browser to Apple's documentation site. It's not limited to Apple's website, though — you can navigate to any website you like. This feature allows you to access and use a browser on your Mac even if it isn't booting properly. It's ideal for looking up troubleshooting information.

    Manage Your Disks
    The Disk Utility option opens the same Disk Utility you can access from within Mac OS X. It allows you to partition disks, format them, scan disks for problems, wipe drives, and set up drives in a RAID configuration. If you need to edit partitions from outside your operating system, you can just boot into the recovery environment — you don't have to download a special partitioning tool and boot into it.

    Choose the Default Startup Disk
    Click the Apple menu on the bar at the top of your screen and select Startup Disk to access the Choose Startup Disk tool. Use this tool to choose your computer's default startup disk and reboot into another operating system. For example, it's useful if you have Windows installed alongside Mac OS X with Boot Camp.

    Add or Remove an EFI Firmware Password
    You can also add a firmware password to your Mac. This works like a BIOS password or UEFI password on a Windows or Linux PC. Click the Utilities menu on the bar at the top of your screen and select Firmware Password Utility to open this tool. Use the tool to turn on a firmware password, which will prevent your computer from starting up from a different hard disk, CD, DVD, or USB drive without the password you provide. This prevents people from booting up your Mac with an unauthorized operating system. If you've already enabled a firmware password, you can remove it from here.

    Use Network Tools to Troubleshoot Your Connection
    Select Utilities > Network Utility to open a network diagnostic tool. This utility provides a graphical way to view your network connection information. You can also use the netstat, ping, lookup, traceroute, whois, finger, and port scan utilities from here. These can be helpful to troubleshoot Internet connection problems. For example, the ping command can demonstrate whether you can communicate with a remote host and show you if you're experiencing packet loss, while the traceroute command can show you where a connection is failing if you can't connect to a remote server.

    Open a Terminal
    If you'd like to get your hands dirty, you can select Utilities > Terminal to open a terminal from here. This terminal allows you to do more advanced troubleshooting. Mac OS X uses the bash shell, just as typical Linux distributions do.

    Most people will just need to use the Reinstall Mac OS X option here, but there are many other tools you can benefit from. If the Recovery Mode files on your Mac are damaged or unavailable, your Mac will automatically download them from Apple so you can use the full recovery environment.

    Read the article

  • Creating HTML5 Offline Web Applications with ASP.NET

    - by Stephen Walther
    The goal of this blog entry is to describe how you can create HTML5 Offline Web Applications when building ASP.NET web applications. I describe the method that I used to create an offline Web application when building the JavaScript Reference application. You can read about the HTML5 Offline Web Application standard by visiting the following links: Offline Web Applications Firefox Offline Web Applications Safari Offline Web Applications Currently, the HTML5 Offline Web Applications feature works with all modern browsers with one important exception. You can use Offline Web Applications with Firefox, Chrome, and Safari (including iPhone Safari). Unfortunately, however, Internet Explorer does not support Offline Web Applications (not even IE 9). Why Build an HTML5 Offline Web Application? The official reason to build an Offline Web Application is so that you do not need to be connected to the Internet to use it. For example, you can use the JavaScript Reference Application when flying in an airplane, riding a subway, or hiding in a cave in Borneo. The JavaScript Reference Application works great on my iPhone even when I am completely disconnected from any network. The following screenshot shows the JavaScript Reference Application running on my iPhone when airplane mode is enabled (notice the little orange airplane):   Admittedly, it is becoming increasingly difficult to find locations where you can’t get Internet access. A second, and possibly better, reason to create Offline Web Applications is speed. An Offline Web Application must be downloaded only once. After it gets downloaded, all of the files required by your Web application (HTML, CSS, JavaScript, Image) are stored persistently on your computer. Think of Offline Web Applications as providing you with a super browser cache. Normally, when you cache files in a browser, the files are cached on a file-by-file basis. For each HTML, CSS, image, or JavaScript file, you specify how long the file should remain in the cache by setting cache headers. Unlike the normal browser caching mechanism, the HTML5 Offline Web Application cache is used to specify a caching policy for an entire set of files. You use a manifest file to list the files that you want to cache and these files are cached until the manifest is changed. Another advantage of using the HTML5 offline cache is that the HTML5 standard supports several JavaScript events and methods related to the offline cache. For example, you can be notified in your JavaScript code whenever the offline application has been updated. You can use JavaScript methods, such as the ApplicationCache.update() method, to update the cache programmatically. Creating the Manifest File The HTML5 Offline Cache uses a manifest file to determine the files that get cached. 
    Here's what the manifest file looks like for the JavaScript Reference application:

        CACHE MANIFEST
        # v30
        Default.aspx

        # Standard Script Libraries
        Scripts/jquery-1.4.4.min.js
        Scripts/jquery-ui-1.8.7.custom.min.js
        Scripts/jquery.tmpl.min.js
        Scripts/json2.js

        # App Scripts
        App_Scripts/combine.js
        App_Scripts/combine.debug.js

        # Content (CSS & images)
        Content/default.css
        Content/logo.png
        Content/ui-lightness/jquery-ui-1.8.7.custom.css
        Content/ui-lightness/images/ui-bg_glass_65_ffffff_1x400.png
        Content/ui-lightness/images/ui-bg_glass_100_f6f6f6_1x400.png
        Content/ui-lightness/images/ui-bg_highlight-soft_100_eeeeee_1x100.png
        Content/ui-lightness/images/ui-icons_222222_256x240.png
        Content/ui-lightness/images/ui-bg_glass_100_fdf5ce_1x400.png
        Content/ui-lightness/images/ui-bg_diagonals-thick_20_666666_40x40.png
        Content/ui-lightness/images/ui-bg_gloss-wave_35_f6a828_500x100.png
        Content/ui-lightness/images/ui-icons_ffffff_256x240.png
        Content/ui-lightness/images/ui-icons_ef8c08_256x240.png
        Content/browsers/c8.png
        Content/browsers/es3.png
        Content/browsers/es5.png
        Content/browsers/ff3_6.png
        Content/browsers/ie8.png
        Content/browsers/ie9.png
        Content/browsers/sf5.png

        NETWORK:
        Services/EntryService.svc
        http://superexpert.com/resources/JavaScriptReference/

    A Cache Manifest file always starts with the line of text Cache Manifest. In the manifest above, all of the CSS, image, and JavaScript files required by the JavaScript Reference application are listed. For example, the Default.aspx ASP.NET page, jQuery library, jQuery UI library, and several images are listed. Notice that you can add comments to a manifest by starting a line with the hash character (#). I use comments in the manifest above to group JavaScript and image files. Finally, notice that there is a NETWORK: section of the manifest. You list any file that you do not want to cache (any file that requires network access) in this section. In the manifest above, the NETWORK: section includes the URL for a WCF Service named EntryService.svc. This service is called to get the JavaScript entries displayed by the JavaScript Reference.

    There are two important things that you need to be aware of when using a manifest file. First, all relative URLs listed in a manifest are resolved relative to the manifest file. The URLs listed in the manifest above are all resolved relative to the root of the application because the manifest file is located in the application root. Second, whenever you make a change to the manifest file, browsers will download all of the files contained in the manifest (all of them). For example, if you add a new file to the manifest then any browser that supports the Offline Cache standard will detect the change in the manifest and download all of the files listed in the manifest automatically. If you make changes to files in the manifest (for example, modify a JavaScript file) then you need to make a change in the manifest file in order for the new version of the file to be downloaded. The standard way of updating a manifest file is to include a comment with a version number. The manifest above includes a # v30 comment. If you make a change to a file then you need to modify the comment to be # v31 in order for the new file to be downloaded.

    When Are Updated Files Downloaded?
    When you make changes to a manifest, the changes are not reflected the very next time you open the offline application in your web browser. Your web browser will download the updated files in the background. This can be very confusing when you are working with JavaScript files. If you make a change to a JavaScript file, and you have cached the application offline, then the changes to the JavaScript file won't appear when you reload the application. The HTML5 standard includes new JavaScript events and methods that you can use to track changes and make changes to the Application Cache. You can use the ApplicationCache.update() method to initiate an update to the application cache and you can use the ApplicationCache.swapCache() method to switch to the latest version of a cached application. My heartfelt recommendation is that you do not enable your application for offline storage until after you finish writing your application code. Otherwise, debugging the application can become a very confusing experience.

    Offline Web Applications versus Local Storage
    Be careful to not confuse the HTML5 Offline Web Application feature and the HTML5 Local Storage (aka DOM storage) feature. The JavaScript Reference Application uses both features. HTML5 Local Storage enables you to store key/value pairs persistently. Think of Local Storage as a super cookie. I describe how the JavaScript Reference Application uses Local Storage to store the database of JavaScript entries in a separate blog entry. Offline Web Applications enable you to store static files persistently. Think of Offline Web Applications as a super cache.

    Creating a Manifest File in an ASP.NET Application
    A manifest file must be served with the MIME type text/cache-manifest. In order to serve the JavaScript Reference manifest with the proper MIME type, I added two files to the JavaScript Reference Application project:

    - Manifest.txt – This text file contains the actual manifest file.
    - Manifest.ashx – This generic handler sends the Manifest.txt file with the MIME type text/cache-manifest.

    Here's the code for the generic handler:

        using System.Web;

        namespace JavaScriptReference {

            public class Manifest : IHttpHandler {

                public void ProcessRequest(HttpContext context) {
                    context.Response.ContentType = "text/cache-manifest";
                    context.Response.WriteFile(context.Server.MapPath("Manifest.txt"));
                }

                public bool IsReusable {
                    get { return false; }
                }
            }
        }

    The Default.aspx file contains a reference to the manifest. The opening HTML tag in the Default.aspx file looks like this:

        <html manifest="Manifest.ashx">

    Notice that the HTML tag contains a manifest attribute that points to the Manifest.ashx generic handler. Internet Explorer simply ignores this attribute. Every other modern browser will download the manifest when the Default.aspx page is requested.

    Seeing the Offline Web Application in Action
    The experience of using an HTML5 Web Application is different with different browsers. When you first open the JavaScript Reference application with Firefox, you get the following warning: Notice that you are provided with the choice of whether you want to use the application offline or not. Browsers other than Firefox, such as Chrome and Safari, do not provide you with this choice. Chrome and Safari will create an offline cache automatically. If you click the Allow button then Firefox will download all of the files listed in the manifest. You can view the files contained in the Firefox offline application cache by typing about:cache in the Firefox address bar. You can view the actual items being cached by clicking the List Cache Entries link. The Offline Web Application experience is different in the case of Google Chrome.
You can view the entries in the offline cache by opening the Developer Tools (hit Shift+CTRL+I), selecting the Storage tab, and selecting Application Cache: Notice that you view the status of the Application Cache. In the screen shot above, the status is UNCACHED which means that the files listed in the manifest have not been downloaded and cached yet. The different possible values for the status are included in the HTML5 Offline Web Application standard: UNCACHED – The Application Cache has not been initialized. IDLE – The Application Cache is not currently being updated. CHECKING – The Application Cache is being fetched and checked for updates. DOWNLOADING – The files in the Application Cache are being updated. UPDATEREADY – There is a new version of the Application. OBSOLETE – The contents of the Application Cache are obsolete. Summary In this blog entry, I provided a description of how you can use the HTML5 Offline Web Application feature in the context of an ASP.NET application. I described how this feature is used with the JavaScript Reference Application to store the entire application on a user’s computer. By taking advantage of this new feature of the HTML5 standard, you can improve the performance of your ASP.NET web applications by requiring users of your web application to download your application once and only once. Furthermore, you can enable users to take advantage of your applications anywhere -- regardless of whether or not they are connected to the Internet.
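    A closing implementation note that is not part of the original post: because browsers only re-download the cached files when the text of the manifest changes, and because the manifest itself should always be revalidated rather than cached, the Manifest.ashx handler can be extended to disable caching and to emit the version comment automatically. The sketch below assumes the same Manifest.txt layout shown earlier; the fingerprinting scheme (XORing file timestamps) is purely illustrative and is not what the JavaScript Reference application actually does:

        using System;
        using System.IO;
        using System.Web;

        namespace JavaScriptReference
        {
            // Serves Manifest.txt as text/cache-manifest and appends a version comment
            // derived from the last-write times of the files it lists, so the "# v30"
            // comment never has to be bumped by hand. Illustrative sketch only.
            public class VersionedManifest : IHttpHandler
            {
                public void ProcessRequest(HttpContext context)
                {
                    string manifestPath = context.Server.MapPath("Manifest.txt");
                    string[] lines = File.ReadAllLines(manifestPath);

                    long stamp = 0;
                    bool inCacheSection = true;
                    foreach (string raw in lines)
                    {
                        string line = raw.Trim();
                        // Section headers such as NETWORK: or FALLBACK: end the implicit CACHE section.
                        if (line.EndsWith(":")) { inCacheSection = (line == "CACHE:"); continue; }
                        if (!inCacheSection || line.Length == 0 || line.StartsWith("#") || line == "CACHE MANIFEST")
                            continue;

                        string file = context.Server.MapPath(line);
                        if (File.Exists(file))
                            stamp ^= File.GetLastWriteTimeUtc(file).Ticks;   // crude but stable fingerprint
                    }

                    context.Response.ContentType = "text/cache-manifest";
                    // Always revalidate the manifest itself.
                    context.Response.Cache.SetCacheability(HttpCacheability.NoCache);
                    context.Response.Write(string.Join(Environment.NewLine, lines));
                    context.Response.Write(Environment.NewLine + "# version " + stamp.ToString("x"));
                }

                public bool IsReusable { get { return false; } }
            }
        }

    With something like this in place, touching any file listed in the cache section changes the generated comment, which is exactly what triggers supporting browsers to refresh the offline cache.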

    Read the article

  • Communication Between Your PC and Azure VM via Windows Azure Connect

    - by Shaun
    With the new release of the Windows Azure platform there are a lot of new features available. In my previous post I introduced one of them, remote desktop access to an Azure virtual machine. Now I would like to talk about another cool feature – Windows Azure Connect.

    What's Windows Azure Connect
    I would like to quote the definition of Windows Azure Connect from MSDN:

        With Windows Azure Connect, you can use a simple user interface to configure IP-sec protected connections between computers or virtual machines (VMs) in your organization's network, and roles running in Windows Azure. IP-sec protects communications over Internet Protocol (IP) networks through the use of cryptographic security services.

    There's an image available on MSDN as well that I would like to forward here. As we can see, using Windows Azure Connect, Worker Role 1 and Web Role 1 are connected with the development machines and database servers, some of which are inside the organization and some of which are not. With Windows Azure Connect, roles deployed in the cloud can consume resources located inside our intranet or anywhere in the world. That means the roles can connect to the local database and access local shared resources such as shared files, folders and printers, etc.

    Difference between Windows Azure Connect and AppFabric
    It may seem that Windows Azure Connect duplicates the Windows Azure AppFabric. Both of them aim to solve the problem of how to communicate between resources in the cloud and resources inside the local network. The list below summarizes the differences as I understand them.

    - Purpose: Connect is an IP-sec connection between the local machines and azure roles; AppFabric is an application service running on the cloud.
    - Connectivity: Connect uses IP-sec and domain-join; AppFabric uses Net.Tcp, Http and Https.
    - Components: Connect consists of the Windows Azure Connect driver; AppFabric consists of Service Bus, Access Control and Caching.
    - Usage: With Connect, azure roles can connect to a local database server, use local shared files, folders and printers, and join the local AD. With AppFabric, you can expose a local service to the Internet, move the authorization process to the cloud, integrate existing identities such as Live ID, Google ID, etc. with existing local services, and utilize the distributed cache.

    And also some scenarios for which of them should be used:

    - I have a service deployed in the intranet and I want people to be able to use it from the Internet: AppFabric.
    - I have a website deployed on Azure that needs to use a database deployed inside the company, and I don't want to expose the database to the Internet: Connect.
    - I have a service deployed in the intranet that uses AD authorization, and a website deployed on Azure that needs to use this service: Connect.
    - I have a service deployed in the intranet, and some people on the Internet can use it but need to be authorized and authenticated: AppFabric.
    - I have a service in the intranet and a website deployed on Azure. The service can be used from the Internet, and the website should be able to use it as well, with AD authorization, for more functionality: both Connect and AppFabric.

    How to Enable Windows Azure Connect
    OK, we have talked about Windows Azure Connect and its differences from the Windows Azure AppFabric. Now let's see how to enable and use Windows Azure Connect. First of all, since this feature is in the CTP stage, we have to apply before using it. On the Windows Azure Portal we can see our CTP features status under the Home, Beta Program page.
    You can apply to join the Beta Programs from this page, and after a few days Microsoft will send an email to you (to the email address of your Live ID) when the feature is available. In my case we can see that Windows Azure Connect has been activated by Microsoft, and then we can click the Connect button on top, or click the Virtual Network item in the left navigation bar.   The first thing we need to do, if it's our first time entering the Connect page, is to enable Windows Azure Connect. After that we can see our Windows Azure Connect information on this page.   Add a Local Machine to Azure Connect: As explained above, Windows Azure Connect makes an IP-sec connection between local machines and Azure role instances, so first we add a local machine to our Azure Connect. To do this we click the Install Local Endpoint button on top and the portal will give us a URL. Open this URL on the machine we want to add and it will download the endpoint software. This software is installed on the local machines we want to join to the Connect. After installation a tray icon appears to indicate that this machine has joined our Connect. The local endpoint refreshes its status with the Windows Azure platform every 5 minutes, but we can click the Refresh button to make it retrieve the latest status at once. Currently my local machine is ready to connect, and we can see it in the Windows Azure Portal if we switch back to the portal and select the Activated Endpoints node.   Add a Windows Azure Role to Azure Connect: Let's create a very simple Azure project with a basic ASP.NET web role inside. To make it available on Windows Azure Connect we open the role's properties from Solution Explorer in Visual Studio, select the Virtual Network tab and check Activate Windows Azure Connect. The next step is to get the activation token from the Windows Azure Portal. On the same page there is a button named Get Activation Token; click it and the portal will display the token. We copy this token, paste it into the box in the Visual Studio tab and then deploy the application to Azure. After the deployment completes we can see the role instance listed in the Windows Azure Portal - Virtual Connect section.   Establish the Connect Group: The final task is to create a connect group which contains the machines and role instances that need to be connected to each other. This can be done in the portal very easily. The machines and instances will NOT be connected until we create a group for them, and they can be used in one or more groups. In the Virtual Connect section click the Groups and Roles node in the left navigation bar and click the Create Group button on top. This brings up a dialog. What we need to do is specify a group name and description, and then select the local computers and Azure role instances to include in this group. After the Azure Fabric updates the group setting we can see the groups and the endpoints on the page. And if we switch back to the local machine we can see that the tray icon has changed and the status has turned to connected. Windows Azure Connect updates the group information every 5 minutes; if you find the status is still Disconnected, right-click the tray icon and select the Refresh menu to retrieve the latest group policy and make it connect.   
    Test the Azure Connect between the Local Machine and the Azure Role Instance: Now our local machine and Azure role instance are connected, which means each of them can communicate with the other at the IP level. For example, we can open the SQL Server port so that our Azure role can connect to it using the machine name or the IP address. Windows Azure Connect uses IPv6 to connect the local machines and role instances; you can get the IP address from the Windows Azure Portal Virtual Network section when you select an endpoint. I don't want to walk through a full example of how to use Connect, but I would like to show two very simple tests. The first one is PING.   When a local machine and a role instance are connected through Windows Azure Connect we can PING either of them, provided we have opened the ICMP protocol in the firewall settings. To do this we need to run a command before the test: open a command window on the local machine and on the role instance, and execute the following command netsh advfirewall firewall add rule name="ICMPv6" dir=in action=allow enable=yes protocol=icmpv6 Thanks to Jason Chen, Patriek van Dorp, Anton Staykov and Steve Marx, who helped me enable the ICMPv6 setting; for the full discussion we had please visit here. You can use the Remote Desktop Access feature to log on to the Azure role instance; please refer to my previous blog post to learn how to use Remote Desktop Access in Windows Azure. Then we can PING the machine or the role instance by specifying its name. Below is the screen where I PING my local machine from my Azure instance. We can use the IPv6 address to PING each other as well; in the following image I PING my role instance from my local machine through the IPv6 address.   Another example I would like to demonstrate here is folder sharing. I shared a folder on my local machine, and after logging on to the role instance we can see the folder content in the file explorer window.   Summary: In this blog post I introduced another new feature – Windows Azure Connect. With this feature our local resources and role instances (virtual machines) can be connected to each other. In this way our Azure application can use local resources such as database servers, printers, etc. without exposing them to the Internet.   Hope this helps, Shaun All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.
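    A quick sketch of the database scenario above: once the role instance and the on-premises SQL Server box are in the same Connect group, the role can open a plain ADO.NET connection using the local machine name, just as it would inside the intranet. The machine name LOCAL-DB-01, the database name and the credentials below are illustrative assumptions, not values from the original post.
        using System;
        using System.Data.SqlClient;

        class ConnectSmokeTest
        {
            static void Main()
            {
                // LOCAL-DB-01 is a hypothetical on-premises machine that has the
                // Connect endpoint installed and sits in the same group as this
                // role instance, so its name resolves over the IPv6 tunnel.
                var builder = new SqlConnectionStringBuilder
                {
                    DataSource = "LOCAL-DB-01",     // or the endpoint's IPv6 address
                    InitialCatalog = "Northwind",   // illustrative database name
                    UserID = "azure_app",           // SQL authentication; the local
                    Password = "example-password",  // firewall must allow port 1433
                    ConnectTimeout = 30
                };

                using (var connection = new SqlConnection(builder.ConnectionString))
                {
                    connection.Open();
                    Console.WriteLine("Connected to {0}", connection.DataSource);
                }
            }
        }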

    Read the article

  • CodePlex Daily Summary for Wednesday, May 12, 2010

    CodePlex Daily Summary for Wednesday, May 12, 2010New ProjectsAMP (Adamo Media Player): AMP is a custom media player that specializes in large media collections! Change the way you manage your media!BoogiePartners: BoogiePartners provides a loose collection of utilities or extensions for Boogie and SpecSharp developers.C# Developer Utility Library: Collection of helpful code functions including zero-config rolling file logging, parameter validation, reflection & object construction, I18N count...Commerce Server 2009 Orders using Pipelines in a Console Application: Using Commerce Server 2009 outside of a Web Application this project show how to create dummy Orders using Pipelines in a NON WEB CONTEXT, an examp...CRM Queue Manager C#: CRM 4 Queue Manager written in C#. Based on the original VB.Net CRM Queue Manager with some changes, additions and feature changes. Converts emai...DbBuilder: This is a tool indended for creation of MS SQL database from scratch, incremental updates, management of redeployable objects, etc. using scripted ...DbNetData: A collection of cross vendor database interface classes for .NET written in C# providing a consistent and simplified way of accessing SQL Server,Or...Deploy Workflow Manager: Kick off a workflow associated with a SharePoint List on all list items at once. SharePoint Designer Workflows will also be included. This projec...DigiLini - digital on screen ruler: Handy desktop tool for everyone who does ever do something graphical on his computer. There is an sizable horizontal and vertial ruler. You can pos...Dynamic Grid Data Type for Umbraco: The Dynamic Grid Data Type for Umbraco is a custom ASCX/C# control that was created to store tabular data as an Umbraco "Data Type". There's an abi...FlashPlus: An extension for Google Chrome and a bookmarklet for Internet Explorer and Firefox, this project makes media content on browsers more usable. This...Flat Database: Simple project making it really quick and easy, to implement a simple flat-file database for an application.Flood Watch Spam Control: Flood watch is a C# based application meant to help prevent site black listings. This application monitors outbound smtp streams, if a stream excee...Gamebox: gameboxKill Bill: Kill Bill covers the areas of customers, suppliers, products, sales and administration divided into modules which together form the system for the ...KND Decoder: Der KND Decoder konvertiert die lästigen Zahlenwerte, in den Java Dateien vom Knuddels Applet, zu lesbaren Strings. Die neue dekodier Methode wurde...Moonbeam: D3D11 FrameworkSite Defender: Add-on for blogs or anywhere else there could be people spamming your website.sMODfix: -Stringzilla: A string formatting classes designed for .NET 4 to enable named string formatting and conditional string formatting options based upon bound data c...The CQRS Kitchen: The CQRS Kitchen is an example application build with Silverlight 4 that demonstrates how to implement a CQRS / Event Sourcing application with the...XELF Framework for XNA / .NET: XELF Framework for XNA / .NET  XNA Game Studioおよび.NET Framework用のライブラリ・フレームワークとしてXELFが開発したC#ソースコードを含むプロジェクトです。  現在は、「XELF.Framework」のWindows用XNA G...xxxxxxxxx: xxxxxxxNew ReleasesASP.NET MVC 2 - CommonLibrary.CMS: CommonLibrary CMS - Alpha: CommonLibrary CMSA simple yet powerful CMS system in ASP.NET MVC 2 using C# 3.5. 
ActiveRecord based support for Blogs, Widgets, Pages, Parts, Ev...ASP.NET MVC Extensions: v1.0 RTM: v1.0 RTMAWS SimpleDB Browser: Release 2: Miscellaneous fixes from the Alpha release. Built to work with the released version of the .Net Framework 4.0.BIDS Helper: 1.4.3.0: This release addresses the following issues: For some people the BIDS Helper extensions to the Project Properties page for the SSIS Deploy plugin w...Bojinx: Bojinx Core V4.5.13: Fixes / Enhancements: Sequencer now accepts Commands directly without using events. Command queue now accepts both events and commands EventBu...CF-Soft: TestCases_DROP1: TestCases_DROP1CodePlex Runtime Intelligence Integration: PreEmptive.Attributes distributable: Contains a signed redistributable version of the PreEmptive.Attributes.dll library that non .NET 4 applications can use support in-code instrumenta...Convection Game Engine (Basic Edition): Convection Basic (44772): Compiled version of Convection Basic change set 44772.CRM Queue Manager C#: Initial release: Initial release of the CRM Queue Manager in C#. Release includes both the installer as well as the latest source code in zippped format. Running...DbNetData: DbNetData.1.0: Initial releaseDigiLini - digital on screen ruler: DigiLini 1.0: First stable version.Facturator - Create invoices easy and fast: Facturator.zip: current stable versionFlat Database: Initial Release: This is the working initial release of FlatDBKharaPOS: MSDN Magazine Sample: This is the release that supports the MSDN magazine article "Enterprise Patterns with WCF RIA Services. Some of the project was affected by the upg...Let's Up: 1.1 (Build 100511): - Add short and long break feature.Mesopotamia Experiment: Mesopotamia 1.2.88: Primarly bug fixes... Bug Fixes - fixed bug in synapse mutating whereby new ones were of one side only eg, source or target - fixed bug in screen ...NodeXL: Network Overview, Discovery and Exploration for Excel: NodeXL Class Libraries, version 1.0.1.123: The NodeXL class libraries can be used to display network graphs in .NET applications. To include a NodeXL network graph in a WPF desktop or Windo...NodeXL: Network Overview, Discovery and Exploration for Excel: NodeXL Excel 2007 Template, version 1.0.1.123: The NodeXL Excel 2007 template displays a network graph using edge and vertex lists stored in an Excel 2007 workbook. What's NewThis version makes...Over Store: OverStore 1.18.0.0: Version number is increased. New callback events added: Persisting and Persisted, which are raised on actually writing object data into the stora...Reusable Library: V1.0.7: A collection of reusable abstractions for enterprise application developerReusable Library Demo: V1.0.5: A demonstration of reusable abstractions for enterprise application developerRuntime Intelligence Endpoint Starter Kit: Runtime Intelligence Endpoint Starter Kit 20100511: Added crossdomain policy files so Silverlight applications can send usage data from anywhere.Scorm2CC: SCORM2CC Release 0.12.0.58711 net-2.0 Alpha: SCORM2CC Release 0.12.0.58711 net-2.0 AlphaShake - C# Make: Shake v0.1.14: Updated API to match with the current documentation.SharePoint LogViewer: SharePoint Log Viewer 2.6: This is a maintenance release. 
It has bug several fixes.Site Directory for SharePoint 2010 (from Microsoft Consulting Services, UK): v1.4: As 1.3 with the following changes Addition of a 'Site Data' webpart Site Directory can now be a site collection root or sub-site Scan job now ...sMODfix: sMODfix v1.0: Basic Versionsmtp4dev: smtp4dev 2.0: Smtp4dev 2.0 is powered by a completely re-written server component and now offers SSL/TLS and AUTH support.SocialScapes: SocialScapes Sidebar 1.0: The SocialScapes Sidebar is the first release of the new SocialScapes suite. There will be more modules to come along with a complete data aggrega...SSIS Multiple Hash: Multiple Hash V1.2: This is version 1.2 of the Multiple Hash SSIS Component. It supports SQL 2005 and SQL 2008, although you have to download the correct install pack...Surfium: Beta build: Somehow testedTerminals: Terminals 1.9a - RDP6 Support: The major change in this release is that Terminals has been upgraded to require RDP Client 6 to be installed for creating RDP connections. Backward...TFS Compare: Release 3.0: This release provides a new feature - the ability to navigate back and forth between document differences. Also, this release provides support for...VCC: Latest build, v2.1.30511.0: Automatic drop of latest buildyoutubeFisher: youtubeFisher v2.0: What's new new method of youtube parameters capturing HD 720p video support full-HD 1080p video support add Cancel option to stop file downl...Most Popular ProjectsWBFS ManagerRawrAJAX Control ToolkitMicrosoft SQL Server Product Samples: DatabaseSilverlight ToolkitWindows Presentation Foundation (WPF)patterns & practices – Enterprise LibraryMicrosoft SQL Server Community & SamplesPHPExcelASP.NETMost Active Projectspatterns & practices – Enterprise LibraryMirror Testing SystemRawrThe Information Literacy Education Learning Environment (ILE)Caliburn: An Application Framework for WPF and SilverlightwhiteBlogEngine.NETPHPExceljQuery Library for SharePoint Web ServicesTweetSharp

    Read the article

  • How to Configure Windows Machine to Allow File Sharing with DNS Alias

    - by Michael Ferrante
    I have not seen a single article posted anywhere online that brings together all the settings one would need to do to make this work properly on Windows, so I thought I would post it here. To facilitate failover schemes, a common technique is to use DNS CNAME records (DNS Aliases) for different machine roles. Then instead of changing the Windows computername of the actual machine name, one can switch a DNS record to point to a new host. This can work on Microsoft Windows machines, but to make it work with file sharing the following configuration steps need to be taken. Outline The Problem The Solution Allowing other machines to use filesharing via the DNS Alias (DisableStrictNameChecking) Allowing server machine to use filesharing with itself via the DNS Alias (BackConnectionHostNames) Providing browse capabilities for multiple NetBIOS names (OptionalNames) Register the Kerberos service principal names (SPNs) for other Windows functions like Printing (setspn) References 1. The Problem On Windows machines, file sharing can work via the computer name, with or without full qualification, or by the IP Address. By default, however, filesharing will not work with arbitrary DNS aliases. To enable filesharing and other Windows services to work with DNS aliases, you must make registry changes as detailed below and reboot the machine. 2. The Solution Allowing other machines to use filesharing via the DNS Alias (DisableStrictNameChecking) This change alone will allow other machines on the network to connect to the machine using any arbitrary hostname. (However this change will not allow a machine to connect to itself via a hostname, see BackConnectionHostNames below). Edit the registry key HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\lanmanserver\parameters and add a value DisableStrictNameChecking of type DWORD set to 1. Allowing server machine to use filesharing with itself via the DNS Alias (BackConnectionHostNames) This change is necessary for a DNS alias to work with filesharing from a machine to find itself. This creates the Local Security Authority host names that can be referenced in an NTLM authentication request. To do this, follow these steps for all the nodes on the client computer: To the registry subkey HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa\MSV1_0, add new Multi-String Value BackConnectionHostNames In the Value data box, type the CNAME or the DNS alias, that is used for the local shares on the computer, and then click OK. Note: Type each host name on a separate line. Providing browse capabilities for multiple NetBIOS names (OptionalNames) Allows ability to see the network alias in the network browse list. Edit the registry key HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\lanmanserver\parameters and add a value OptionalNames of type Multi-String Add in a newline delimited list of names that should be registered under the NetBIOS browse entries Names should match NetBIOS conventions (i.e. not FQDN, just hostname) Register the Kerberos service principal names (SPNs) for other Windows functions like Printing (setspn) NOTE: Should not need to do this for basic functions to work, documented here for completeness. We had one situation in which the DNS alias was not working because there was an old SPN record interfering, so if other steps aren't working check if there are any stray SPN records. You must register the Kerberos service principal names (SPNs), the host name, and the fully-qualified domain name (FQDN) for all the new DNS alias (CNAME) records. 
If you do not do this, a Kerberos ticket request for a DNS alias (CNAME) record may fail and return the error code KDC_ERR_S_SPRINCIPAL_UNKNOWN. To view the Kerberos SPNs for the new DNS alias records, use the Setspn command-line tool (setspn.exe). The Setspn tool is included in Windows Server 2003 Support Tools. You can install Windows Server 2003 Support Tools from the Support\Tools folder of the Windows Server 2003 startup disk. How to use the tool to list all records for a computername: setspn -L computername To register the SPN for the DNS alias (CNAME) records, use the Setspn tool with the following syntax: setspn -A host/your_ALIAS_name computername setspn -A host/your_ALIAS_name.company.com computername 3. References All the Microsoft references work via: http://support.microsoft.com/kb/ Connecting to SMB share on a Windows 2000-based computer or a Windows Server 2003-based computer may not work with an alias name Covers the basics of making file sharing work properly with DNS alias records from other computers to the server computer. KB281308 Error message when you try to access a server locally by using its FQDN or its CNAME alias after you install Windows Server 2003 Service Pack 1: "Access denied" or "No network provider accepted the given network path" Covers how to make the DNS alias work with file sharing from the file server itself. KB926642 How to consolidate print servers by using DNS alias (CNAME) records in Windows Server 2003 and in Windows 2000 Server Covers more complex scenarios in which records in Active Directory may need to be updated for certain services to work properly and for browsing for such services to work properly, how to register the Kerberos service principal names (SPNs). KB870911 Distributed File System update to support consolidation roots in Windows Server 2003 Covers even more complex scenarios with DFS (discusses OptionalNames). KB829885
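    As a rough illustration of the two registry changes above, the sketch below applies DisableStrictNameChecking and BackConnectionHostNames programmatically through the standard Microsoft.Win32 registry API; the alias names "fileserver" and "fileserver.company.com" are placeholders. Run it elevated and reboot afterwards, as the article notes.
        using Microsoft.Win32;

        class DnsAliasRegistryFix
        {
            static void Main()
            {
                // Allow other machines to reach this server's shares via any DNS alias.
                using (var lanman = Registry.LocalMachine.OpenSubKey(
                    @"SYSTEM\CurrentControlSet\Services\lanmanserver\parameters", writable: true))
                {
                    lanman.SetValue("DisableStrictNameChecking", 1, RegistryValueKind.DWord);
                }

                // Allow the server to reach its own shares through the alias
                // (adds the aliases to the NTLM loopback check list).
                using (var msv10 = Registry.LocalMachine.OpenSubKey(
                    @"SYSTEM\CurrentControlSet\Control\Lsa\MSV1_0", writable: true))
                {
                    msv10.SetValue("BackConnectionHostNames",
                        new[] { "fileserver", "fileserver.company.com" },
                        RegistryValueKind.MultiString);
                }
            }
        }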

    Read the article

  • Wireless will not connect

    - by azz0r
    Hello, I have installed Ubuntu 10.10 on the same machine as my windows setup. However, it will not connect to my wireless network. It can see its there, it can attempt to connect, yet it will never connect. It will keep bringing up the password prompt everyso often. I have tried turning my security to WEP, I ended up turning it back to WPA2. It is set to AES (noted a few threads on google about that). Can you assist? I would love to dive into Ubuntu, but without the internet its pointless. --- lshw -C network --- *-network description: Ethernet interface product: RTL8111/8168B PCI Express Gigabit Ethernet controller vendor: Realtek Semiconductor Co., Ltd. physical id: 0 bus info: pci@0000:02:00.0 logical name: eth0 version: 02 serial: 00:1d:92:ea:cc:62 capacity: 1GB/s width: 64 bits clock: 33MHz capabilities: pm msi pciexpress msix vpd bus_master cap_list rom ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt-fd autonegotiation configuration: autonegotiation=on broadcast=yes driver=r8168 driverversion=8.020.00-NAPI duplex=half latency=0 link=no multicast=yes port=twisted pair resources: irq:29 ioport:e800(size=256) memory:feaff000-feafffff memory:f8ff0000-f8ffffff(prefetchable) memory:feac0000-feadffff(prefetchable) *-network description: Wireless interface physical id: 1 logical name: wlan0 serial: 00:15:af:72:a4:38 capabilities: ethernet physical wireless configuration: broadcast=yes multicast=yes wireless=IEEE 802.11bgn --- iwconfig ---- lo no wireless extensions. eth0 no wireless extensions. wlan0 IEEE 802.11bgn ESSID:"Wuggawoo" Mode:Managed Frequency:2.437 GHz Access Point: Not-Associated Tx-Power=9 dBm Retry long limit:7 RTS thr:off Fragment thr:off Encryption key:off Power Management:on --- cat /etc/network/interfaces ---- auto lo iface lo inet loopback logs deamon.log --- Jan 19 04:17:09 ubuntu wpa_supplicant[1289]: Authentication with 94:44:52:0d:22:0d timed out. Jan 19 04:17:09 ubuntu NetworkManager: <info> (wlan0): supplicant connection state: associating -> disconnected Jan 19 04:17:09 ubuntu NetworkManager: <info> (wlan0): supplicant connection state: disconnected -> scanning Jan 19 04:17:11 ubuntu wpa_supplicant[1289]: WPS-AP-AVAILABLE Jan 19 04:17:11 ubuntu wpa_supplicant[1289]: Trying to associate with 94:44:52:0d:22:0d (SSID='Wuggawoo' freq=2437 MHz) Jan 19 04:17:11 ubuntu NetworkManager: <info> (wlan0): supplicant connection state: scanning -> associating Jan 19 04:17:12 ubuntu NetworkManager: <info> Activation (wlan0/wireless): association took too long. Jan 19 04:17:12 ubuntu NetworkManager: <info> (wlan0): device state change: 5 -> 6 (reason 0) Jan 19 04:17:12 ubuntu NetworkManager: <info> Activation (wlan0/wireless): asking for new secrets Jan 19 04:17:12 ubuntu NetworkManager: <info> Activation (wlan0) Stage 1 of 5 (Device Prepare) scheduled... Jan 19 04:17:12 ubuntu NetworkManager: <info> Activation (wlan0) Stage 1 of 5 (Device Prepare) started... Jan 19 04:17:12 ubuntu NetworkManager: <info> (wlan0): device state change: 6 -> 4 (reason 0) Jan 19 04:17:12 ubuntu NetworkManager: <info> Activation (wlan0) Stage 2 of 5 (Device Configure) scheduled... Jan 19 04:17:12 ubuntu NetworkManager: <info> Activation (wlan0) Stage 1 of 5 (Device Prepare) complete. Jan 19 04:17:12 ubuntu NetworkManager: <info> Activation (wlan0) Stage 2 of 5 (Device Configure) starting... 
Jan 19 04:17:12 ubuntu NetworkManager: <info> (wlan0): device state change: 4 -> 5 (reason 0) Jan 19 04:17:12 ubuntu NetworkManager: <info> Activation (wlan0/wireless): connection 'Wuggawoo' has security, and secrets exist. No new secrets needed. Jan 19 04:17:12 ubuntu NetworkManager: <info> Config: added 'ssid' value 'Wuggawoo' Jan 19 04:17:12 ubuntu NetworkManager: <info> Config: added 'scan_ssid' value '1' Jan 19 04:17:12 ubuntu NetworkManager: <info> Config: added 'key_mgmt' value 'WPA-PSK' Jan 19 04:17:12 ubuntu NetworkManager: <info> Config: added 'psk' value '<omitted>' Jan 19 04:17:12 ubuntu NetworkManager: nm_setting_802_1x_get_pkcs11_engine_path: assertion `NM_IS_SETTING_802_1X (setting)' failed Jan 19 04:17:12 ubuntu NetworkManager: nm_setting_802_1x_get_pkcs11_module_path: assertion `NM_IS_SETTING_802_1X (setting)' failed Jan 19 04:17:12 ubuntu NetworkManager: <info> Activation (wlan0) Stage 2 of 5 (Device Configure) complete. Jan 19 04:17:12 ubuntu NetworkManager: <info> (wlan0): supplicant connection state: associating -> disconnected Jan 19 04:17:12 ubuntu NetworkManager: <info> Config: set interface ap_scan to 1 Jan 19 04:17:12 ubuntu NetworkManager: <info> (wlan0): supplicant connection state: disconnected -> scanning Jan 19 04:17:13 ubuntu wpa_supplicant[1289]: WPS-AP-AVAILABLE Jan 19 04:17:13 ubuntu wpa_supplicant[1289]: Trying to associate with 94:44:52:0d:22:0d (SSID='Wuggawoo' freq=2437 MHz) Jan 19 04:17:13 ubuntu NetworkManager: <info> (wlan0): supplicant connection state: scanning -> associating Jan 19 04:17:23 ubuntu wpa_supplicant[1289]: Authentication with 94:44:52:0d:22:0d timed out. Jan 19 04:17:23 ubuntu NetworkManager: <info> (wlan0): supplicant connection state: associating -> disconnected Jan 19 04:17:23 ubuntu NetworkManager: <info> (wlan0): supplicant connection state: disconnected -> scanning Jan 19 04:17:24 ubuntu AptDaemon: INFO: Initializing daemon Jan 19 04:17:25 ubuntu wpa_supplicant[1289]: WPS-AP-AVAILABLE Jan 19 04:17:25 ubuntu wpa_supplicant[1289]: Trying to associate with 94:44:52:0d:22:0d (SSID='Wuggawoo' freq=2437 MHz) Jan 19 04:17:25 ubuntu NetworkManager: <info> (wlan0): supplicant connection state: scanning -> associating Jan 19 04:17:27 ubuntu NetworkManager: <info> wlan0: link timed out. --- kern.log --- Jan 19 04:18:11 ubuntu kernel: [ 142.420024] wlan0: direct probe to AP 94:44:52:0d:22:0d timed out Jan 19 04:18:13 ubuntu kernel: [ 144.333847] wlan0: direct probe to AP 94:44:52:0d:22:0d (try 1) Jan 19 04:18:13 ubuntu kernel: [ 144.539996] wlan0: direct probe to AP 94:44:52:0d:22:0d (try 2) Jan 19 04:18:13 ubuntu kernel: [ 144.750027] wlan0: direct probe to AP 94:44:52:0d:22:0d (try 3) Jan 19 04:18:14 ubuntu kernel: [ 144.940022] wlan0: direct probe to AP 94:44:52:0d:22:0d timed out Jan 19 04:18:25 ubuntu kernel: [ 155.832995] wlan0: direct probe to AP 94:44:52:0d:22:0d (try 1) Jan 19 04:18:25 ubuntu kernel: [ 156.030046] wlan0: direct probe to AP 94:44:52:0d:22:0d (try 2) Jan 19 04:18:25 ubuntu kernel: [ 156.230039] wlan0: direct probe to AP 94:44:52:0d:22:0d (try 3) Jan 19 04:18:25 ubuntu kernel: [ 156.430039] wlan0: direct probe to AP 94:44:52:0d:22:0d timed out --- syslog --- Jan 19 04:18:46 ubuntu wpa_supplicant[1289]: Authentication with 94:44:52:0d:22:0d timed out. 
Jan 19 04:18:46 ubuntu NetworkManager: <info> (wlan0): supplicant connection state: associating -> disconnected Jan 19 04:18:46 ubuntu NetworkManager: <info> (wlan0): supplicant connection state: disconnected -> scanning Jan 19 04:18:48 ubuntu wpa_supplicant[1289]: WPS-AP-AVAILABLE Jan 19 04:18:48 ubuntu wpa_supplicant[1289]: Trying to associate with 94:44:52:0d:22:0d (SSID='Wuggawoo' freq=2437 MHz) Jan 19 04:18:48 ubuntu NetworkManager: <info> (wlan0): supplicant connection state: scanning -> associating Jan 19 04:18:48 ubuntu kernel: [ 178.833905] wlan0: direct probe to AP 94:44:52:0d:22:0d (try 1) Jan 19 04:18:48 ubuntu kernel: [ 179.030035] wlan0: direct probe to AP 94:44:52:0d:22:0d (try 2) Jan 19 04:18:48 ubuntu kernel: [ 179.230020] wlan0: direct probe to AP 94:44:52:0d:22:0d (try 3) Jan 19 04:18:48 ubuntu kernel: [ 179.433634] wlan0: direct probe to AP 94:44:52:0d:22:0d timed out lspci and lsusb lspci -- 00:00.0 Host bridge: Advanced Micro Devices [AMD] RS780 Host Bridge 00:02.0 PCI bridge: Advanced Micro Devices [AMD] RS780 PCI to PCI bridge (ext gfx port 0) 00:05.0 PCI bridge: Advanced Micro Devices [AMD] RS780 PCI to PCI bridge (PCIE port 1) 00:06.0 PCI bridge: Advanced Micro Devices [AMD] RS780 PCI to PCI bridge (PCIE port 2) 00:11.0 SATA controller: ATI Technologies Inc SB700/SB800 SATA Controller [AHCI mode] 00:12.0 USB Controller: ATI Technologies Inc SB700/SB800 USB OHCI0 Controller 00:12.1 USB Controller: ATI Technologies Inc SB700 USB OHCI1 Controller 00:12.2 USB Controller: ATI Technologies Inc SB700/SB800 USB EHCI Controller 00:13.0 USB Controller: ATI Technologies Inc SB700/SB800 USB OHCI0 Controller 00:13.1 USB Controller: ATI Technologies Inc SB700 USB OHCI1 Controller 00:13.2 USB Controller: ATI Technologies Inc SB700/SB800 USB EHCI Controller 00:14.0 SMBus: ATI Technologies Inc SBx00 SMBus Controller (rev 3a) 00:14.1 IDE interface: ATI Technologies Inc SB700/SB800 IDE Controller 00:14.2 Audio device: ATI Technologies Inc SBx00 Azalia (Intel HDA) 00:14.3 ISA bridge: ATI Technologies Inc SB700/SB800 LPC host controller 00:14.4 PCI bridge: ATI Technologies Inc SBx00 PCI to PCI Bridge 00:14.5 USB Controller: ATI Technologies Inc SB700/SB800 USB OHCI2 Controller 00:18.0 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] HyperTransport Configuration 00:18.1 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] Address Map 00:18.2 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] DRAM Controller 00:18.3 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] Miscellaneous Control 00:18.4 Host bridge: Advanced Micro Devices [AMD] K10 [Opteron, Athlon64, Sempron] Link Control 01:00.0 VGA compatible controller: nVidia Corporation G80 [GeForce 8800 GTS] (rev a2) 02:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168B PCI Express Gigabit Ethernet controller (rev 02) 03:00.0 FireWire (IEEE 1394): JMicron Technology Corp. IEEE 1394 Host Controller -- lsusb -- Bus 007 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 006 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 004 Device 003: ID 046d:c517 Logitech, Inc. LX710 Cordless Desktop Laser Bus 004 Device 002: ID 045e:0730 Microsoft Corp. 
Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 002 Device 003: ID 13d3:3247 IMC Networks 802.11 n/g/b Wireless LAN Adapter Bus 002 Device 002: ID 0718:0628 Imation Corp. Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 001 Device 003: ID 046d:08c2 Logitech, Inc. QuickCam PTZ Bus 001 Device 002: ID 0424:2228 Standard Microsystems Corp. 9-in-2 Card Reader Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub With no security on my router I still can't connect, I get: Jan 19 15:58:01 ubuntu wpa_supplicant[1165]: Authentication with 94:44:52:0d:22:0d timed out. Jan 19 15:58:01 ubuntu NetworkManager: <info> (wlan0): supplicant connection state: associating -> disconnected Jan 19 15:58:01 ubuntu NetworkManager: <info> (wlan0): supplicant connection state: disconnected -> scanning Jan 19 15:58:02 ubuntu wpa_supplicant[1165]: WPS-AP-AVAILABLE Jan 19 15:58:02 ubuntu wpa_supplicant[1165]: Trying to associate with 94:44:52:0d:22:0d (SSID='Wuggawoo' freq=2437 MHz) Jan 19 15:58:02 ubuntu wpa_supplicant[1165]: Association request to the driver failed Jan 19 15:58:02 ubuntu NetworkManager: <info> (wlan0): supplicant connection state: scanning -> associating Jan 19 15:58:05 ubuntu NetworkManager: <info> wlan0: link timed out. Jan 19 15:58:07 ubuntu wpa_supplicant[1165]: Authentication with 94:44:52:0d:22:0d timed out. Jan 19 15:58:07 ubuntu NetworkManager: <info> (wlan0): supplicant connection state: associating -> disconnected Jan 19 15:58:07 ubuntu NetworkManager: <info> (wlan0): supplicant connec

    Read the article

  • How to find and fix performance problems in ORM powered applications

    - by FransBouma
    Once in a while we get requests about how to fix performance problems with our framework. As it comes down to following the same steps and looking into the same things every single time, I decided to write a blogpost about it instead, so more people can learn from this and solve performance problems in their O/R mapper powered applications. In some parts it's focused on LLBLGen Pro but it's also usable for other O/R mapping frameworks, as the vast majority of performance problems in O/R mapper powered applications are not specific for a certain O/R mapper framework. Too often, the developer looks at the wrong part of the application, trying to fix what isn't a problem in that part, and getting frustrated that 'things are so slow with <insert your favorite framework X here>'. I'm in the O/R mapper business for a long time now (almost 10 years, full time) and as it's a small world, we O/R mapper developers know almost all tricks to pull off by now: we all know what to do to make task ABC faster and what compromises (because there are almost always compromises) to deal with if we decide to make ABC faster that way. Some O/R mapper frameworks are faster in X, others in Y, but you can be sure the difference is mainly a result of a compromise some developers are willing to deal with and others aren't. That's why the O/R mapper frameworks on the market today are different in many ways, even though they all fetch and save entities from and to a database. I'm not suggesting there's no room for improvement in today's O/R mapper frameworks, there always is, but it's not a matter of 'the slowness of the application is caused by the O/R mapper' anymore. Perhaps query generation can be optimized a bit here, row materialization can be optimized a bit there, but it's mainly coming down to milliseconds. Still worth it if you're a framework developer, but it's not much compared to the time spend inside databases and in user code: if a complete fetch takes 40ms or 50ms (from call to entity object collection), it won't make a difference for your application as that 10ms difference won't be noticed. That's why it's very important to find the real locations of the problems so developers can fix them properly and don't get frustrated because their quest to get a fast, performing application failed. Performance tuning basics and rules Finding and fixing performance problems in any application is a strict procedure with four prescribed steps: isolate, analyze, interpret and fix, in that order. It's key that you don't skip a step nor make assumptions: these steps help you find the reason of a problem which seems to be there, and how to fix it or leave it as-is. Skipping a step, or when you assume things will be bad/slow without doing analysis will lead to the path of premature optimization and won't actually solve your problems, only create new ones. The most important rule of finding and fixing performance problems in software is that you have to understand what 'performance problem' actually means. Most developers will say "when a piece of software / code is slow, you have a performance problem". But is that actually the case? If I write a Linq query which will aggregate, group and sort 5 million rows from several tables to produce a resultset of 10 rows, it might take more than a couple of milliseconds before that resultset is ready to be consumed by other logic. 
If I solely look at the Linq query, the code consuming the resultset of the 10 rows and then look at the time it takes to complete the whole procedure, it will appear to me to be slow: all that time taken to produce and consume 10 rows? But if you look closer, if you analyze and interpret the situation, you'll see it does a tremendous amount of work, and in that light it might even be extremely fast. With every performance problem you encounter, always do realize that what you're trying to solve is perhaps not a technical problem at all, but a perception problem. The second most important rule you have to understand is based on the old saying "Penny wise, Pound Foolish": the part which takes e.g. 5% of the total time T for a given task isn't worth optimizing if you have another part which takes a much larger part of the total time T for that same given task. Optimizing parts which are relatively insignificant for the total time taken is not going to bring you better results overall, even if you totally optimize that part away. This is the core reason why analysis of the complete set of application parts which participate in a given task is key to being successful in solving performance problems: No analysis -> no problem -> no solution. One warning up front: hunting for performance will always include making compromises. Fast software can be made maintainable, but if you want to squeeze as much performance out of your software, you will inevitably be faced with the dilemma of compromising one or more from the group {readability, maintainability, features} for the extra performance you think you'll gain. It's then up to you to decide whether it's worth it. In almost all cases it's not. The reason for this is simple: the vast majority of performance problems can be solved by implementing the proper algorithms, the ones with proven Big O-characteristics so you know the performance you'll get plus you know the algorithm will work. The time taken by the algorithm implementing code is inevitable: you already implemented the best algorithm. You might find some optimizations on the technical level but in general these are minor. Let's look at the four steps to see how they guide us through the quest to find and fix performance problems. Isolate The first thing you need to do is to isolate the areas in your application which are assumed to be slow. For example, if your application is a web application and a given page is taking several seconds or even minutes to load, it's a good candidate to check out. It's important to start with the isolate step because it allows you to focus on a single code path per area with a clear begin and end and ignore the rest. The rest of the steps are taken per identified problematic area. Keep in mind that isolation focuses on tasks in an application, not code snippets. A task is something that's started in your application by either another task or the user, or another program, and has a beginning and an end. You can see a task as a piece of functionality offered by your application.  Analyze Once you've determined the problem areas, you have to perform analysis on the code paths of each area, to see where the performance problems occur and which areas are not the problem. 
This is a multi-layered effort: an application which uses an O/R mapper typically consists of multiple parts: there's likely some kind of interface (web, webservice, windows etc.), a part which controls the interface and business logic, the O/R mapper part and the RDBMS, all connected with either a network or inter-process connections provided by the OS or other means. Each of these parts, including the connectivity plumbing, eat up a part of the total time it takes to complete a task, e.g. load a webpage with all orders of a given customer X. To understand which parts participate in the task / area we're investigating and how much they contribute to the total time taken to complete the task, analysis of each participating task is essential. Start with the code you wrote which starts the task, analyze the code and track the path it follows through your application. What does the code do along the way, verify whether it's correct or not. Analyze whether you have implemented the right algorithms in your code for this particular area. Remember we're looking at one area at a time, which means we're ignoring all other code paths, just the code path of the current problematic area, from begin to end and back. Don't dig in and start optimizing at the code level just yet. We're just analyzing. If your analysis reveals big architectural stupidity, it's perhaps a good idea to rethink the architecture at this point. For the rest, we're analyzing which means we collect data about what could be wrong, for each participating part of the complete application. Reviewing the code you wrote is a good tool to get deeper understanding of what is going on for a given task but ultimately it lacks precision and overview what really happens: humans aren't good code interpreters, computers are. We therefore need to utilize tools to get deeper understanding about which parts contribute how much time to the total task, triggered by which other parts and for example how many times are they called. There are two different kind of tools which are necessary: .NET profilers and O/R mapper / RDBMS profilers. .NET profiling .NET profilers (e.g. dotTrace by JetBrains or Ants by Red Gate software) show exactly which pieces of code are called, how many times they're called, and the time it took to run that piece of code, at the method level and sometimes even at the line level. The .NET profilers are essential tools for understanding whether the time taken to complete a given task / area in your application is consumed by .NET code, where exactly in your code, the path to that code, how many times that code was called by other code and thus reveals where hotspots are located: the areas where a solution can be found. Importantly, they also reveal which areas can be left alone: remember our penny wise pound foolish saying: if a profiler reveals that a group of methods are fast, or don't contribute much to the total time taken for a given task, ignore them. Even if the code in them is perhaps complex and looks like a candidate for optimization: you can work all day on that, it won't matter.  As we're focusing on a single area of the application, it's best to start profiling right before you actually activate the task/area. Most .NET profilers support this by starting the application without starting the profiling procedure just yet. 
You navigate to the particular part which is slow, start profiling in the profiler, in your application you perform the actions which are considered slow, and afterwards you get a snapshot in the profiler. The snapshot contains the data collected by the profiler during the slow action, so most data is produced by code in the area to investigate. This is important, because it allows you to stay focused on a single area. O/R mapper and RDBMS profiling .NET profilers give you a good insight in the .NET side of things, but not in the RDBMS side of the application. As this article is about O/R mapper powered applications, we're also looking at databases, and the software making it possible to consume the database in your application: the O/R mapper. To understand which parts of the O/R mapper and database participate how much to the total time taken for task T, we need different tools. There are two kind of tools focusing on O/R mappers and database performance profiling: O/R mapper profilers and RDBMS profilers. For O/R mapper profilers, you can look at LLBLGen Prof by hibernating rhinos or the Linq to Sql/LLBLGen Pro profiler by Huagati. Hibernating rhinos also have profilers for other O/R mappers like NHibernate (NHProf) and Entity Framework (EFProf) and work the same as LLBLGen Prof. For RDBMS profilers, you have to look whether the RDBMS vendor has a profiler. For example for SQL Server, the profiler is shipped with SQL Server, for Oracle it's build into the RDBMS, however there are also 3rd party tools. Which tool you're using isn't really important, what's important is that you get insight in which queries are executed during the task / area we're currently focused on and how long they took. Here, the O/R mapper profilers have an advantage as they collect the time it took to execute the query from the application's perspective so they also collect the time it took to transport data across the network. This is important because a query which returns a massive resultset or a resultset with large blob/clob/ntext/image fields takes more time to get transported across the network than a small resultset and a database profiler doesn't take this into account most of the time. Another tool to use in this case, which is more low level and not all O/R mappers support it (though LLBLGen Pro and NHibernate as well do) is tracing: most O/R mappers offer some form of tracing or logging system which you can use to collect the SQL generated and executed and often also other activity behind the scenes. While tracing can produce a tremendous amount of data in some cases, it also gives insight in what's going on. Interpret After we've completed the analysis step it's time to look at the data we've collected. We've done code reviews to see whether we've done anything stupid and which parts actually take place and if the proper algorithms have been implemented. We've done .NET profiling to see which parts are choke points and how much time they contribute to the total time taken to complete the task we're investigating. We've performed O/R mapper profiling and RDBMS profiling to see which queries were executed during the task, how many queries were generated and executed and how long they took to complete, including network transportation. All this data reveals two things: which parts are big contributors to the total time taken and which parts are irrelevant. Both aspects are very important. The parts which are irrelevant (i.e. 
don't contribute significantly to the total time taken) can be ignored from now on, we won't look at them. The parts which contribute a lot to the total time taken are important to look at. We now have to first look at the .NET profiler results, to see whether the time taken is consumed in our own code, in .NET framework code, in the O/R mapper itself or somewhere else. For example if most of the time is consumed by DbCommand.ExecuteReader, the time it took to complete the task is depending on the time the data is fetched from the database. If there was just 1 query executed, according to tracing or O/R mapper profilers / RDBMS profilers, check whether that query is optimal, uses indexes or has to deal with a lot of data. Interpret means that you follow the path from begin to end through the data collected and determine where, along the path, the most time is contributed. It also means that you have to check whether this was expected or is totally unexpected. My previous example of the 10 row resultset of a query which groups millions of rows will likely reveal that a long time is spend inside the database and almost no time is spend in the .NET code, meaning the RDBMS part contributes the most to the total time taken, the rest is compared to that time, irrelevant. Considering the vastness of the source data set, it's expected this will take some time. However, does it need tweaking? Perhaps all possible tweaks are already in place. In the interpret step you then have to decide that further action in this area is necessary or not, based on what the analysis results show: if the analysis results were unexpected and in the area where the most time is contributed to the total time taken is room for improvement, action should be taken. If not, you can only accept the situation and move on. In all cases, document your decision together with the analysis you've done. If you decide that the perceived performance problem is actually expected due to the nature of the task performed, it's essential that in the future when someone else looks at the application and starts asking questions you can answer them properly and new analysis is only necessary if situations changed. Fix After interpreting the analysis results you've concluded that some areas need adjustment. This is the fix step: you're actively correcting the performance problem with proper action targeted at the real cause. In many cases related to O/R mapper powered applications it means you'll use different features of the O/R mapper to achieve the same goal, or apply optimizations at the RDBMS level. It could also mean you apply caching inside your application (compromise memory consumption over performance) to avoid unnecessary re-querying data and re-consuming the results. After applying a change, it's key you re-do the analysis and interpretation steps: compare the results and expectations with what you had before, to see whether your actions had any effect or whether it moved the problem to a different part of the application. Don't fall into the trap to do partly analysis: do the full analysis again: .NET profiling and O/R mapper / RDBMS profiling. It might very well be that the changes you've made make one part faster but another part significantly slower, in such a way that the overall problem hasn't changed at all. 
Performance tuning is dealing with compromises and making choices: to use one feature over the other, to accept a higher memory footprint, to go away from the strict-OO path and execute queries directly onto the RDBMS, these are choices and compromises which will cross your path if you want to fix performance problems with respect to O/R mappers or data-access and databases in general. In most cases it's not a big issue: alternatives are often good choices too and the compromises aren't that hard to deal with. What is important is that you document why you made a choice, a compromise: which analysis data, which interpretation led you to the choice made. This is key for good maintainability in the years to come. Most common performance problems with O/R mappers Below is an incomplete list of common performance problems related to data-access / O/R mappers / RDBMS code. It will help you with fixing the hotspots you found in the interpretation step. SELECT N+1: (Lazy-loading specific). Lazy loading triggered performance bottlenecks. Consider a list of Orders bound to a grid. You have a Field mapped onto a related field in Order, Customer.CompanyName. Showing this column in the grid will make the grid fetch (indirectly) for each row the Customer row. This means you'll get for the single list not 1 query (for the orders) but 1+(the number of orders shown) queries. To solve this: use eager loading using a prefetch path to fetch the customers with the orders. SELECT N+1 is easy to spot with an O/R mapper profiler or RDBMS profiler: if you see a lot of identical queries executed at once, you have this problem. Prefetch paths using many path nodes or sorting, or limiting. Eager loading problem. Prefetch paths can help with performance, but as 1 query is fetched per node, it can be the number of data fetched in a child node is bigger than you think. Also consider that data in every node is merged on the client within the parent. This is fast, but it also can take some time if you fetch massive amounts of entities. If you keep fetches small, you can use tuning parameters like the ParameterizedPrefetchPathThreshold setting to get more optimal queries. Deep inheritance hierarchies of type Target Per Entity/Type. If you use inheritance of type Target per Entity / Type (each type in the inheritance hierarchy is mapped onto its own table/view), fetches will join subtype- and supertype tables in many cases, which can lead to a lot of performance problems if the hierarchy has many types. With this problem, keep inheritance to a minimum if possible, or switch to a hierarchy of type Target Per Hierarchy, which means all entities in the inheritance hierarchy are mapped onto the same table/view. Of course this has its own set of drawbacks, but it's a compromise you might want to take. Fetching massive amounts of data by fetching large lists of entities. LLBLGen Pro supports paging (and limiting the # of rows returned), which is often key to process through large sets of data. Use paging on the RDBMS if possible (so a query is executed which returns only the rows in the page requested). When using paging in a web application, be sure that you switch server-side paging on on the datasourcecontrol used. In this case, paging on the grid alone is not enough: this can lead to fetching a lot of data which is then loaded into the grid and paged there. Keep note that analyzing queries for paging could lead to the false assumption that paging doesn't occur, e.g. 
when the query contains a field of type ntext/image/clob/blob and DISTINCT can't be applied while it should have (e.g. due to a join): the datareader will do DISTINCT filtering on the client. this is a little slower but it does perform paging functionality on the data-reader so it won't fetch all rows even if the query suggests it does. Fetch massive amounts of data because blob/clob/ntext/image fields aren't excluded. LLBLGen Pro supports field exclusion for queries. You can exclude fields (also in prefetch paths) per query to avoid fetching all fields of an entity, e.g. when you don't need them for the logic consuming the resultset. Excluding fields can greatly reduce the amount of time spend on data-transport across the network. Use this optimization if you see that there's a big difference between query execution time on the RDBMS and the time reported by the .NET profiler for the ExecuteReader method call. Doing client-side aggregates/scalar calculations by consuming a lot of data. If possible, try to formulate a scalar query or group by query using the projection system or GetScalar functionality of LLBLGen Pro to do data consumption on the RDBMS server. It's far more efficient to process data on the RDBMS server than to first load it all in memory, then traverse the data in-memory to calculate a value. Using .ToList() constructs inside linq queries. It might be you use .ToList() somewhere in a Linq query which makes the query be run partially in-memory. Example: var q = from c in metaData.Customers.ToList() where c.Country=="Norway" select c; This will actually fetch all customers in-memory and do an in-memory filtering, as the linq query is defined on an IEnumerable<T>, and not on the IQueryable<T>. Linq is nice, but it can often be a bit unclear where some parts of a Linq query might run. Fetching all entities to delete into memory first. To delete a set of entities it's rather inefficient to first fetch them all into memory and then delete them one by one. It's more efficient to execute a DELETE FROM ... WHERE query on the database directly to delete the entities in one go. LLBLGen Pro supports this feature, and so do some other O/R mappers. It's not always possible to do this operation in the context of an O/R mapper however: if an O/R mapper relies on a cache, these kind of operations are likely not supported because they make it impossible to track whether an entity is actually removed from the DB and thus can be removed from the cache. Fetching all entities to update with an expression into memory first. Similar to the previous point: it is more efficient to update a set of entities directly with a single UPDATE query using an expression instead of fetching the entities into memory first and then updating the entities in a loop, and afterwards saving them. It might however be a compromise you don't want to take as it is working around the idea of having an object graph in memory which is manipulated and instead makes the code fully aware there's a RDBMS somewhere. Conclusion Performance tuning is almost always about compromises and making choices. It's also about knowing where to look and how the systems in play behave and should behave. The four steps I provided should help you stay focused on the real problem and lead you towards the solution. Knowing how to optimally use the systems participating in your own code (.NET framework, O/R mapper, RDBMS, network/services) is key for success as well as knowing what's going on inside the application you built. 
I hope you'll find this guide useful in tracking down performance problems and dealing with them in a useful way.  
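    To make the .ToList() pitfall from the list above concrete, here is the same Linq query written both ways, building on the snippet in the article (metaData is the Linq metadata object generated by the O/R mapper, as in the original example). Only the second form lets the provider translate the filter into SQL.
        // In-memory filtering: .ToList() fetches ALL customers first, because the
        // rest of the query now runs against IEnumerable<T> on the client.
        var slow = from c in metaData.Customers.ToList()
                   where c.Country == "Norway"
                   select c;

        // Server-side filtering: without .ToList() the query stays an IQueryable<T>,
        // so the WHERE clause is translated to SQL and only the matching rows
        // are fetched from the database.
        var fast = from c in metaData.Customers
                   where c.Country == "Norway"
                   select c;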

    Read the article

  • Stackify Aims to Put More ‘Dev’ in ‘DevOps’

    - by Matt Watson
    Originally published on VisualStudioMagazine.com on 8/22/2012 by Keith Ward.The Kansas City-based startup wants to make it easier for developers to examine the network stack and find problems in code.The first part of “DevOps” is “Dev”. But according to Matt Watson, Devs aren’t connected enough with Ops, and it’s time that changed.He founded the startup company Stackify earlier this year to do something about it. Stackify gives developers unprecedented access to the IT side of the equation, Watson says, without putting additional burden on the system and network administrators who ultimately ensure the health of the environment.“We need a product designed for developers, with the goal of getting them more involved in operations and app support. Now, there’s next to nothing designed for developers,” Watson says. Stackify allows developers to search the network stack to troubleshoot problems in their software that might otherwise take days of coordination between development and IT teams to solve.Stackify allows developers to search log files, configuration files, databases and other infrastructure to locate errors. A key to this is that the developers are normally granted read-only access, soothing admin fears that developers will upload bad code to their servers.Implementation starts with data collection on the servers. Among the information gleaned is application discovery, server monitoring, file access, and other data collection, according to Stackify’s Web site. Watson confirmed that Stackify works seamlessly with virtualized environments as well.Although the data collection software must be installed on Windows servers, it can monitor both Windows and Linux servers. Once collection’s finished, developers have the kind of information they need, without causing heartburn for the IT staff.Stackify is a 100 percent cloud-based service. The company uses Windows Azure for hosting, a decision Watson’s happy with. With Azure, he says, “It’s nice to have all the dev tools like cache and table storage.” Although there have been a few glitches here and there with the service, it’s run very smoothly for the most part, he adds.Stackify is currently in a closed beta, with a public release scheduled for October. Watson says that pricing is expected to be $25 per month, per server, with volume discounts available. He adds that the target audience is companies with at least five developers.Watson founded Stackify after selling his last company, VinSolutions, to AutoTrader.com for “close to $150 million”, according to press accounts. Watson has since  founded the Watson Technology Group, which focuses on angel investing.About the Author: Keith Ward is the editor in chief of Visual Studio Magazine.

    Read the article

  • Four Emerging Payment Stories

    - by David Dorf
    The world of alternate payments has been moving fast of late.  Innovation in this area will help both consumers and retailers, but probably hurt the banks (at least that's the plan).  Here are four recent news items in this area: Dwolla, a start-up in Iowa, is trying to make credit cards obsolete.  Twelve guys in Des Moines are using $1.3M they raised to allow businesses to skip the credit card networks and avoid the fees.  Today they move about $1M a day across their network with an average transaction size of $500. Instead of charging merchants 2.9% plus $.30 per transaction, Dwolla charges a quarter -- yep, that coin featuring George Washington. Dwolla (Web + Dollar = Dwolla) avoids the credit networks and connects directly to bank accounts using the bank's ACH network.  They are signing up banks and merchants targeting both B2B and C2B as well as P2P payments.  They leverage social networks to notify people they have a money transfer, and also have a mobile app that uses GPS location. However, all is not rosy.  There have been complaints about unexpected chargebacks and with debit fees being reduced by the big banks, the need is not as pronounced.  The big banks are working on their own network called clearXchange that could provide stiff competition. VeriFone just bought European payment processor Point for around $1B.  By itself this would not have caught my attention except for the fact that VeriFone also announced the acquisition of GlobalBay earlier this month.  In addition to their core business of selling stand-beside payment terminals, with GlobalBay they get employee-operated mobile selling tools and with Point they get a very big payment processing platform. MasterCard and Intel announced a partnership around payments, starting with PayPass, MasterCard's new payment technology.  Intel will lend its expertise to add additional levels of security, which seems to be the biggest barrier for consumer adoption.  Everyone is scrambling to get their piece of cash transactions, which still represents 85% of all transactions. Apple was awarded another mobile payment patent further cementing the rumors that the iPhone 5 will support NFC payments.  As usual, Apple is upsetting the apple cart (sorry) by moving control of key data from the carriers to Apple.  With Apple's vast number of iTunes accounts, they have a ready-made customer base to use the payment infrastructure, which I bet will slowly transition people away from credit cards and toward cheaper ACH.  Gary Schwartz explains the three step process Apple is taking to become a payment processor. Below is a picture I drew representing payments in the retail industry. There's certainly a lot of innovation happening.
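
    To put the quoted fee numbers side by side, here's a quick back-of-the-envelope sketch in C#. This is just arithmetic on the figures cited above (2.9% plus $0.30 versus a flat $0.25, applied to Dwolla's reported $500 average transaction); it is not based on anything from Dwolla's or the card networks' own documentation.

        using System;

        class FeeComparison
        {
            static void Main()
            {
                decimal transaction = 500.00m;                    // Dwolla's reported average transaction size
                decimal cardFee = transaction * 0.029m + 0.30m;   // typical card-network pricing quoted in the article
                decimal dwollaFee = 0.25m;                        // Dwolla's flat per-transaction fee

                Console.WriteLine($"Card network fee: {cardFee:C}");   // prints $14.80 (en-US culture)
                Console.WriteLine($"Dwolla fee:       {dwollaFee:C}"); // prints $0.25
            }
        }

    On a $500 transaction that is roughly a 98% reduction in fees, which is the economics behind the claim that Dwolla threatens the card networks.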

    Read the article

  • Querying Networking Statistics: dlstat(1M)

    - by user12612042
    Oracle Solaris 11 took another big leap forward in networking technologies, providing a reliable, secure and scalable infrastructure to meet the growing needs of today's datacenter implementations. Oracle Solaris 11 introduced a new and powerful network stack architecture, also known as Project Crossbow. Starting with Solaris 11, we introduced a command-line tool, dlstat(1M), to query network statistics. dlstat (for datalink statistics) is the statistics-querying counterpart of dladm(1M), the datalink administration tool. The tool is very easy to get started with: just type dlstat at a shell prompt on Solaris 11 (or later). For example:

        # dlstat
        LINK    IPKTS   RBYTES    OPKTS   OBYTES
        net0  834.11K  145.91M  575.19K  104.24M
        net1    7.87K    2.04M        0        0

    In this example, the system has two datalinks, net0 and net1. The output columns show input packets/bytes as well as output packets/bytes. The numbers are abbreviated in xxx.xxUnit format; however, you can get the actual counts by simply running dlstat -u R (R for raw):

        # dlstat -u R
        LINK   IPKTS     RBYTES   OPKTS     OBYTES
        net0  834271  145931244  575246  104242934
        net1    7869    2036958       0          0

    In addition, dlstat also supports various subcommands:

        dlstat help
        The following subcommands are supported:
        Stats : show-aggr show-ether show-link show-phys show-bridge
        For more info, run: dlstat help {default|}

    I will describe only a couple of the interesting subcommands/options here; for a comprehensive description of all the dlstat subcommands, refer to dlstat's official manual. For NICs that support multiple rings (e.g. ixgbe), dlstat show-phys -r allows us to query per-Rx-ring statistics. For example:

        dlstat show-phys -r net4
        LINK  TYPE  INDEX  IPKTS  RBYTES
        net4    rx      0      0       0
        net4    rx      1      0       0
        net4    rx      2      0       0
        net4    rx      3      0       0
        net4    rx      4      0       0
        net4    rx      5      0       0
        net4    rx      6      0       0
        net4    rx      7      0       0

    In this case, net4 is just a vanity name for an ixgbe datalink. This view is especially useful if you want to see how the network traffic is spread across all the available rings. Furthermore, any of the dlstat commands can be run with the -i option to periodically query and display stats. For example, running dlstat show-phys -r net4 -i 5 will emit per-Rx-ring stats every 5 seconds, which is especially useful while analyzing a live system. Similarly, dlstat show-phys -t can be used to query per-Tx-ring stats, and -r and -t can be combined, as dlstat show-phys -rt, to query both Rx and Tx stats at the same time. Finally, there is also a quick way to dump ALL the stats: just run dlstat -A. You probably want to redirect this output to a file, because you are going to get a whole load of stats :-).

    Read the article

  • Mouse and Keyboard Freeze

    - by Kev
    I installed Ubuntu 10.10 today and have had mouse problems ever since. Symptoms: at some arbitrary point in time (frequency: 2-3 times per hour), the mouse and keyboard stop working, seemingly forever. When I start System Monitor, I find that the network was shut down just before the mouse froze. Sometimes my keyboard keeps typing one key, for example: 77777777777777777777777777777777777777777777777777777..... (it keeps typing for 20 sec). I found a script that works around the freeze (I trigger it by hitting the power button):

        -----------------/etc/acpi/powerbtn.sh------------------------
        event=button[ /]power
        action=/usr/sbin/fix_mouse.sh

        -----------------/usr/sbin/fix_mouse.sh------------------------
        rmmod psmouse
        modprobe psmouse

    Yesterday I installed Ubuntu 10.04, which FAILED and also had the mouse problem. When I switched back to Windows XP, the network card was down: it kept connecting and disconnecting once per second.

    CPU: i5
    Motherboard: ASUS P7P55D
    OS: Windows XP + Ubuntu 10.10
    Video Card: ATI 5770
    Mouse, Keyboard: PS/2

    Read the article

  • 12.04 on VirtualBox - No internet access on Guest OS

    - by Muzab
    I have Windows 7 as my host OS, with wired Ethernet working. My guest OS on VirtualBox is Ubuntu 12.04. The problem is that I cannot browse from the guest. I have: enabled sharing on Windows; tried NAT, Bridged Adapter, etc. (all possible combinations); with NAT, tried the PCnet-FAST II (Am79c973) adapter. Ubuntu says "connected to wired ethernet1", but I cannot access websites. VirtualBox Host-Only Network in the Network and Sharing Center shows that I don't have IPv4 or IPv6 connectivity, meaning I don't have DHCP configured. Strangely, the DNS address is correct in the Ubuntu network properties.

    Read the article

  • Move Files from a Failing PC with an Ubuntu Live CD

    - by Trevor Bekolay
    You’ve loaded the Ubuntu Live CD to salvage files from a failing system, but where do you store the recovered files? We’ll show you how to store them on external drives, drives on the same PC, a Windows home network, and other locations. We’ve shown you how to recover data like a forensics expert, but you can’t store recovered files back on your failed hard drive! There are lots of ways to transfer the files you access from an Ubuntu Live CD to a place that a stable Windows machine can access them. We’ll go through several methods, starting each section from the Ubuntu desktop – if you don’t yet have an Ubuntu Live CD, follow our guide to creating a bootable USB flash drive, and then our instructions for booting into Ubuntu. If your BIOS doesn’t let you boot using a USB flash drive, don’t worry, we’ve got you covered! Use a Healthy Hard Drive If your computer has more than one hard drive, or your hard drive is healthy and you’re in Ubuntu for non-recovery reasons, then accessing your hard drive is easy as pie, even if the hard drive is formatted for Windows. To access a hard drive, it must first be mounted. To mount a healthy hard drive, you just have to select it from the Places menu at the top-left of the screen. You will have to identify your hard drive by its size. Clicking on the appropriate hard drive mounts it, and opens it in a file browser. You can now move files to this hard drive by drag-and-drop or copy-and-paste, both of which are done the same way they’re done in Windows. Once a hard drive, or other external storage device, is mounted, it will show up in the /media directory. To see a list of currently mounted storage devices, navigate to /media by clicking on File System in a File Browser window, and then double-clicking on the media folder. Right now, our media folder contains links to the hard drive, which Ubuntu has assigned a terribly uninformative label, and the PLoP Boot Manager CD that is currently in the CD-ROM drive. Connect a USB Hard Drive or Flash Drive An external USB hard drive gives you the advantage of portability, and is still large enough to store an entire hard disk dump, if need be. Flash drives are also very quick and easy to connect, though they are limited in how much they can store. When you plug a USB hard drive or flash drive in, Ubuntu should automatically detect it and mount it. It may even open it in a File Browser automatically. Since it’s been mounted, you will also see it show up on the desktop, and in the /media folder. Once it’s been mounted, you can access it and store files on it like you would any other folder in Ubuntu. If, for whatever reason, it doesn’t mount automatically, click on Places in the top-left of your screen and select your USB device. If it does not show up in the Places list, then you may need to format your USB drive. To properly remove the USB drive when you’re done moving files, right click on the desktop icon or the folder in /media and select Safely Remove Drive. If you’re not given that option, then Eject or Unmount will effectively do the same thing. Connect to a Windows PC on your Local Network If you have another PC or a laptop connected through the same router (wired or wireless) then you can transfer files over the network relatively quickly. To do this, we will share one or more folders from the machine booted up with the Ubuntu Live CD over the network, letting our Windows PC grab the files contained in that folder. As an example, we’re going to share a folder on the desktop called ToShare. 
Right-click on the folder you want to share, and click Sharing Options. A Folder Sharing window will pop up. Check the box labeled Share this folder. A window will pop up about the sharing service. Click the Install service button. Some files will be downloaded, and then installed. When they’re done installing, you’ll be appropriately notified. You will be prompted to restart your session. Don’t worry, this won’t actually log you out, so go ahead and press the Restart session button. The Folder Sharing window returns, with Share this folder now checked. Edit the Share name if you’d like, and add checkmarks in the two checkboxes below the text fields. Click Create Share. Nautilus will ask your permission to add some permissions to the folder you want to share. Allow it to Add the permissions automatically. The folder is now shared, as evidenced by the new arrows above the folder’s icon. At this point, you are done with the Ubuntu machine. Head to your Windows PC, and open up Windows Explorer. Click on Network in the list on the left, and you should see a machine called UBUNTU in the right pane. Note: This example is shown in Windows 7; the same steps should work for Windows XP and Vista, but we have not tested them. Double-click on UBUNTU, and you will see the folder you shared earlier! As well as any other folders you’ve shared from Ubuntu. Double click on the folder you want to access, and from there, you can move the files from the machine booted with Ubuntu to your Windows PC. Upload to an Online Service There are many services online that will allow you to upload files, either temporarily or permanently. As long as you aren’t transferring an entire hard drive, these services should allow you to transfer your important files from the Ubuntu environment to any other machine with Internet access. We recommend compressing the files that you want to move, both to save a little bit of bandwidth, and to save time clicking on files, as uploading a single file will be much less work than a ton of little files. To compress one or more files or folders, select them, and then right-click on one of the members of the group. Click Compress…. Give the compressed file a suitable name, and then select a compression format. We’re using .zip because we can open it anywhere, and the compression rate is acceptable. Click Create and the compressed file will show up in the location selected in the Compress window. Dropbox If you have a Dropbox account, then you can easily upload files from the Ubuntu environment to Dropbox. There is no explicit limit on the size of file that can be uploaded to Dropbox, though a free account begins with a total limit of 2 GB of files in total. Access your account through Firefox, which can be opened by clicking on the Firefox logo to the right of the System menu at the top of the screen. Once into your account, press the Upload button on top of the main file list. Because Flash is not installed in the Live CD environment, you will have to switch to the basic uploader. Click Browse…find your compressed file, and then click Upload file. Depending on the size of the file, this could take some time. However, once the file has been uploaded, it should show up on any computer connected through Dropbox in a matter of minutes. Google Docs Google Docs allows the upload of any type of file – making it an ideal place to upload files that we want to access from another computer. While your total allocation of space varies (mine is around 7.5 GB), there is a per-file maximum of 1 GB. 
    Log into Google Docs, and click on the Upload button at the top left of the page. Click Select files to upload and select your compressed file. For safety’s sake, uncheck the checkbox concerning converting files to Google Docs format, and then click Start upload. Go Online – Through FTP If you have access to an FTP server – perhaps through your web hosting company, or you’ve set up an FTP server on a different machine – you can easily access the FTP server in Ubuntu and transfer files. Just make sure you don’t go over your quota if you have one. You will need to know the address of the FTP server, as well as the login information. Click on Places > Connect to Server… Choose the FTP (with login) Service type, and fill in your information. Adding a bookmark is optional, but recommended. You will be asked for your password. You can choose to remember it until you logout, or indefinitely. You can now browse your FTP server just like any other folder. Drop files into the FTP server and you can retrieve them from any computer with an Internet connection and an FTP client. Conclusion While at first the Ubuntu Live CD environment may seem claustrophobic, it has a wealth of options for connecting to peripheral devices, local computers, and machines on the Internet – and this article has only scratched the surface. Whatever the storage medium, Ubuntu’s got an interface for it!

    Read the article

  • Enterprise 2.0 Conference: Building Social Business

    - by kellsey.ruppel
    The way we work is changing rapidly, offering an enormous competitive advantage to those who embrace the new tools that enable contextual, agile and simplified information exchange and collaboration to distributed workforces and networks of partners and customers. As many of you are aware, Enterprise 2.0 is the term for the technologies and business practices that liberate the workforce from the constraints of legacy communication and productivity tools like email. It provides business managers with access to the right information at the right time through a web of inter-connected applications, services and devices. Enterprise 2.0 makes accessible the collective intelligence of many, translating to a huge competitive advantage in the form of increased innovation, productivity and agility. The Enterprise 2.0 Conference takes a strategic perspective, emphasizing the bigger picture implications of the technology and the exploration of what is at stake for organizations trying to change not only tools, but also culture and process. Beyond discussion of the "why", there will also be in-depth opportunities for learning the "how" that will help you bring Enterprise 2.0 to your business. You won't want to miss this opportunity to learn and hear from leading experts in the fields of technology for business, collaboration, culture change and collective intelligence. Oracle is a proud Gold sponsor of the Enterprise 2.0 Conference, taking place this week in Boston. Come and learn about Oracle at the following panel sessions and Market Leaders Theater Sessions:

    Tuesday, June 19, 2012 at 1:30 p.m. - Market Theater Presentation: Into the Activity Stream, and Beyond! Introducing Oracle Social Network. Oracle Speaker: Christian Finn, Senior Director of Evangelism, Oracle WebCenter
    Tuesday, June 19, 2012 at 2:30 p.m. - Panel Session: Innovation versus Integration. Oracle Panel Speaker: Christian Finn, Senior Director of Evangelism, Oracle WebCenter
    Wednesday, June 20, 2012 at 1:30 p.m. - Business Leadership Roundtable. Oracle Panel Speaker: Christian Finn, Senior Director of Evangelism, Oracle WebCenter
    Wednesday, June 20, 2012 at 3:00 p.m. - Market Theater Presentation: Into the Activity Stream, and Beyond! Introducing Oracle Social Network. Oracle Speaker: Christian Finn, Senior Director of Evangelism, Oracle WebCenter
    Thursday, June 21, 2012 at 8:30 a.m. - Panel Session: Collecting and Processing Big Data: Architecting Systems that Scale. Oracle Panel Speaker: Ashok Joshi, Senior Director, Berkeley DB Development
    Thursday, June 21, 2012 at 11:00 a.m. - Panel Session: The Future of Big Data: What's Next. Oracle Panel Speaker: Ashok Joshi, Senior Director, Berkeley DB Development

    Be sure to stop by and visit Oracle booth #501 to see live demonstrations of Oracle Social Network and Oracle WebCenter!

    Read the article

  • How do I disable unwanted iPXE boot attempt in Libvirt/qemu-kvm?

    - by gertvdijk
    Somehow, after upgrading to 12.04, my virtual machines always try to boot from the network first, even though I don't have any PXE configuration set. I've tried: disabling SPICE, by editing the XML to change the emulator from /usr/bin/kvm-spice to /usr/bin/kvm; pressing Ctrl+B to configure iPXE, but it doesn't let me disable this as a boot option; setting another type of NIC - not an option, I need virtio for performance reasons, and e1000e doesn't work either; removing the NIC - that works, but I need networking. Googling around is hard: most results are about configured PXE boots that fail. It's not a big issue, but it increases boot times by 50-100% here (booting from an SSD), so it's relatively long and it annoys me. How can I disable this and boot directly from the virtual hard disk?
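
    Not from the original question, but for illustration: libvirt lets you pin the boot order to specific devices in the domain XML, which keeps the NIC out of the boot list, and the interface's option ROM can be switched off so iPXE never even loads. Below is a minimal, hypothetical fragment; the disk path, bridge name and device models are placeholders for whatever your domain actually uses, and the usual way to apply something like this is virsh edit <domain>. Treat it as a sketch to compare against your own XML, not a drop-in configuration.

        <!-- Hypothetical fragment of a libvirt domain definition.
             Per-device <boot order='...'/> elements replace any <boot dev='...'/> lines under <os>. -->
        <devices>
          <disk type='file' device='disk'>
            <driver name='qemu' type='qcow2'/>
            <source file='/var/lib/libvirt/images/mydomain.qcow2'/>
            <target dev='vda' bus='virtio'/>
            <boot order='1'/>             <!-- boot straight from the virtual hard disk -->
          </disk>
          <interface type='bridge'>
            <source bridge='br0'/>
            <model type='virtio'/>        <!-- keep virtio for performance -->
            <rom bar='off'/>              <!-- hide the option ROM so no iPXE boot attempt is made -->
            <!-- no <boot order='...'/> here, so the NIC is never offered as a boot device -->
          </interface>
        </devices>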

    Read the article
