Search Results

Search found 2283 results on 92 pages for 'resume improvement'.


  • C++ STL-conforming Allocators

    - by myahya
    What allocators are available out there for use with the STL when dealing with small objects? I have already tried playing with the pool allocators from Boost, but got no performance improvement (actually, in some cases there was considerable degradation).
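    For context, here is a minimal sketch of the kind of Boost pool usage the question refers to - assuming boost::fast_pool_allocator and a node-based container, since pool allocators only pay off where each element is a separate small heap allocation (illustrative code, not the asker's actual test):

        #include <boost/pool/pool_alloc.hpp>
        #include <list>

        struct Small { int a, b; };  // a typical small object

        int main() {
            // Default allocator: one heap allocation per list node.
            std::list<Small> plain;

            // Boost singleton pool: nodes are carved out of larger chunks.
            // Whether this beats the default allocator is workload-dependent,
            // which matches the mixed results described above.
            std::list<Small, boost::fast_pool_allocator<Small>> pooled;

            for (int i = 0; i < 100000; ++i) {
                plain.push_back({i, i});
                pooled.push_back({i, i});
            }
        }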

    Read the article

  • Print directly to CUPS server from non-local clients (Ubuntu 14.04)

    - by OEP
    I set up a CUPS server with a few queues and printing from local clients (the CUPS test page and Samba) seems to work just fine. It seems like the CUPS server is denying non-local clients though:

        130.127.48.70 - - [03/Jun/2014:14:29:19 -0400] "POST /printers/m137 HTTP/1.1" 200 390 Validate-Job successful-ok
        130.127.48.70 - - [03/Jun/2014:14:29:19 -0400] "POST /printers/m137 HTTP/1.1" 200 339 Create-Job client-error-not-authorized
        localhost - - [03/Jun/2014:14:40:50 -0400] "POST /printers/m137 HTTP/1.1" 200 410869 Print-Job successful-ok

    This makes me think I have some sort of host-based restriction in my configuration file, but I can't find it. I've even set my default policy to Allow all only to get the same log message. I'm working from a configuration file which had previously worked on an older version of CUPS, and which looks quite similar to the example cupsd.conf. I could be wrong, but it looks like that final <Limit All> block ought to allow the actions the logs complain about.

        MaxLogSize 2000000000

        # Log general information in error_log - change "info" to "debug" for
        # troubleshooting...
        LogLevel info
        #AccessLog syslog
        #ErrorLog syslog
        #PageLog syslog

        # Administrator user group...
        SystemGroup sys root lp

        # Only listen for connections from the local machine.
        Listen 0.0.0.0:631
        Listen :::631
        Listen /var/run/cups/cups.sock
        ServerName <snipped>

        # Show shared printers on the local network.
        Browsing Off
        BrowseOrder allow,deny
        # (Change '@LOCAL' to 'ALL' if using directed broadcasts from another subnet.)
        BrowseAllow @LOCAL

        # Default authentication type, when authentication is required...
        DefaultAuthType Basic

        # Restrict access to the server...
        <Location />
          Order allow,deny
          Allow all
        </Location>

        # Restrict access to the admin pages...
        <Location /admin>
          AuthType Default
          Require user @SYSTEM
          Encryption Required
          Order allow,deny
          Allow all
        </Location>

        # Restrict access to configuration files...
        <Location /admin/conf>
          AuthType Default
          Require user @SYSTEM
          Encryption Required
          Order allow,deny
          Allow all
        </Location>

        # Set the default printer/job policies...
        <Policy default>
          # Job-related operations must be done by the owner or an administrator...
          <Limit Send-Document Send-URI Hold-Job Release-Job Restart-Job Purge-Jobs Set-Job-Attributes Create-Job-Subscription Renew-Subscription Cancel-Subscription Get-Notifications Reprocess-Job Cancel-Current-Job Suspend-Current-Job Resume-Job CUPS-Move-Job>
            Require user @OWNER @SYSTEM
            Order deny,allow
          </Limit>

          # All administration operations require an administrator to authenticate...
          <Limit CUPS-Add-Modify-Printer CUPS-Delete-Printer CUPS-Add-Modify-Class CUPS-Delete-Class CUPS-Set-Default>
            AuthType Default
            Require user @SYSTEM
            Order deny,allow
          </Limit>

          # All printer operations require a printer operator to authenticate...
          <Limit Pause-Printer Resume-Printer Enable-Printer Disable-Printer Pause-Printer-After-Current-Job Hold-New-Jobs Release-Held-New-Jobs Deactivate-Printer Activate-Printer Restart-Printer Shutdown-Printer Startup-Printer Promote-Job Schedule-Job-After CUPS-Accept-Jobs CUPS-Reject-Jobs>
            AuthType Default
            Require user @SYSTEM
            Order deny,allow
          </Limit>

          # Only the owner or an administrator can cancel or authenticate a job...
          <Limit Cancel-Job CUPS-Authenticate-Job>
            Require user @OWNER @SYSTEM
            Order deny,allow
          </Limit>

          <Limit All>
            Order allow,deny
          </Limit>
        </Policy>
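    One hedged observation on the configuration above, assuming standard CUPS access-control semantics (a possible lead, not a confirmed fix): Create-Job is not named in any of the preceding <Limit> blocks, so it falls through to the final <Limit All>. That block specifies Order allow,deny without any Allow directive, which CUPS evaluates as "deny all"; the localhost Print-Job may still succeed because it arrives over the domain socket. If that reading is right, a sketch of a change would be:

        <Limit All>
          Order allow,deny
          Allow all
        </Limit>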

    Read the article

  • Oracle Enterprise Manager 11g Application Management Suite for Oracle E-Business Suite Now Available

    - by chung.wu
    Oracle Enterprise Manager 11g Application Management Suite for Oracle E-Business Suite is now available. The management suite combines features that were available in the standalone Application Management Pack for Oracle E-Business Suite and Application Change Management Pack for Oracle E-Business Suite with Oracle's market-leading real user monitoring and configuration management capabilities to provide the most complete solution for managing E-Business Suite applications. The features that were available in the standalone management packs are now packaged into Oracle E-Business Suite Plug-in 4.0, which is now fully certified with Oracle Enterprise Manager 11g Grid Control. This latest plug-in extends Grid Control with E-Business Suite specific management capabilities and features enhanced change management support. In addition, this latest release of Application Management Suite for Oracle E-Business Suite also includes numerous real user monitoring improvements.

    General Enhancements

    This new release of Application Management Suite for Oracle E-Business Suite offers the following key capabilities:

    - Oracle Enterprise Manager 11g Grid Control Support: All components of the management suite are certified with Oracle Enterprise Manager 11g Grid Control.
    - Built-in Diagnostic Ability: This release has numerous major enhancements that provide the necessary intelligence to determine if the product has been installed and configured correctly. There are diagnostics for Discovery, Cloning, and User Monitoring that will validate whether the appropriate patches, privileges, setups, and profile options have been configured. This feature shortens the setup and configuration time needed to become operational.

    Lifecycle Automation Enhancements

    Application Management Suite for Oracle E-Business Suite provides a centralized view to monitor and orchestrate changes (both functional and technical) across multiple Oracle E-Business Suite systems. In this latest release, it provides even more control and flexibility in managing Oracle E-Business Suite changes.

    Change Management:
    - Built-in Diagnostic Ability: This latest release has numerous major enhancements that provide the necessary intelligence to determine if the product has been installed and configured correctly. There are diagnostics for Customization Manager, Patch Manager, and Setup Manager that will validate whether the appropriate patches, privileges, setups, and profile options have been configured, shortening the setup and configuration time needed to become operational.

    Customization Manager:
    - Multi-Node Custom Application Registration: This feature automates the process of registering and validating custom products/applications on every node in a multi-node EBS system.
    - Public/Private File Source Mappings and E-Business Suite Mappings: File Source Mappings and E-Business Suite Mappings can be created and marked as public or private. Only the creator/owner can define/edit his/her own mappings. Users can use public mappings, but cannot edit or change settings.
    - Test Checkout Command for Versions: This feature allows you to test/verify checkout commands at the version level within the File Source Mapping page.
    - Prerequisite Patch Validation: You can specify prerequisite patches for Customization packages and for Release 12 Oracle E-Business Suite packages.
    - Destination Path Population: You can now automatically populate the Destination Path for common file types during package construction.
    - OAF File Type Support: Ability to package Oracle Application Framework (OAF) customizations and deploy them across multiple Oracle E-Business Suite instances.
    - Extended PLL Support: Ability to distinguish between different types of PLLs (that is, Report and Forms PLL files), providing better granularity when managing PLL objects.
    - Enhanced Standard Checker: Provides a greater, more comprehensive list of coding standards that are verified during the package build process (for example, File Driver exceptions, Java checks, XML checks, SQL checks, etc.).
    - HTML Package Readme: The package Readme is in HTML format and includes the file listing.
    - Advanced Package Search Capabilities: The ability to utilize more criteria within the advanced package search (that is, Public, Last Updated by, File Source Mapping, and E-Business Suite Mapping).
    - Enhanced Package Build Notifications: More detailed information on the results of a package build process, and better, more detailed troubleshooting guidance in the event of build failures.

    Patch Manager:
    - Staged Patches: Ability to run Patch Manager with no external internet access. Customers can download Oracle E-Business Suite patches into a shared location for Patch Manager to access and apply. Supports highly secured production environments that prohibit external internet connections.
    - Support for Superseded Patches: Automatic check for superseded patches, allowing users to easily add superseded patches into the Patch Run. This yields more comprehensive and correct Patch Runs, removes many manual and laborious tasks, and frees up Apps DBAs for higher value-added tasks.
    - Automatic Primary Node Identification: Users can now specify which is the "primary node" (that is, which node hosts the Shared APPL_TOP) during the Patch Run interview process; available for Release 12 only.

    Setup Manager:
    - Preview Extract Results: Ability to execute an extract in "proof mode" and examine the query results to determine accuracy. Used in conjunction with the "where" clause in Advanced Filtering, this feature can provide better and more accurate fine-tuning of extracts.
    - Use Uploaded Extracts in New Projects: Ability to incorporate uploaded extracts in new projects via new LOV fields in package construction. Leverages the Setup Manager repository to access extracts that have been uploaded, allowing customers to reuse uploaded extracts to provision new instances.
    - Re-use Existing (that is, historical) Extracts in New Projects: Ability to incorporate existing extracts in new projects via new LOV fields in package construction. Leverages the Setup Manager repository to access point-in-time extracts (snapshots) of configuration data. Allows customers to reuse existing extracts to provision new instances, and enables comparative historical reporting of identical APIs executed at different times.
    - Support for BR100 formats: Setup Manager can now automatically produce reports in the BR100 format - native support for an industry-standard format.
    - Concurrent Manager API Support: General Foundation now provides an API for management of Concurrent Manager configuration data, with the ability to migrate Concurrent Managers from one instance to another. Complete the setup once and never again; no need to redefine the Concurrent Managers.

    User Experience Management Enhancements

    Application Management Suite for Oracle E-Business Suite includes comprehensive capabilities for user experience management, supporting both real user and synthetic transaction based user monitoring techniques. This latest release of the management suite includes numerous improvements in real user monitoring support.

    - KPI Reporting: Configurable decimal precision for reporting of KPI and SLA values (two decimal places by default). It is now also possible to view KPI numerator and denominator information, and to have it available for export.
    - Content Messages Processing: The application content message facility has been extended to distinguish between notifications and errors. In addition, it is now possible to specify matching rules that can be used to refine a selected content message specification. Note this is only available for XPath-based (not literal) message contents.
    - Data Export: The enriched data export facility has been significantly enhanced to provide improved performance and accessibility. Data is no longer stored within XML-based files, but within the Reporter database (an alternative database can be configured for its storage). Access to the export data is through SQL. With this enhancement, it is now easier than ever to use tools such as Oracle Business Intelligence Enterprise Edition to analyze correlated data collected from real user monitoring and business data sources.
    - SNMP Traps for System Events: Previously, the SNMP notification facility was only available for KPI alerting. It has now been extended to support the generation of SNMP traps for system events, to provide external health monitoring of the RUEI system processes.
    - Performance Improvements: The dashboard facility has been enhanced to support the parallel loading of items; for dashboards containing large numbers of items, this can result in a significant performance improvement. The User Preferences facility has been extended to allow you to specify the initial period selection when first entering the Data Browser or reports facility (the default is the last hour). There is also a performance improvement when querying the all-sessions group.

    Technical Prerequisites, Download and Installation Instructions

    The Linux version of the plug-in is available for immediate download from Oracle Technology Network or Oracle eDelivery. For specific information regarding technical prerequisites, product download and installation, please refer to My Oracle Support note 1224313.1. The following certifications are in progress:

    * Oracle Solaris on SPARC (64-bit) (9, 10)
    * HP-UX Itanium (11.23, 11.31)
    * HP-UX PA-RISC (64-bit) (11.23, 11.31)
    * IBM AIX on Power Systems (64-bit) (5.3, 6.1)

    Read the article

  • Refreshing Your PC Won’t Help: Why Bloatware is Still a Problem on Windows 8

    - by Chris Hoffman
    Bloatware is still a big problem on new Windows 8 and 8.1 PCs. Some websites will tell you that you can easily get rid of manufacturer-installed bloatware with Windows 8's Reset feature, but they're generally wrong. This junk software often turns the process of powering on your new PC from what could be a delightful experience into a tedious slog, forcing you to spend hours cleaning up your new PC before you can enjoy it.

    Why Refreshing Your PC (Probably) Won't Help

    Manufacturers install software along with Windows on their new PCs. In addition to hardware drivers that allow the PC's hardware to work properly, they install more questionable things like trial antivirus software and other nagware. Much of this software runs at boot, cluttering the system tray and slowing down boot times, often dramatically. Software companies pay computer manufacturers to include this stuff. It's installed to make the PC manufacturer money at the cost of making the Windows computer worse for actual users.

    Windows 8 includes "Refresh Your PC" and "Reset Your PC" features that allow Windows users to quickly get their computers back to a fresh state. It's essentially a quick, streamlined way of reinstalling Windows. If you install Windows 8 or 8.1 yourself, the Refresh operation will give your PC a clean Windows system without any additional third-party software. However, Microsoft allows computer manufacturers to customize their Refresh images. In other words, most computer manufacturers will build their drivers, bloatware, and other system customizations into the Refresh image. When you Refresh your computer, you'll just get back to the factory-provided system, complete with bloatware.

    It's possible that some computer manufacturers aren't building bloatware into their Refresh images in this way. It's also possible that, when Windows 8 came out, some computer manufacturers didn't realize they could do this and that refreshing a new PC would strip the bloatware. However, on most Windows 8 and 8.1 PCs, you'll probably see bloatware come back when you refresh your PC.

    It's easy to understand how PC manufacturers do this. You can create your own Refresh images on Windows 8 and 8.1 with just a simple command (see the sketch at the end of this article), replacing Microsoft's image with a customized one. Manufacturers can install their own Refresh images in the same way. Microsoft doesn't lock down the Refresh feature.

    Desktop Bloatware is Still Around, Even on Tablets!

    Not only is typical Windows desktop bloatware not gone, it has tagged along with Windows as it moves to new form factors. Every Windows tablet currently on the market - aside from Microsoft's own Surface and Surface 2 tablets - runs on a standard Intel x86 chip. This means that every Windows 8 and 8.1 tablet you see in stores has a full desktop with the capability to run desktop software. Even if that tablet doesn't come with a keyboard, it's likely that the manufacturer has preinstalled bloatware on the tablet's desktop. Yes, that means that your Windows tablet will be slower to boot and have less memory because junk and nagging software will be on its desktop and in its system tray. Microsoft considers tablets to be PCs, and PC manufacturers love installing their bloatware. If you pick up a Windows tablet, don't be surprised if you have to deal with desktop bloatware on it.

    Microsoft Surfaces and Signature PCs

    Microsoft is now selling their own Surface PCs that they built themselves - they're now a "devices and services" company, after all, not a software company. One of the nice things about Microsoft's Surface PCs is that they're free of the typical bloatware. Microsoft won't take money from Norton to include nagging software that worsens the experience. If you pick up a Surface device that provides Windows 8.1 or 8 as Microsoft intended it - or install a fresh Windows 8.1 or 8 system - you won't see any bloatware.

    Microsoft is also continuing their Signature program. New PCs purchased from Microsoft's official stores are considered "Signature PCs" and don't have the typical bloatware. For example, the same laptop could be full of bloatware in a traditional computer store and clean, without the nasty bloatware, when purchased from a Microsoft Store. Microsoft will also continue to charge you $99 if you want them to remove your computer's bloatware for you - that's the more questionable part of the Signature program.

    Windows 8 App Bloatware is an Improvement

    There's a new type of bloatware on new Windows 8 systems, which is thankfully less harmful. This is bloatware in the form of included "Windows 8-style", "Store-style", or "Modern" apps in the new, tiled interface. For example, Amazon may pay a computer manufacturer to include the Amazon Kindle app from the Windows Store. (The manufacturer may also just receive a cut of book sales for including it. We're not sure how the revenue sharing works - but it's clear PC manufacturers are getting money from Amazon.) The manufacturer will then install the Amazon Kindle app from the Windows Store by default. This included software is technically some amount of clutter, but it doesn't cause the problems older types of bloatware do. It won't automatically load and delay your computer's startup process, clutter your system tray, or take up memory while you're using your computer. For this reason, a shift to including new-style apps as bloatware is a definite improvement over older styles of bloatware. Unfortunately, this type of bloatware has not replaced traditional desktop bloatware, and new Windows PCs will generally have both.

    Windows RT is Immune to Typical Bloatware, But...

    Microsoft's Windows RT can't run standard desktop software, so it's immune to traditional bloatware. Just as you can't install your own desktop programs on it, the Windows RT device's manufacturer can't install their own desktop bloatware. While Windows RT could be an antidote to bloatware, this advantage comes at the cost of being able to install any type of desktop software at all. Windows RT has also seemingly failed - while a variety of manufacturers came out with their own Windows RT devices when Windows 8 was first released, they've all since been withdrawn from the market. Manufacturers who created Windows RT devices have criticized it in the media and stated they have no plans to produce any future Windows RT devices. The only Windows RT devices still on the market are Microsoft's Surface (originally named Surface RT) and Surface 2. Nokia is also coming out with their own Windows RT tablet, but they're in the process of being purchased by Microsoft. In other words, Windows RT just isn't a factor when it comes to bloatware - you wouldn't get a Windows RT device unless you purchased a Surface, but those don't come with bloatware anyway.

    Removing Bloatware or Reinstalling Windows 8.1

    While bloatware is still a problem on new Windows systems and the Refresh option probably won't help you, you can still eliminate bloatware in the traditional way. Bloatware can be uninstalled from the Windows Control Panel or with a dedicated removal tool like PC Decrapifier, which tries to automatically uninstall the junk for you. You can also do what Windows geeks have always tended to do with new computers - reinstall Windows 8 or 8.1 from scratch with installation media from Microsoft. You'll get a clean Windows system and you can install only the hardware drivers and other software you need.

    Unfortunately, bloatware is still a big problem for Windows PCs. Windows 8 tries to do some things to address bloatware, but it ultimately comes up short. Most Windows PCs sold in most stores to most people will still have the typical bloatware slowing down the boot process, wasting memory, and adding clutter.

    Image Credit: LG on Flickr, Intel Free Press on Flickr, Wilson Hui on Flickr, Intel Free Press on Flickr, Vernon Chan on Flickr
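    For reference, a hedged sketch of the "simple command" alluded to above: Windows 8 and 8.1 ship a recimg tool for managing custom Refresh images (the directory name here is just an example):

        rem Capture the current installation as a custom Refresh image
        rem and register it as the active one.
        recimg /createimage C:\RefreshImage

        rem Show which image Refresh will currently use.
        recimg /showcurrent

    A manufacturer (or you, after cleaning up a PC) can point Refresh at any image in this way, which is why the feature offers no guarantee of a bloatware-free result.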

    Read the article

  • MSCC: Scripting - Administrator's toolbox of magic...

    Finally, we made it to have our April meetup - in May. The most obvious explanation is the increased amount of open source and IT activities that either the MSCC, the Linux User Group of Mauritius (LUGM), or the University of Mauritius Student's Computer Club is organising. It's absolutely incredible to see the recent hype of events here on the island. And I'm loving it!

    Unfortunately, we also had to deal with arranging for a location this time. It was kind of an odyssey, as my requests (and phone calls) weren't answered even though I tried several times - well, kind of disappointing, and I have to look into that for future gatherings. In my opinion, it is essential that two parameters of a community meeting are fixed as early as possible: location, and date and time. You can't just change one or both at the very last minute. Well, this time we had to do it due to unforeseen reasons, and I apologise to any MSCC member who couldn't make it to our April meetup. Okay, lesson learned, but now back to the actual meetup report...

    Shortly after the meeting I placed the following statement as my first impression: "Spontaneous and improvised :) No, seriously, Ish and Dan had well prepared presentations on shell scripting, mainly focused towards Bourne Again Shell (bash), and the pros and cons of scripting versus actually writing something in a decent programming language. I thought that I could cut myself out of the equation but the demand for information about PowerShell was higher than expected..."

    Well, it turned out that the interest in Windows PowerShell was high, as I even got a couple of questions on it via social media networks during the evening. I'd also like to mention that the number of attendees went back to what I would call a "standard" level of participation. This time there were 12 craftsmen, but again a good number of first-timers.

    Reactions of other attendees

    Here are some impressions and feedback from our participants:

    "Enjoyed the bash and powershell (linux / windows) presentations ..." -- Nadim on event comments

    "He [Daniel] also showed us some syntax loopholes in Bash that could leave someone with bad code." -- Ish on MSCC - Let's talk about Scripting

    Glad to see a couple of first-time attendees, especially students from the university itself.

    Some details on the presentations

    MSCC: First time visit at the University of Mauritius - Phase II Engineering Tower, room 2.9

    Gimme some love ... bash and other shells

    Ish gave a great introduction into shell scripting as he spoke about existing shell environments and a little bit about their history. Furthermore, he talked about various built-in commands, the use of coreutils, the ability to daisy-chain multiple commands using pipes, and the importance of the standard I/O streams and their file descriptors in advanced scripting techniques. Combined with a couple of sample statements in the Linux terminal on an Ubuntu 14.04 machine, it was a solid presentation. Have a closer look at his slides - published on his blog on MSCC - Let's talk about Scripting.

    Oddities of scripting

    After the brief introduction into bash it was Daniel's turn to highlight a good number of oddities when working with shell scripts. First of all, it should be clear that scripting is not meant for full software implementations but simply to automate administrative procedures and to simplify routine jobs on a system. One of the cool oddities that he mentioned is that everything (!)
    in a shell is represented by strings; there are no other types like integer, float, date-time, etc. that you'd like to use in a full-fledged programming language. Let's have a look at his sample: more to come... What's the output? (A stand-in bash sketch follows at the end of this post.)

    As a conclusion, Daniel suggests that shell scripting should be limited, but not restricted, to automating repetitive command stacks and batch jobs, startup wrappers for applications in order to set up the execution environment, and other not-too-sophisticated jobs. But as soon as it might involve a little bit more logic, or you might rely on performance, it's better to write an application in Ruby, Python, or Perl (among others, of course). This also enables the possibility to test your code properly.

    MSCC: Ish talking about Bourne Again Shell (bash) and shell scripting to automate regular tasks
    MSCC: Daniel gives an overview about the pros and cons of shell scripting versus programming
    MSCC: PowerShell as your scripting solution on Windows operating systems

    The path of the Enlightened is long ... and tough.

    Honestly, even though PowerShell was mentioned without any further details on the meetup's agenda, I didn't expect that there would be demand to give a presentation on Microsoft PowerShell after all. I had already taken this topic out of the announcement, but the audience wanted to have some information. Okay, then let's see what I could do - improvised style.

    While my machine booted and got hooked up to the projector, I started to talk about the beginnings of PowerShell back in 2006, and its predecessors MS-DOS and the Command Prompt. A throwback in history... always good for young people. As usual, Microsoft didn't get it at that time. Instead of listening to their clients' needs and demands, they ignored the need to administer Windows server farms without any UI tools. PowerShell is actually a result of this: seeing that shell scripting has been a common, reliable and fast part of an administrator's toolbox for decades, Microsoft had to move from their Microsoft Management Console (MMC) to a broader approach. It's not like shell scripting was something new; it is in daily use on alternative operating systems like AIX, HP-UX, Solaris, and last but not least Linux. Most interestingly, Microsoft is very good at renovating existing architectures, and over the years PowerShell not only replaced their own combination of the Command Prompt and the scripting hosts (VBScript and CScript) but really turned into a challenging competitor on the market. The shell is easy to extend with cmdlets, and open to other Microsoft products like SQL Server and SharePoint, as well as third-party software applications. Similar to MMC, PowerShell also offers the ability to administer other machines remotely - only without a graphical user interface, and therefore it's easier to automate and schedule regular tasks.

    Following is a sample of a PowerShell script file (extension .ps1):

        $strComputer = "."
        $colItems = get-wmiobject -class Win32_BIOS -namespace root\CIMV2 -comp $strComputer
        foreach ($objItem in $colItems) {
            write-host "BIOS Characteristics: " $objItem.BiosCharacteristics
            write-host "BIOS Version: " $objItem.BIOSVersion
            write-host "Build Number: " $objItem.BuildNumber
            write-host "Caption: " $objItem.Caption
            write-host "Code Set: " $objItem.CodeSet
            write-host "Current Language: " $objItem.CurrentLanguage
            write-host "Description: " $objItem.Description
            write-host "Identification Code: " $objItem.IdentificationCode
            write-host "Installable Languages: " $objItem.InstallableLanguages
            write-host "Installation Date: " $objItem.InstallDate
            write-host "Language Edition: " $objItem.LanguageEdition
            write-host "List Of Languages: " $objItem.ListOfLanguages
            write-host "Manufacturer: " $objItem.Manufacturer
            write-host "Name: " $objItem.Name
            write-host "Other Target Operating System: " $objItem.OtherTargetOS
            write-host "Primary BIOS: " $objItem.PrimaryBIOS
            write-host "Release Date: " $objItem.ReleaseDate
            write-host "Serial Number: " $objItem.SerialNumber
            write-host "SMBIOS BIOS Version: " $objItem.SMBIOSBIOSVersion
            write-host "SMBIOS Major Version: " $objItem.SMBIOSMajorVersion
            write-host "SMBIOS Minor Version: " $objItem.SMBIOSMinorVersion
            write-host "SMBIOS Present: " $objItem.SMBIOSPresent
            write-host "Software Element ID: " $objItem.SoftwareElementID
            write-host "Software Element State: " $objItem.SoftwareElementState
            write-host "Status: " $objItem.Status
            write-host "Target Operating System: " $objItem.TargetOperatingSystem
            write-host "Version: " $objItem.Version
            write-host
        }

    This gives you information about your BIOS and Windows OS. Then change the computer name to another one on your network (NetBIOS-based) and run the script again. There are lots of samples and tutorials at the Microsoft Script Center, and I would advise you to pay a visit over there if you are more interested in PowerShell. The Script Center provides the download links, too.

    Upcoming Events

    What are the upcoming events here in Mauritius? So far, we have the following ones (incomplete list as usual) in chronological order:

    - Hacking Defence (14. May 2014)
    - WebCup Maurice (7. & 8. June 2014)
    - Developers Conference (TBA ~ July 2014)
    - Linuxfest 2014 (TBA ~ November 2014)

    Hopefully, there will be more announcements during the next couple of weeks and months. If you know about any other event, like a bootcamp, a code challenge or a hackathon here in Mauritius, please drop me a note in the comment section below this article. Thanks!

    My resume of the day

    Spontaneous and improvised :) The new location at the University of Mauritius turned out very well, there is plenty of space, and it could be a good choice for future meetings. Especially having the ability to get more and more students into our IT community sounds like a great opportunity. Later during the day, I got some promising mails from Nadim regarding future sessions at the local branch of the Middlesex University. Well, we will see in the future... But for now this will be on hold until approximately October, when students resume their regular studies. Anyway, it was a good experience at the university, and thanks again to the UoM Student's Computer Club that made the necessary arrangements for the MSCC!
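    Since Daniel's actual sample is not reproduced above, here is a hedged stand-in illustrating the "everything is a string" oddity (my own example, not his slide):

        #!/bin/bash
        x=1
        y=2
        echo "$x + $y"            # prints "1 + 2" - variables are plain strings
        echo $(( x + y ))         # prints "3" - arithmetic needs an explicit context
        if [ "$x" = "1.0" ]; then # string comparison, not numeric!
            echo "equal"
        else
            echo "different"      # prints "different", although 1 == 1.0 numerically
        fi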

    Read the article

  • Visiting the Emtel Data Centre

    Back in February, at the first event of the Emtel Knowledge Series (EKS), I spoke to various people at Emtel about their data centre here on the island. I was trying to see whether it would be possible to arrange a meeting over there for a selected group of our community members. Well, let's say it like this... my first approach wasn't that promising and far from successful, but during the following months there were more and more occasions to get in touch with the "right" contact persons at Emtel to make it happen...

    Setting up an appointment and pre-requisites

    The major improvement came during a Boot Camp for Windows Phone 8.1 app development organised by Microsoft Indian Ocean Islands in cooperation with Emtel at Emtel World, Ebene. Apart from learning bits and pieces regarding Universal Apps, I took the opportunity to get in touch with Arvin Lockee, Sales Executive - Data, during our lunch break. And this really kicked off the whole procedure. Prior to getting access to the Emtel data centre, you are requested to provide the full name and national ID of anyone going to visit. Also, it should be noted that there was only a limited number of seats available. Anyway, packed with this information, I posted through the usual social media channels. Responses came in very quickly, and on a first-come, first-served (FCFS) basis I noted down the details and forwarded them to Emtel in order to fix a date and time for the visit. In preparation on our side, all attendees exchanged contact details and we organised transport options to go to the data centre in Arsenal. The day before and on the day of our meeting, Arvin sent me a reminder to check whether everything was still confirmed and ready to go... Of course it was!

    Arriving at the Emtel Data Centre

    As I was coming from Flic en Flac towards the north, we agreed that I would pick up a couple of young fellows near the old post office in Port Louis. All went well, except that Sean eventually might be living in another time zone compared to the rest of us. Anyway, after an extended stop we were complete and arrived just in time in Arsenal to meet and greet Ish and Veer. Again, Emtel takes access procedures to its data centre very seriously, and the gate stayed closed until all our IDs had been noted and compared to the list of registered attendees. Despite having a good laugh at the mixture of old and new ID cards, it was a straight-forward process. The guard was very helpful and guided us to the waiting area at the entrance section of the building. Shortly after, we were welcomed by Kamlesh Bokhoree, the Data Centre Officer. He gave us a brief introduction into the rules and regulations during our visit: no photography allowed, no touching the buttons, and following his instructions throughout the whole visit. Of course!

    Inside the data centre

    Next, he explained to us the multi-factor authentication system, using a combination of biometric data, like a fingerprint reader, and a "classic" PIN panel. The Emtel data centre provides multiple services, and next to co-location for your own hardware they also offer storage options for your backup and archive data in their massive, fire-resistant vault. Very impressive to get to know about the considerations that went into choosing the right location and how to set up the whole premises. It should also be noted that there is 24/7 CCTV surveillance inside and outside the buildings.

    Strengths of the Emtel TIER 3 Data Centre, Mauritius

    Finally, we were guided into the first server room. And wow, the whole setup is cleverly planned and outlined in the architecture: from the false floors and ceilings in order to provide optimum air flow, over to the separation of cold and hot aisles between the full-size server racks, and of course the monitored air conditions in order to analyse and watch changes in temperature, smoke detection and other parameters. And, not surprisingly, everything has been implemented in two independent circuits. There is a standardised classification for the construction and operation of data centres world-wide, and Emtel's has been designed to be a TIER 4 building, but due to the lack of an alternative power supplier on the island it is officially registered as a TIER 3 compliant data centre. Maybe in the long run there might be a second supplier of energy next to CEB... time will tell. Luckily, the data centre is integrated into the national fibre-optic gigabit ring, and Emtel already connects internationally through diverse undersea cable routes like SAFE and LION/LION2 out of Mauritius, and through several other providers for onward connectivity.

    Meanwhile, Arvin managed to join our little group of geeks, and he supported Kamlesh in answering our technical questions regarding the capacities and general operation of the data centre. Visiting the NOC and its dedicated team of IT professionals was surely one of the visual highlights. Seeing their wall of screens to monitor any kind of activity on the data lines, the managed servers and the activity in and around the building was great. Even though I have been using a multi-head setup for years, I cannot keep up with that setup... ;-) But I got a couple of ideas on how to improve my work spaces here at the office.

    Clear advantages of hosting your e-commerce and mobile backends locally

    After the completely isolated NOC area, we continued our Q&A session with Kamlesh and Arvin in the second server room, which is dedicated to shared environments. At first sight, it should be well noted that there is lots of space for full-sized racks and therefore for co-location of your own hardware. Actually, given the feedback that there will be upcoming changes in prices, the facilities at the Emtel data centre are getting more and more competitive and interesting for local companies, especially small and medium enterprises. After seeing this world-class infrastructure available on the island, I'm already considering moving one of my root servers from abroad to be co-located here on the island. This would provide an improved user experience in terms of site performance and latency - a good improvement, especially for upcoming e-commerce solutions for two of my local clients.

    Later on, we actually started a conversation about additional services that could be a catalyst for the local market in order to attract more small and medium companies to take the data centre into their evaluations regarding online activities. Until today, Emtel does not provide virtualised server environments, but there might be plans to cover this field in the future. Emtel is a mobile operator and internet connectivity provider in the first place; entering a market of managed and virtualised server infrastructures, including capacities in terms of cloud storage and computing, is rather new, and there is a continuous learning curve at Emtel, too. You cannot just jump into a new market and see how it works out... And I appreciate Emtel's approach of building a solid foundation first and then adding new services on top of that - Emtel as a future one-stop-shop service provider for all your internet and telecommunications needs.

    Emtel's promotional video about their TIER 3 data centre in Arsenal, Mauritius

    More details are thoroughly described in Emtel's brochure of their data centre. Check out their PDF document here.

    Thanks for this opportunity

    Visiting and walking through the Emtel data centre for more than 2 hours was a great experience. As a representative of the Mauritius Software Craftsmanship Community (MSCC) I would like to thank everyone at Emtel involved in the process of making it happen, and especially Arvin Lockee and Kamlesh Bokhoree for their time and patience in explaining the infrastructure and answering all the endless questions from our members. Thank You!

    Read the article

  • CodePlex Daily Summary for Sunday, June 01, 2014

    CodePlex Daily Summary for Sunday, June 01, 2014

    Popular Releases

    - Sandcastle Help File Builder: Help File Builder and Tools v2014.5.31.0: General Information. IMPORTANT: On some systems, the content of the ZIP file is blocked and the installer may fail to run. Before extracting it, right click on the ZIP file, select Properties, and click on the Unblock button if it is present in the lower right corner of the General tab in the properties dialog. This release completes removal of the branding transformations and implements the new VS2013 presentation style that utilizes the new lightweight website format. Several breaking cha...
    - Tooltip Web Preview: ToolTip Web Preview: Version 1.0
    - Database Helper: Release 1.0.0.0: First Release of Database Helper
    - CoMaSy: CoMaSy1.0.2: Contact Management System
    - Image View Slider: Image View Slider: This is a .NET component created with VB.NET. Here you can use an Image Viewer with several properties in your application form. We invite anybody to improve it freely. Try this out! Authors: Steven Renaldo Antony, Yustinus Arjuna Purnama Putra, Andre Wijaya P, Martin Lidau. PBK GENAP 2014 - TI UKDW
    - Aspose for Apache POI: Missing Features of Apache POI WP - v 1.1: Release contains the missing features in the Apache POI WP SDK in comparison with Aspose.Words for dealing with Microsoft Word. What's new? The following examples: Insert Picture in Word Document; Insert Comments; Set Page Borders; Mail Merge from XML Data Source; Moving the Cursor. Feedback and suggestions: many more examples are yet to come here, keep visiting us. Raise your queries and suggest more examples via Aspose Forums or via this social coding site.
    - babelua: V1.5.6.0: V1.5.6.0 - 2014.5.30. New features: supports quick-cocos2d-x projects now; supports text search in the scripts folder now, available in the Search Result Window
    - SEToolbox: 01.032.014 Release 1: Added fix when loading game Textures for icons causing 'Unable to read beyond the end of the stream'. Added new Resource Report, that displays all in-game resources in a concise report. Added in temp directory cleaner, to keep excess files from building up. Fixed use of colors on the windows, to work better with desktop schemes. Adding base support for multilingual resources. This will allow loading of the Space Engineers resources to show localized names, and display localized date a...
    - ClosedXML - The easy way to OpenXML: ClosedXML 0.71.2: More memory and performance improvements. Fixed an issue with pivot table field order.
    - Fancontroller: Fancontroller: Initial release
    - Vi-AIO SearchBar: Vi – AIO Search Bar: Version 1.0
    - Top Verses ( Ayat Emas ): Binary Top Verses: This is the bin folder of the component; the .dll component is inside.
    - Traditional Calendar Component: Traditional Calender Converter: Duta Wacana Christian University. This file contains the Traditional Calendar Component and a demo application that uses it. This component was made with .NET Framework 4 and the programming language is C#.
    - SQLSetupHelper: 1.0.0.0: First Stable Version of SQL Setup
    - Composite Iconote: Composite Iconote: This composite has been made with Microsoft Visual Studio 2013. Requirement: To develop this composite or use this component in your application, your computer must have .NET Framework 4.5 or newer.
    - Magick.NET: Magick.NET 6.8.9.101: Magick.NET linked with ImageMagick 6.8.9.1. Breaking changes: Int/short Set methods of WritablePixelCollection are now unsigned; the Q16 build no longer uses HDRI, switch to the new Q16-HDRI build if you need HDRI.
    - fnr.exe - Find And Replace Tool: 1.7: Bug fixes. Refactored logic for encoding text values to the command line to handle common edge cases where a find/replace operation works in the GUI but not in the command line. Fix for a bug where the selection in the Encoding drop down was different when generating the command line in some cases, as reported in https://findandreplace.codeplex.com/workitem/34. Fix for "Backslash inserted before dot in replacement text" reported at https://findandreplace.codeplex.com/discussions/541024. Fix for finding replacing...
    - VG-Ripper & PG-Ripper: VG-Ripper 2.9.59: Changes - NEW: Added Support for 'GokoImage.com', 'ViperII.com', 'PixxxView.com', 'ImgRex.com', 'PixLiv.com', 'imgsee.me' and 'ImgS.it' links
    - Toolbox for Dynamics CRM 2011/2013: XrmToolBox (v1.2014.5.28): XrmToolBox updates (v1.2014.5.28): fix for connecting to a connection with custom authentication without a saved password. Tools improvement - new tool! Solution Components Mover (v1.2014.5.22) transfers solution components from one solution to another; Import/Export NN relationships (v1.2014.3.7) allows you to import and export many-to-many relationships. Tools updates: Attribute Bulk Updater (v1.2014.5.28), Audit Center (v1.2014.5.28), View Layout Replicator (v1.2014.5.28), Scrip...
    - Microsoft Ajax Minifier: Microsoft Ajax Minifier 5.10: Fix for Issue #20875 - echo switch doesn't work for CSS. CSS should honor the SASS source-file comments. JS should allow multi-line comment directives.

    New Projects

    - Cet MicroWPF: WPF-like library for simple graphic-UI applications using Netduino (Plus) 2 and the FTDI FT800 Eve board.
    - Fakemons: Some Fakemons, powered by XML, XSLT, CSS and Javascript
    - Fling OS: Fling OS is a C# operating system project aiming to create a new, managed operating system from the ground up.
    - MudRoom: Experimental tool sets in mud parsing and area definition
    - OOP-2112110158: My name's NgocDung
    - RoslynResearch: Roslyn Research
    - THD - Control de Usuarios: user management and permissions
    - WPF Kinect User Controls: WPF Kinect User Control project provides simple Tilt and Skeleton Tracking Parameter Controls.

    Read the article

  • C# Performance Pitfall – Interop Scenarios Change the Rules

    - by Reed
    C# and .NET, overall, really do have fantastic performance in my opinion. That being said, the performance characteristics dramatically differ from native programming, and take some relearning if you're used to doing performance optimization in most other languages, especially C, C++, and similar. However, there are times when revisiting tricks learned in native code plays a critical role in performance optimization in C#. I recently ran across a nasty scenario that illustrated to me how dangerous following any fixed rules for optimization can be...

    The rules in C# when optimizing code are very different than in C or C++. Often, they're exactly backwards. For example, in C and C++, lifting a variable out of loops in order to avoid memory allocations often can have huge advantages. If some function within a call graph is allocating memory dynamically, and that gets called in a loop, it can dramatically slow down a routine. This can be a tricky bottleneck to track down, even with a profiler. Looking at the memory allocation graph is usually the key for spotting this routine, as it's often "hidden" deep in the call graph. For example, while optimizing some of my scientific routines, I ran into a situation where I had a loop similar to:

        for (i = 0; i < numberToProcess; ++i)
        {
            // Do some work
            ProcessElement(element[i]);
        }

    This loop was at a fairly high level in the call graph, and often could take many hours to complete, depending on the input data. As such, any performance optimization we could achieve would be greatly appreciated by our users. After a fair bit of profiling, I noticed that a couple of function calls down the call graph (inside of ProcessElement), there was some code that effectively was doing:

        // Allocate some data required
        DataStructure* data = new DataStructure(num);

        // Call into a subroutine that passed around and manipulated this data highly
        CallSubroutine(data);

        // Read and use some values from here
        double values = data->Foo;

        // Cleanup
        delete data;

        // ...
        return bar;

    Normally, if DataStructure was a simple data type, I could just allocate it on the stack. However, its constructor internally allocated its own memory using new, so this wouldn't eliminate the problem. In this case, however, I could change the call signatures to allow the pointer to the data structure to be passed into ProcessElement and through the call graph, allowing the inner routine to reuse the same "data" memory instead of allocating. At the highest level, my code effectively changed to something like:

        DataStructure* data = new DataStructure(numberToProcess);
        for (i = 0; i < numberToProcess; ++i)
        {
            // Do some work
            ProcessElement(element[i], data);
        }
        delete data;

    Granted, this dramatically reduced the maintainability of the code, so it wasn't something I wanted to do unless there was a significant benefit. In this case, after profiling the new version, I found that it increased the overall performance dramatically - my main test case went from 35 minutes runtime down to 21 minutes. This was such a significant improvement, I felt it was worth the reduction in maintainability.

    In C and C++, it's generally a good idea (for performance) to:

    - Reduce the number of memory allocations as much as possible,
    - Use fewer, larger memory allocations instead of many smaller ones, and
    - Allocate as high up the call stack as possible, and reuse memory

    I've seen many people try to make similar optimizations in C# code. For good or bad, this is typically not a good idea. The garbage collector in .NET completely changes the rules here. In C#, reallocating memory in a loop is not always a bad idea. In this scenario, for example, I may have been much better off leaving the original code alone. The reason for this is the garbage collector. The GC in .NET is incredibly effective, and leaving the allocation deep inside the call stack has some huge advantages. First and foremost, it tends to make the code more maintainable - passing around object references tends to couple the methods together more than necessary, and overall increases the complexity of the code. This is something that should be avoided unless there is a significant reason. Second, (unlike C and C++) memory allocation of a single object in C# is normally cheap and fast. Finally, and most critically, there is a large advantage to having short-lived objects. If you lift a variable out of the loop and reuse the memory, it's much more likely that the object will get promoted to Gen1 (or worse, Gen2). This can cause expensive compaction operations to be required, and also lead to (at least temporary) memory fragmentation as well as more costly collections later.

    As such, I've found that it's often (though not always) faster to leave memory allocations where you'd naturally place them - deep inside of the call graph, inside of the loops. This causes the objects to stay very short-lived, which in turn increases the efficiency of the garbage collector, and can dramatically improve the overall performance of the routine as a whole.

    In C#, I tend to:

    - Keep variable declarations in the tightest scope possible
    - Declare and allocate objects at usage

    While this tends to serve some of the same goals (reducing unnecessary allocations, etc.), the intent here is a bit different - it's about keeping the objects rooted for as little time as possible in order to (attempt to) keep them completely in Gen0, or worst case, Gen1. It also has the huge advantage of keeping the code very maintainable - objects are used and "released" as soon as possible, which keeps the code very clean. It does, however, often have the side effect of causing more allocations to occur, while keeping the objects rooted for a much shorter time.

    Now - nowhere here am I suggesting that these rules are hard, fast rules that are always true. That being said, my time spent optimizing over the years encourages me to naturally write code that follows the above guidelines, then profile and adjust as necessary. In my current project, however, I ran across one of those nasty little pitfalls that's something to keep in mind - interop changes the rules.

    In this case, I was dealing with an API that, internally, used some COM objects. These COM objects were leading to native allocations (most likely C++) occurring in a loop deep in my call graph. Even though I was writing nice, clean managed code, the normal managed code rules for performance no longer applied. After profiling to find the bottleneck in my code, I realized that my inner loop, an innocuous-looking block of C# code, was effectively causing a set of native memory allocations in every iteration. This required going back to a "native programming" mindset for optimization. Lifting these variables and reusing them took a 1:10 routine down to 0:20 - again, a very worthwhile improvement.

    Overall, the lessons here are:

    - Always profile if you suspect a performance problem - don't assume any rule is correct, or any code is efficient just because it looks like it should be
    - Remember to check memory allocations when profiling, not just CPU cycles
    - Interop scenarios often cause managed code to act very differently than "normal" managed code
    - Native code can be hidden very cleverly inside of managed wrappers
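    To make the two styles concrete, here is a minimal C# sketch of the guidelines above (illustrative names, not the code from the project discussed):

        static class AllocationStyles
        {
            // Managed-code style: allocate at usage, in the tightest scope.
            // Each buffer dies young, so it tends to stay in Gen0 and be
            // collected cheaply.
            public static double SumManaged(double[][] rows)
            {
                double total = 0.0;
                foreach (var row in rows)
                {
                    var squares = new double[row.Length]; // fresh per iteration
                    for (int i = 0; i < row.Length; i++)
                        squares[i] = row[i] * row[i];
                    for (int i = 0; i < row.Length; i++)
                        total += squares[i];
                }
                return total;
            }

            // Native/interop style: lift the buffer out of the loop and reuse it.
            // Worth the extra coupling mainly when each allocation is secretly a
            // native (e.g. COM) allocation, as in the scenario described above.
            public static double SumReused(double[][] rows, int maxRowLength)
            {
                var squares = new double[maxRowLength]; // allocated once
                double total = 0.0;
                foreach (var row in rows)
                {
                    for (int i = 0; i < row.Length; i++)
                        squares[i] = row[i] * row[i];
                    for (int i = 0; i < row.Length; i++)
                        total += squares[i];
                }
                return total;
            }
        }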

    Read the article

  • People, Process & Engagement: WebCenter Partner Keste

    - by Michael Snow
    v\:* {behavior:url(#default#VML);} o\:* {behavior:url(#default#VML);} w\:* {behavior:url(#default#VML);} .shape {behavior:url(#default#VML);} Within the WebCenter group here at Oracle, discussions about people, process and engagement cross over many vertical industries and products. Amidst our growing partner ecosystem, the community provides us insight into great customer use cases every day. Such is the case with our partner, Keste, who provides us a guest post on our blog today with an overview of their innovative solution for a customer in the transportation industry. Keste is an Oracle software solutions and development company headquartered in Dallas, Texas. As a Platinum member of the Oracle® PartnerNetwork, Keste designs, develops and deploys custom solutions that automate complex business processes. Seamless Customer Self-Service Experience in the Trucking Industry with Oracle WebCenter Portal  Keste, Oracle Platinum Partner Customer Overview Omnitracs, Inc., a Qualcomm company provides mobility solutions for trucking fleets to companies in the transportation industry. Omnitracs’ mobility services include basic communications such as text as well as advanced monitoring services such as GPS tracking, temperature tracking of perishable goods, load tracking and weighting distribution, and many others. Customer Business Needs Already the leading provider of mobility solutions for large trucking fleets, they chose to target smaller trucking fleets as new customers. However their existing high-touch customer support method would not be a cost effective or scalable method to manage and service these smaller customers. Omnitracs needed to provide several self-service features to make customer support more scalable while keeping customer satisfaction levels high and the costs manageable. The solution also had to be very intuitive and easy to use. The systems that Omnitracs sells to these trucking customers require professional installation and smaller customers need to track and schedule the installation. Information captured in Oracle eBusiness Suite needed to be readily available for new customers to track these purchases and delivery details. Omnitracs wanted a high impact User Interface to significantly improve customer experience with the ability to integrate with EBS, provisioning systems as well as CRM systems that were already implemented. Omnitracs also wanted to build an architecture platform that could potentially be extended to other Portals. Omnitracs’ stated goal was to deliver an “eBay-like” or “Amazon-like” experience for all of their customers so that they could reach a much broader market beyond their large company customer base. Solution Overview In order to manage the increased complexity, the growing support needs of global customers and improve overall product time-to-market in a cost-effective manner, IT began to deliver a self-service model. This self service model not only transformed numerous business processes but is also allowing the business to keep up with the growing demands of the (internal and external) customers. This solution was a customer service Portal that provided self service capabilities for large and small customers alike for Activation of mobility products, managing add-on applications for the devices (much like the Apple App Store), transferring services when trucks are sold to other companies as well as deactivation all without the involvement of a call service agent or sending multiple emails to different Omnitracs contacts. 
This is a conceptual view of the Customer Portal showing the details of the components that make up the solution. The portal application for transactions was entirely built using ADF 11g R2. Omnitracs’ business had a pressing requirement to have a portal available 24/7 for its customers. Since there were interactions with EBS in the back end, downtime on EBS would negate this availability. Omnitracs devised a decoupling strategy at the database side for the EBS data. The decoupling of the database was done using Oracle Data Guard and completely insulated the solution from any eBusiness Suite downtime. The customer has no knowledge of whether EBS is running or not. Here are two sample screenshots of the portal application built in Oracle ADF. Customer Benefits The Customer Portal not only provided the scalability to grow the business but also provided seamless integration with other disparate applications. Some of the key benefits are: Improved Customer Experience: With a modern look and feel and a portal that has the aspects of an app store, the customer experience was significantly improved. Page response times went from several seconds to sub-second for all of the pages. Enabled New Product Launches: After successfully dominating the large fleet market, Omnitracs now has a scalable solution to sell and manage smaller fleet customers, giving them a huge advantage over their nearest competitors. Dozens of new customers have been acquired via this portal through an onboarding process that now takes minutes. Seamless Integrations Improve Customer Support: ADF 11g R2 allowed Omnitracs to bring a diverse list of applications into one integrated solution. This provided a seamless experience for customers, routing them from a marketing-focused application to a customer-oriented portal. Internally, it also allowed sales representatives to have an integrated flow for taking a prospect through the various steps to onboard them as a customer. Key integrations included: Unity Core; Salesforce.com; Merchant e-Solution for credit card; custom Omnitracs applications like CUPS and AUTO; security utilizing OID and OVD; back-end integration with EBS (Data Guard) and the iQ database. Business Impact Significant business impacts were realized through the launch of the customer portal. It not only allows the business to push through in underserved segments, but also reduces the time it needs to spend on customer support, allowing the business to focus more on sales and identifying the market for new products. Some of the immediate benefits are: The entire onboarding process is now completely automated and completes in minutes. This represents an 85% productivity improvement over their previous processes. And it was 160 times faster! With the success of this self-service solution, the business is now targeting about 3X customer growth in the next five years. This represents a tripling of their overall customer base and significant downstream revenue for the ongoing services. 90%+ improvement of the customer onboarding and management process by utilizing single sign-on integration using an OID/OAM solution, performance improvements and new self-service functionality. Unified login for all customers, partners and internal users enables login to a common portal and seamless access to all other integrated applications targeted at the respective audience. Significantly improved customer experience with a better look and feel and more user-experience-focused portal screens. 
Helped sales of the new product by providing an easy way of ordering and activating the product. Data Guard helped increase availability of the portal to 99%+ and make it independent of EBS downtime. This gave customers the feel of high availability of the portal application. Some of the anticipated longer-term benefits are: A platform that can be leveraged to launch any new product introduction and enable all product teams to reach new customers and new markets. Easy integration with content management to allow business owners more control of the product catalog. Overall reduced TCO with standardization on the Oracle platform. Managed IT support cost savings through optimization of the technology skills needed to support and modify this solution.

    Read the article

  • External File Upload Optimizations for Windows Azure

    - by rgillen
[Cross posted from here: http://rob.gillenfamily.net/post/External-File-Upload-Optimizations-for-Windows-Azure.aspx] I’m wrapping up a bit of the work we’ve been doing on data movement optimizations for cloud computing, and the latest set of data yielded some interesting points I thought I’d share. The work done here is not really rocket science but may, in some ways, be slightly counter-intuitive and therefore seemed worthy of posting. Summary: for those who don’t like to read detailed posts or don’t have time, the synopsis is that if you are uploading data to Azure, block your data (even down to 1MB) and upload in parallel. Set your block size based on your source file size, but if you must choose a fixed value, use 1MB. Following the above will result in significant performance gains… upwards of 10x-24x and a reduction in overall file transfer time of upwards of 90% (e.g., uploading a 1GB file averaged 46.37 minutes prior to optimizations and averaged 1.86 minutes afterwards). Detail: For those of you who want more detail, or think that the claims at the end of the preceding paragraph are over-reaching, what follows is information and code supporting these claims. As the title would indicate, these tests were run from our research facility pointing to the Azure cloud (specifically US North Central, as it is physically closest to us) and do not represent intra-cloud results (we have performed intra-cloud tests and the overall results are similar in notion, but the data rates are significantly different, as are the tipping points for the various block sizes; this will be detailed separately). We started by building a very simple console application that would loop through a directory and upload each file to Azure storage. This application used the shipping storage client library from the 1.1 version of the Azure tools. The only real variation from the client library is that we added code to collect and record the duration (in ms) and size (in bytes) for each file transferred. The code is available here. We then created a directory that had a collection of files of the following sizes: 2KB, 32KB, 64KB, 128KB, 512KB, 1MB, 5MB, 10MB, 25MB, 50MB, 100MB, 250MB, 500MB, 750MB, and 1GB (50 files for each size listed). These files contained randomly-generated binary data and do not benefit from compression (a separate discussion topic). Our file generation tool is available here. The baseline was established by running the application described above against the directory containing all of the data files. This application uploads the files in a random order so as to avoid transferring all of the files of a given size sequentially, thereby spreading the effects of periodic Internet delays across the collection of results. We then ran some scripts to split the resulting data and generate some reports. The raw data collected for our non-optimized tests is available via the links in the Related Resources section at the bottom of this post. For each file size, we calculated the average upload time (and standard deviation) and the average transfer rate (and standard deviation). As you likely are aware, transferring data across the Internet is susceptible to many transient delays which can cause anomalies in the resulting data. It is for this reason that we randomized the order of source file processing as well as executed the tests 50x for each file size. We expect that these steps will yield a sufficiently balanced set of results. 
Once the baseline was collected and analyzed, we updated the test harness application with some methods to split the source file into user-defined block sizes and then to upload those blocks in parallel (using the PutBlock() method of Azure storage). The parallelization was handled by simply relying on the Parallel Extensions to .NET to provide a Parallel.For loop (see linked source for specific implementation details in Program.cs, line 173 and following… less than 100 lines total). Once all of the blocks were uploaded, we called PutBlockList() to assemble/commit the file in Azure storage. For each block transferred, the MD5 was calculated and sent, ensuring that the bits that arrived matched what was intended. The timer for the blocked/parallelized transfer method wraps the entire process (source file splitting, block transfer, MD5 validation, file committal). A diagram of the process is as follows: We then tested the effects of blocking and parallelizing the transfers by running the updated application against the same source set, and did a parameter sweep on the block size including 256KB, 512KB, 1MB, 2MB, and 4MB (our assumption was that anything lower than 256KB wasn’t worth the trouble, and 4MB is the maximum size of a block supported by Azure). The raw data for the parallel tests is available via the links in the Related Resources section at the bottom of this post. This data was processed and then compared against the single-threaded / non-optimized transfer numbers, and the results were encouraging. The Excel version of the results is available here. Two semi-obvious points need to be made prior to reviewing the data. The first is that if the block size is larger than the source file size, you will end up with a “negative optimization” due to the overhead of attempting to block and parallelize. The second is that as the files get smaller, the clock-time cost of blocking and parallelizing (overhead) is more apparent and can tend towards negative optimizations. For this reason (and as supported by the raw data provided in the linked worksheet), the charts and discussion below ignore source file sizes less than 1MB. (click chart for full size image) The chart above illustrates some interesting points about the results: When the block size is smaller than the source file, performance increases, but as the block size approaches and then passes the source file size, you see decreasing benefit to the point of negative gains (see the values for the 1MB file size). For some of the moderately-sized source files, small blocks (256KB) are best. As the size of the source file gets larger (see values for 50MB and up), the smallest block size is not the most efficient (presumably due, at least in part, to the increased number of blocks, increased number of individual transfer requests, and reassembly/committal costs). Once you pass the 250MB source file size, the difference in rate for 1MB to 4MB blocks is more-or-less constant. The 1MB block size gives the best average improvement (~16x), but the optimal approach would be to vary the block size based on the size of the source file. (click chart for full size image) The above is another view of the same data as the prior chart, just with the axes changed (the x-axis represents file size and the plotted data shows improvement by block size). It again highlights the fact that the 1MB block size is probably the best overall size, but also highlights the benefits of some of the other block sizes at different source file sizes. 
This last chart shows the change in total duration of the file uploads based on different block sizes for the source file sizes. Nothing really new here, other than this view of the data highlights the negative effects of poorly choosing a block size for smaller files.
Summary: What we have found so far is that blocking your file uploads and uploading them in parallel results in significant performance improvements. Further, utilizing extension methods and the Task Parallel Library (.NET 4.0) makes short work of altering the shipping client library to provide this functionality while minimizing the amount of change to existing applications that might be using the client library for other interactions.
Related Resources:
Source code for upload test application
Source code for random file generator
OData feed of raw data from non-optimized transfer tests (Experiment Metadata; Experiment Datasets: 2KB Uploads, 32KB Uploads, 64KB Uploads, 128KB Uploads, 256KB Uploads, 512KB Uploads, 1MB Uploads, 5MB Uploads, 10MB Uploads, 25MB Uploads, 50MB Uploads, 100MB Uploads, 250MB Uploads, 500MB Uploads, 750MB Uploads, 1GB Uploads; Raw Data)
OData feeds of raw data from blocked/parallelized transfer tests (Experiment Metadata; Experiment Datasets; Raw Data: 256KB Blocks, 512KB Blocks, 1MB Blocks, 2MB Blocks, 4MB Blocks)
Excel worksheet showing summarizations and comparisons
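As a footnote to the discussion above, here is a minimal C# sketch of the blocked/parallelized upload pattern the post describes, written against the PutBlock()/PutBlockList() API of the v1.x storage client library mentioned earlier. The method and variable names are mine, not the original test harness (which is linked above), so treat it as an illustration rather than the measured code:

using System;
using System.IO;
using System.Linq;
using System.Security.Cryptography;
using System.Threading.Tasks;
using Microsoft.WindowsAzure.StorageClient;

static void UploadInBlocks(CloudBlockBlob blob, string path, int blockSize)
{
    long fileSize = new FileInfo(path).Length;
    int blockCount = (int)((fileSize + blockSize - 1) / blockSize);

    // Block IDs must be base64 strings of equal length within a blob.
    string[] blockIds = Enumerable.Range(0, blockCount)
        .Select(i => Convert.ToBase64String(BitConverter.GetBytes(i)))
        .ToArray();

    // Upload the blocks in parallel, one file stream per worker.
    Parallel.For(0, blockCount, i =>
    {
        using (FileStream fs = File.OpenRead(path))
        {
            fs.Seek((long)i * blockSize, SeekOrigin.Begin);
            int size = (int)Math.Min(blockSize, fileSize - (long)i * blockSize);
            byte[] buffer = new byte[size];
            int read = 0;
            while (read < size)
                read += fs.Read(buffer, read, size - read);

            // Send the MD5 so the service validates the bits that arrive.
            string md5 = Convert.ToBase64String(MD5.Create().ComputeHash(buffer));
            blob.PutBlock(blockIds[i], new MemoryStream(buffer), md5);
        }
    });

    blob.PutBlockList(blockIds); // assemble/commit the blob in Azure storage
}

A caller would obtain the CloudBlockBlob from a container reference in the usual way; per the results above, a 1MB blockSize is a sensible default whenever the source file is larger than the block.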

    Read the article

  • It was a figure of speech!

    - by Ratman21
Yesterday I posted the following as an attention getter / advertisement (as well as my feelings) in the groups I am in on the social networking site LinkedIn, and boy did I get responses.    I am fighting mad about (a figure of speech, really) not having a job! Look, just because I am over 55 and have gray hair, it does not mean my brain is dead or that I can no longer troubleshoot a router or circuit or LAN issue. Or that I cannot do “IT” work at all. And I could prove this if someone would give me a job. Come on, try me for 90 days at min. wage. I know you will end up keeping me (hopefully at normal pay) around. Is anyone hearing me… come on, take up the challenge!     These were the responses I got.   I hear you. We just need to retrain and get our skills up to speed is all. That is what I am doing. I have not given up. Just got to stay on top of the game. Experience is on our side if we have the credentials, and if we are reasonable about our salaries this should not be an issue.   Already on it: I am going back to school and have got three certifications (CompTIA A+, Security+ and Network+). I am now studying for my Cisco CCNA certification. As to my salary, I am willing to work at a very reasonable rate.   You need to re-brand yourself like a product, market and sell yourself. You need to smarten up, look and feel a million dollars, re-energize yourself, regain your confidence. Either start your own business, or re-write your CV so it stands out from the rest (get the template off the internet). Contact every recruitment agent in your town, state, country and overseas, and on the web. Apply to every job you think you could do; you may not get it, but you will make a contact for your network, which may lead to a job at the end of the tunnel. Get in touch with everyone you know from past jobs. Do charity work. I maintain the IT network, stage electrical and the telecom equipment in my church.   Again, already on it. I have emailed the world, it seems, with my resume and cover letters. So far, I have rewritten my resume and cover letters, or had them rewritten, over seven times. Re-energize? I never lost my energy level or my self-confidence in my work (now if I could get some HR personnel to see the same). I also volunteer at my church; I created and maintain the church web site.   I share your frustration. Sucks being over 50 and looking for work. Please don't sell yourself short at min wage because the employer will think that’s your worth. Keep trying!!   I never stop trying, and min wage is only for 90 days, if someone takes up the challenge. Some posts asked if I am keeping up with technology.   Do you keep up with the latest technology and can you speak the language fluidly?   Yep to that, and as to speaking it, also a yep! I am a geek, you know. I heard from others over the 50-year mark, and younger too.   I'm with you! I keep getting told that I don't have enough experience because I just recently completed a Masters-level course in Microsoft SQL Server, which gave me a project-intensive equivalent of between 2 and 3 years of experience. On top of that training, I have 19 years as an applications programmer and database administrator. I can normalize rings around experienced DBAs and churn out effective code with the best of them. But my 19 years is worthless as far as most recruiters and HR people are concerned, because it is not the specific experience for which they're looking. HR AND RECRUITERS TAKE NOTE: Experience, whatever the language, translates across platforms and technology! 
By the way, I'm also over 55 and still have "got it"!   I never lost it, and I can also work rings around younger techs.   I'm 52 and female and seem to be having the same issues. I have over 10 years' experience in tech support (with a BS in CIS) and can't get hired either.   Ow, I only have an AS in computer science, along with my certifications.   Keep the faith; I have been unemployed since August of 2008. I agree with you... I am willing to return to the beginning of my retail career and work myself back through the ranks, if someone will look past the grey and realize the knowledge I would bring to the table.   I also would like someone to look past the gray.   Interesting approach, volunteering to work for minimum wage for 90 days. I'm in the same situation as you, being 55 & balding w/white hair, so I know where you're coming from. I've been out of work now for a year. I'm in Michigan, where the unemployment rate is estimated to be 15% (the worst in the nation) & even though I've got 30+ years of IT experience ranging from mainframe to PC desktop support, it's difficult to even get a face-to-face interview. I had one prospective employer tell me flat out that I "didn't have the energy required for this position". Mostly I never get any feedback. All I can say is good luck & try to remain optimistic.   He said WHAT! Yes, remaining optimistic is key, along with faith in God. Then there was this (for lack of a better word) jerk.   Give it up already. You were too old to work in high tech 10 years ago. Scratch that, 20 years ago! Try selling hot dogs in front of Fry's Electronics. At least you would get a chance to eat lunch with your previous colleagues....   You know, the funny thing about this person is that I checked out his profile. He is older than I am.

    Read the article

  • CodePlex Daily Summary for Monday, June 02, 2014

CodePlex Daily Summary for Monday, June 02, 2014
Popular Releases
Portable Class Library for SQLite: Portable Class Library for SQLite - 3.8.4.4: This pull request from mattleibow addresses an issue with custom function creation (define functions in C# code and invoke them from SQLite as if they were regular SQL functions). Impact: Xamarin iOS
Tweetinvi a friendly Twitter C# API: Tweetinvi 0.9.3.x: Timelines - Added all the parameters available from the Timeline Endpoints in Tweetinvi. This is available for HomeTimeline, UserTimeline, MentionsTimeline. // Simple query var tweets = Timeline.GetHomeTimeline(); // Create a parameter for queries with specific parameters var timelineParameter = Timeline.GenerateHomeTimelineRequestParameter(); timelineParameter.ExcludeReplies = true; timelineParameter.TrimUser = true; var tweets = Timeline.GetHomeTimeline(timelineParameter); Tweets - Add mis...
Sandcastle Help File Builder: Help File Builder and Tools v2014.5.31.0: General Information. IMPORTANT: On some systems, the content of the ZIP file is blocked and the installer may fail to run. Before extracting it, right click on the ZIP file, select Properties, and click on the Unblock button if it is present in the lower right corner of the General tab in the properties dialog. This release completes removal of the branding transformations and implements the new VS2013 presentation style that utilizes the new lightweight website format. Several breaking cha...
Tooltip Web Preview: ToolTip Web Preview: Version 1.0
Database Helper: Release 1.0.0.0: First Release of Database Helper
CoMaSy: CoMaSy1.0.2: Contact Management System
Image View Slider: Image View Slider: This is a .NET component. We created this using VB.NET. Here you can use an Image Viewer with several properties in your application form. We wish somebody to improve it freely. Try this out! Authors: Steven Renaldo Antony, Yustinus Arjuna Purnama Putra, Andre Wijaya P, Martin Lidau. PBK GENAP 2014 - TI UKDW
Aspose for Apache POI: Missing Features of Apache POI WP - v 1.1: Release contains the missing features in the Apache POI WP SDK in comparison with Aspose.Words for dealing with Microsoft Word. What's new? The following examples: Insert Picture in Word Document; Insert Comments; Set Page Borders; Mail Merge from XML Data Source; Moving the Cursor. Feedback and suggestions: many more examples are yet to come here, so keep visiting us; raise your queries and suggest more examples via Aspose Forums or via this social coding site.
SEToolbox: 01.032.014 Release 1: Added fix when loading game Textures for icons causing 'Unable to read beyond the end of the stream'. Added new Resource Report, that displays all in-game resources in a concise report. Added in temp directory cleaner, to keep excess files from building up. Fixed use of colors on the windows, to work better with desktop schemes. Adding base support for multilingual resources. This will allow loading of the Space Engineers resources to show localized names, and display localized date a...
ClosedXML - The easy way to OpenXML: ClosedXML 0.71.2: More memory and performance improvements. Fixed an issue with pivot table field order.
Vi-AIO SearchBar: Vi – AIO Search Bar: Version 1.0
Top Verses ( Ayat Emas ): Binary Top Verses: This one is the bin folder of the component. The .dll component is inside.
Traditional Calendar Component: Traditional Calender Converter: Duta Wacana Christian University. This file contains the Traditional Calendar Component and a demo application that uses it. This component was made with .NET Framework 4 and the programming language is C#.
SQLSetupHelper: 1.0.0.0: First stable version of SQL Setup
Composite Iconote: Composite Iconote: This composite was made with Microsoft Visual Studio 2013. Requirement: to develop this composite or use this component in your application, your computer must have .NET Framework 4.5 or newer.
Magick.NET: Magick.NET 6.8.9.101: Magick.NET linked with ImageMagick 6.8.9.1. Breaking changes: Int/short Set methods of WritablePixelCollection are now unsigned; the Q16 build no longer uses HDRI, switch to the new Q16-HDRI build if you need HDRI.
fnr.exe - Find And Replace Tool: 1.7: Bug fixes. Refactored logic for encoding text values to the command line to handle common edge cases where the find/replace operation works in the GUI but not in the command line. Fix for a bug where the selection in the Encoding drop-down was different when generating the command line in some cases (reported in: https://findandreplace.codeplex.com/workitem/34). Fix for "Backslash inserted before dot in replacement text" reported here: https://findandreplace.codeplex.com/discussions/541024. Fix for finding and replacing...
VG-Ripper & PG-Ripper: VG-Ripper 2.9.59: Changes. NEW: Added support for 'GokoImage.com' links; NEW: Added support for 'ViperII.com' links; NEW: Added support for 'PixxxView.com' links; NEW: Added support for 'ImgRex.com' links; NEW: Added support for 'PixLiv.com' links; NEW: Added support for 'imgsee.me' links; NEW: Added support for 'ImgS.it' links
Toolbox for Dynamics CRM 2011/2013: XrmToolBox (v1.2014.5.28): XrmToolBox updates (v1.2014.5.28): fix connecting to a connection with custom authentication without a saved password. New tool! Solution Components Mover (v1.2014.5.22): transfer solution components from one solution to another. Import/Export NN relationships (v1.2014.3.7): allows you to import and export many-to-many relationships. Tool updates: Attribute Bulk Updater (v1.2014.5.28), Audit Center (v1.2014.5.28), View Layout Replicator (v1.2014.5.28), Scrip...
Microsoft Ajax Minifier: Microsoft Ajax Minifier 5.10: Fix for Issue #20875 - echo switch doesn't work for CSS. CSS should honor the SASS source-file comments. JS should allow multi-line comment directives.
New Projects
[ISEN] Rendu de projet Naughty3Dogs - Pong3D: Pong3D is a game that revisits the classic Pong concept by porting it to a 3D environment using the C# language and the Unity3D engine.
Bootstrap for MVC: Bootstrap for MVC.
F. A. Q. - Najczesciej zadawane pytania: FAQ
ForuMvc: Technifutur short project.
homework456: no.
iStoody: Studies organizer solution, available through an app for Windows and Windows Phone.
liaoliao: ???????????
Price Tracker: Allows a user to track prices based on parsed emails.
RoslynEval: Roslyn
Rx Hub: Rx Hub provides server-side computation initiated by subscriber requests.
Sharepoint Online AppCache Reset: We are an IT resource company providing virtual IT services and custom and open-source programs for everyday needs.
UnitConversionLib: Smart Unit Conversion Library in C#: Conversion of units, arithmetic operations and parsing of quantities with their units at run time. Smart unit converter and conversion lib for physical quantities.

    Read the article

  • Simplifying Human Capital Management with Mobile Applications

    - by HCM-Oracle
By Aaron Green If you're starting to think 'mobility' is a recurring theme in your reading, you'd be right. For those who haven't started to build organisational capabilities to leverage it, it's fair to say you're late to the party. The good news: better late than never. Research firm eMarketer says the worldwide smartphone audience will total 1.75 billion this year, while communications technology and services provider Ericsson suggests smartphones will triple to 5.6 billion globally by 2019. It should be no surprise: smartphone adoption is reaching the farthest corners of the globe, and the subsequent impact of enterprise applications enabled by these devices is driving business performance improvement and will continue to do so. Companies using advanced workforce analytics can add significantly to the bottom line, while impacting customer satisfaction, quality and productivity. It's a statement that makes most business leaders sit forward in their chairs. Achieving these three standards is like sipping The Golden Elixir for the business world. No-one would argue their importance. So what are 'advanced workforce analytics?' Simply, they're unprecedented access to workforce trends and performance markers. Many are made possible by a mobile world and the enterprise applications that come with it on smart devices. Some refer to it as 'the consumerisation of IT'. As this phenomenon has matured and become more widely appreciated, it has impacted the spectrum of functional units within an enterprise differently, but powerfully. Whether it's sales, HR, marketing, IT, or operations, all have benefited from a more mobile approach. It has been the catalyst for improvement in, and management of, the employee experience, the net result of which is happier customers. The obvious benefits, and the lesser-realised impact: Most people understand that mobility allows for greater efficiency and productivity, collaboration and flexibility, but how that translates into business outcomes within the various functional groups is lesser known. In actuality, mobility has helped galvanise partnerships between cross-functional groups within the enterprise. Where in some quarters it was once feared mobility could fragment a workforce, its rallying cry of support is coming from what you might describe as an unlikely source - HR. As the bedrock of an enterprise, it is conceivable HR might contemplate the possible negative impact of a mobile workforce that no longer sits in an office, at the same desks every day. After all, who would know what they were doing or saying? How would they collaborate? It's reasonable to see why HR might have a legitimate claim to try and retain as much 'perceived control' as possible. The reality, however, is that mobility has emancipated human capital and its management. Mobility and enterprise applications are expediting decision making. Google calls it the Zero Moment of Truth, or ZMOT. It enables smoother operation and can contribute to faster growth. From a collaborative perspective, with the growing use of enterprise social media, which in many cases is being driven by HR, workforce planning and the tangible impact of change are much easier to map. This in turn provides a platform from which individuals and teams can thrive. With more agility and the ability to anticipate, staff satisfaction and retention are higher, and real-time feedback is constant. 
The management team can save time, energy and costs with more accurate data, which is then intelligently applied across the workforce to truly engage with staff, customers and partners. From a human capital management (HCM) perspective, mobility can help you close the loop on true talent management. It can enhance what managers can offer and what employees can provide in return. It can create nested relationships and powerful partnerships. IT and HR - partners and stewards of mobility: One effect of enterprise mobility is an evolution in the nature of the relationship between HR and IT, from one of service provision to partnership. The reason for the dynamic shift is largely due to the 'bring your own device' (BYOD) movement, which is transitioning to a 'bring your own application' (BYOA) scenario. As enterprise technology has in some ways reverse-engineered its solutions to help manage this situation, the partnership between IT (the functional owner) and HR (the strategic enabler) is deeply entrenched. And it has to be. The CIO and the HR leader are faced with compliance and regulatory issues and concerns around information security and personal privacy on a daily basis, complicated by global reach and varied domestic legislation. There are tens of thousands of new mobile apps entering the market each month and, unlike many consumer applications which get downloaded but are often never opened again after initial perusal, enterprise applications are being relied upon by functional groups, not least by HR to enhance people management. It requires a systematic approach across all applications in use within the enterprise in order to ensure they're used to best effect. No turning back, and no desire to: With real-time analytics on performance and the ability for immediate feedback, there is no turning back for managers. In my experience with Oracle, our customers' operational efficiency is at record levels. It's clear that, as a result of the combination of individual KPIs and organisational goals, CIOs have been able to give HR leaders the ability to build predictive models that feed into an enterprise organisation's evolving strategy. It also helps them ensure regulatory compliance much more easily. Compliance, once an arduous task, is simpler with mobile-enabled automation and quality data. Their world has changed for the better. For the CIO, mobility also assists in optimising performance. While it doesn't come without challenges, mobile-enabled applications and the native experience users have with them mean employees don't need high-level technical training. This reduces the training and engagement required from the IT team, so they can focus on other things that deliver value to the bottom line, all the while lowering the cost of assets and related maintenance work by simplifying processes. Rewards of a mobile enterprise outweigh risks: With mobile tools allowing us to increasingly integrate our personal and professional lives, terms like "office hours" are becoming irrelevant, so work/life balance is a cultural must. Enterprises are expected to offer tools that enable workers to access information from anywhere, at any time, from any device. Employees want simplicity and convenience, but it doesn't stop at private enterprise. This is a societal shift. Governments, which traditionally have been known to be slower to adopt newer technology, are also offering support for local businesses to go mobile. 
Several state government websites have advice on how to create mobile apps and more. And as recently as last week, the Victorian Minister for Technology, Gordon Rich-Phillips, unveiled his state government's ICT roadmap for the next two years, which details an increased use of the public cloud, as well as mobile communications, and improved access to online data-sets. Tech giants are investing significantly in solutions designed to simplify mobile deployment and enablement. The mobility trend is creating a wave of change in the industry and driving transformation in the enterprise. If you're not on that wave, the business risk continues to rise as your competitiveness drops. Aaron is the Vice President of HCM Strategy at Oracle Corporation, where he is responsible for researching and identifying emerging trends in the practice of Human Resources and works to deliver industry-leading technology solutions. Other responsibilities include ownership of Oracle's innovative HCM solutions across JAPAC and enabling organisations to transform and modernise their workforce tools. Follow him on Twitter @aaronjgreen.

    Read the article

  • Are you reporting Visual Studio 2012 issues to Microsoft correctly?

    - by Tarun Arora
Issues you may run into while using Visual Studio need to be reported to the Microsoft Product Team via the Microsoft Connect site. The Microsoft team then tries to reproduce the issue using the details provided by you. If the information you provide isn’t sufficient to reproduce the issue, the team tries to contact you for specifics; this not only increases the cycle time to resolution, but the lack of communication can also result in issues not being resolved. So, when I report an issue, one part of me tells me to include as much detail about the issue as I can, bundling screen shots, repro steps, system information, Visual Studio version information… the other half tells me this is so time consuming, leave it for now and come back to fill in all these details later. Reporting a bug but not including the supporting information is an invitation to excuses like …     Microsoft has absolutely changed this experience for VS 2012. The Microsoft Visual Studio Feedback tool is designed to simplify the process of providing feedback and reporting issues to Microsoft that you may encounter while using Microsoft Visual Studio 2012. Note – The Microsoft Visual Studio 2012 Feedback client currently only works for VS 2012 and not any other version of Visual Studio. Setting up the Microsoft Visual Studio 2012 Feedback client: Open Visual Studio and, from the Tools menu, select Extensions and Updates. In the Extensions and Updates window, click Online in the left pane, search using the text ‘feedback’, then download and install the Microsoft Visual Studio 2012 Feedback Tool by following the instructions from the wizard. Note – Restarting Visual Studio after the install is a must! How to report a bug for Visual Studio 2012? Click on the Help menu and choose Report a Bug. You should see a Microsoft Visual Studio 2012 Feedback Tool icon come up in the system tray icon area. You’ll need to accept the privacy statement. You have the option of reporting the feedback as private or public. Microsoft works with several partners, MVPs and vendors who get access to early bits of Microsoft products for evaluation. This is where it becomes essential to report the feedback privately. I would choose the Public option otherwise. After all, if it’s out there in public, others can discover and add to it easily. You now have the option to report a new issue or add to an existing issue. Should you choose to add to an existing issue, you should have the feedback ID of the issue available. This can be obtained from the Microsoft Connect site. For now I am going to focus on reporting new feedback privately. Filling out the feedback details: You will notice that VsInfo.xml and DxDiagOutput.txt are automatically attached as you enter this screen (more on that later).  Feedback Type: Choose the feedback type from Performance, Hang, Crash, or Other. Note – The Record button will only be enabled once you have chosen the feedback type; bug-repro recording is not available on Windows Server 2008.     Effective Title and Description: Enter a title that helps us differentiate the bug when it appears in a list, so that we can group it with any related bugs, assign it to a developer more effectively, and resolve it more quickly. Example: Imagine that you are submitting a bug because you tried to install Service Pack 1 and got a message that Visual Studio is not installed even though it is. Helpful:  Installed Visual Studio version not detected during Service Pack 1 setup. Not helpful:  Service Pack 1 problem. 
Tip: Write the problem description first, and then distil it to create a title. Example Description: Helpful: When I run Service Pack 1 Setup, I get the message "No Visual Studio version is detected" even though I have Visual Studio 2010 Ultimate and Visual C++ 2010 Express installed on my machine. Even though I uninstalled both editions, and then first reinstalled Ultimate and then Express, I still get the message. Record: Becoming a first-class citizen: Often a repro recording is invaluable for describing and deciphering the issue. Please use this feature to send actionable feedback. The record-repro feature works differently depending on the feedback type you selected; details for each recording option follow. You can start recording simply by selecting a feedback type and clicking on the "Record" button. When "Performance" is the bug type: When the Microsoft Visual Studio trace recorder starts, perform the actions that show the performance problem you want to report and then click on the "Stop Recording" button as soon as you experience the performance problem. Because the tool optimizes trace collection, you can run it for as long as it takes to show the problem, up to two hours. Note that you need to stop recording as soon as the performance issue occurs, because the tool captures only the last couple of minutes of your actions to optimize the trace collection. After you stop the recording, the tool takes up to two minutes to assemble the data and attach an ETLTrace.zip file to your bug report. The data includes information about Windows events and the Visual Studio code path. Note that running the Microsoft Visual Studio trace recorder requires elevated user privileges. When "Crash" is the bug type: When the dialog box appears, select the running Visual Studio instance for which you want to show the steps that cause a crash. When the crash occurs, click on the "Stop Record" button. After you do this, two files are attached to your bug report - an AutomaticCrashDump.zip file that contains information about the crash and a ReproSteps.zip file that shows the repro steps. Repro steps are captured by the Windows Problem Steps Recorder. Note that you can pause the recording and resume later, or, for a specific step, add additional comments. When "Hang" is the bug type: The process for recording the steps that cause a hang resembles the one for crashes. The difference is that you can even collect a dump file after VS hangs; start the VSFT either from the system tray or by starting a new instance of VS, select "Hang" as the feedback type and click on the "Record" button. You will be prompted for which VS instance to collect a dump from; select the VS instance that hung. VSFT collects a dump file regarding the hang, called MiniDump.zip, and attaches it to your bug report. When "Other" is the bug type: When the Problem Steps Recorder starts, perform the actions that show the issue you want to report and then choose the "Stop" button. You can pause the recording and resume later, or, for a specific step, add additional comments. Once you’re done, ReproSteps.zip is added to your bug report. Pre-attached files: It is essential for Microsoft to know which version of the product you are currently using and what the current configuration of your system is. Note – The total size of all attachments in a bug report cannot exceed 2 GB, and every uncompressed attachment must be smaller than 512 MB. 
We recommend that you assemble all of your attachments, compress them together into a .zip file, and then attach the .zip file. Taking a screenshot: Associate a screenshot by clicking the Take Screenshot button, then choose either the entire desktop, a specific monitor (useful if you are working in a multi-monitor configuration) or the specific window in question. And finally… click Submit. If you need further help, more details can be found here. You can view your feedback online by using the following URL: https://connect.microsoft.com/VisualStudio/SearchResults.aspx?SearchQuery=<feedbackId> Happy bug logging!

    Read the article

  • Should I enable "Intel NIC DMA Channels"?

    - by javapowered
    I have HP DL360p Gen8 646902-xx1 I'm trying to optimize my config for low latency trading. Should I enable "Intel NIC DMA Channels"? Will that help/affect my system? From HP doc: Added a new ROM Based Setup Utility (RBSU) Advanced Performance Option menu that allows the user to enable Intel NIC DMA Channels (IOAT). This option is disabled by default. When enabled, certain networking devices may see an improvement in performance by utilizing Intel's DMA engine to offload network activity. Please consult documentation from the network adapter to determine if this feature is supported.

    Read the article

  • How to use BCDEdit to dual boot Windows installations?

    - by Ian Boyd
What are the bcdedit commands necessary to set up dual boot between different installations of Windows?5 Background i recently installed Windows 8 onto a separate hard drive1. Now that Windows 8 is installed, i want to dual-boot back to Windows 7. i have my two2 hard drives: So you can see that i have my two disks, with the partitions containing Windows: Windows 7: \\PhysicalDisk0 (partition 03) Windows 8: \\PhysicalDisk2 (partition 1) What i'm trying to figure out is how to use bcdedit to instruct the thing that boots Windows that there is another Windows installation out there. Running bcdedit now, it shows the current configuration:
C:\WINDOWS\system32>bcdedit
Windows Boot Manager
--------------------
identifier              {bootmgr}
device                  partition=\Device\HarddiskVolume2
description             Windows Boot Manager
locale                  en-US
inherit                 {globalsettings}
integrityservices       Enable
default                 {current}
resumeobject            {ce153eb7-3786-11e2-87c0-e740e123299f}
displayorder            {current}
toolsdisplayorder       {memdiag}
timeout                 30
Windows Boot Loader
-------------------
identifier              {current}
device                  partition=C:
path                    \WINDOWS\system32\winload.exe
description             Windows 8
locale                  en-US
inherit                 {bootloadersettings}
recoverysequence        {ce153eb9-3786-11e2-87c0-e740e123299f}
integrityservices       Enable
recoveryenabled         Yes
allowedinmemorysettings 0x15000075
osdevice                partition=C:
systemroot              \WINDOWS
resumeobject            {ce153eb7-3786-11e2-87c0-e740e123299f}
nx                      OptIn
bootmenupolicy          Standard
hypervisorlaunchtype    Auto
i cannot find any documentation on the difference between Windows Boot Manager and Windows Boot Loader. Documentation There is some documentation on Bcdedit: Technet: Command Line Reference - Bcdedit Technet: Windows Automated Installation Kit - BCDEdit Command Line Options Whitepaper - BCDEdit Commands for Boot Environment (Word Document) But they don't explain how to edit the binary boot configuration data. If i had to guess, i would think that a Windows Boot Manager instructs the BIOS what program it should run. That program would give the user a set of boot choices. That leaves Windows Boot Loader to be a particular boot choice, one that represents a particular installation of Windows. If that is the case, i would need to create a new Windows Boot Loader entry. This means i might want to use the /create parameter: /create Creates a new boot entry: bcdedit [/store filename] /create [id] /d description [/application apptype | /inherit [apptype] | /inherit DEVICE | /device] So i assume a syntax of: >bcdedit /create /d "The old Windows 7" /application osloader Where application can be one of the following types:
Apptype      Description
BOOTSECTOR   The boot sector application
OSLOADER     The Windows boot loader
RESUME       A resume application
Unfortunately, the only documentation about osloader is "The Windows boot loader". i don't see how that can differentiate between Windows 8 on one hard drive, and Windows 7 on another. The other possible parameter when /create-ing a boot loader is: >bcdedit /create /D "Windows Vista" /device "The Quick Brown Fox" Unfortunately the documentation is missing for /device: /device Optional. If id is not set to a well-known identifier, the option that is used to specify the new boot entry as an additional device options entry. Since i did not set id to a well-known identifier, i must set /device to "the option that is used to specify the new boot entry as an additional device options entry". i know all those words; they're all English. But i have no idea what it is saying; those words in that order seem nonsensical. 
So i'm somewhat stymied. i don't want to be like Dan Stolts from Microsoft: I found no content that was particularly helpful when I hosed my machine by playing with BCDEdit. This post would have been ok if there was much more detail especially on the /set command OSDevice, etc. So once I got my machine fixed, I documented the solution and the information is here.... i mean, if a Microsoft guy can't even figure out how to use BCDEdit to edit his BCD, then what chance do i have? Bonus Reading BCDEdit Command-Line Options Bcdedit Server 2008 R2 or Windows 7 System Will NOT Boot After Making Changes To Boot Manager Using BCDEdit Visual BCD Editor4 Windows 7 and Windows 8 RTM Dual Boot Setup Footnotes 1 Since the Windows 8 installer would have damaged my Windows 7 install, i decided to unplug my "main" hard drive during the install. Which is a long-winded explanation of why the Windows 8 installer didn't detect the existing Windows 7 install. Normally the installer would have automatically created the required entries for dual-boot. Not that the reason i'm asking the question is important. 2 Really there's three drives, but the third is just bulk storage. The existence of a 3rd hard drive is irrelevant to the question; i only mention it in case someone wants to know why the screenshot has 3 hard drives when i only mention two. 3 i arbitrarily started numbering partitions at "zero"; not to imply that partitions are numbered starting at zero. i only mention partitions because i don't see how any boot-loader could do its job without knowing which partition, and which folder, an installation of Windows is located in. 4 i'm asking about BCDEdit. i tried Visual BCD Editor. It seems to be a visual BCD editor. That is to say it's a GUI, but it still uses the same terminology as BCDEdit, and requires the same knowledge that BCD doesn't document. 5 For simplicity's sake we'll assume that all installations of Windows i want to dual-boot between are Windows Vista or later, making them all compatible with BCDEdit and the binary boot loader. The alternative would require delving into the intricacies of the old ntloader. Nor am i asking about dual booting to Linux, or how to boot to a Virtual Hard Drive (vhd) image. Just modern versions of Windows on existing hard drives in the same machine. Note: You can ignore everything after the word Background. It's all pointless exposition to satisfy some people's need for "research effort" before they'll consider being helpful. Some people have even been known to summarily close questions unless there is research effort. Some people have been known to close questions if there is too much research effort. Some people close questions when i put the note saying that they can ignore everything after the Background, out of spite. Some people are just grumpy.
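For reference, the approach most write-ups converge on sidesteps /create entirely: clone the existing Windows Boot Loader entry with /copy and repoint its device elements at the other partition. A hedged sketch, assuming the Windows 7 partition is visible as D: from inside Windows 8; {new-guid} stands for whatever GUID /copy prints back, and device, osdevice, path and systemroot are all standard BCD elements:

rem clone the current (Windows 8) loader entry and note the GUID it reports
bcdedit /copy {current} /d "Windows 7"

rem repoint the clone at the Windows 7 partition and system folder
bcdedit /set {new-guid} device partition=D:
bcdedit /set {new-guid} osdevice partition=D:
bcdedit /set {new-guid} path \Windows\system32\winload.exe
bcdedit /set {new-guid} systemroot \Windows

If the new entry doesn't show up in the boot menu afterwards, bcdedit /displayorder {new-guid} /addlast is the documented way to add it to the Windows Boot Manager's list.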

    Read the article

  • Disable work offline in Internet Explorer 8

    - by Roberto Liffredo
I am using Windows 7 and IE8. Every time I have GMail open in IE8 and I resume from sleep, I get the question about working offline, even if at that moment the wireless connection is available. This is particularly annoying because all browsing is stopped until I dismiss that dialog, which may be hidden below other windows. As I am not using offline browsing at all, is it possible to disable the functionality altogether, so that I no longer get the question? A bit like in Firefox, where I simply get an error message that the page is unavailable. I have googled a bit, but no luck so far. Even a direct modification of the registry would be fine for me.
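For the registry route, the value usually pointed at for this behaviour is GlobalUserOffline under the Internet Settings key. A hedged sketch; this is the commonly cited WinINet tweak rather than an officially documented switch, so test it on your machine:

rem 0 = not in the global "work offline" state
reg add "HKCU\Software\Microsoft\Windows\CurrentVersion\Internet Settings" /v GlobalUserOffline /t REG_DWORD /d 0 /f

Restarting IE (or logging off and back on) may be needed for the change to take effect.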

    Read the article

  • Windows Server 2012 IPAM feature - "Unblock IPAM Access" error/recomended action

    - by HopelessN00b
So, I'm in the process of setting up an IP Address Management server, using the built-in IPAM feature in Server 2012, and have run into a problem that I'm hoping someone else has successfully solved. Following the TechNet guide here, I've installed and configured IPAM, and have provisioned it via GPO. After verifying that the PowerShell Invoke-IpamGpoProvisioning command is successful, managing the desired servers in IPAM, running gpupdate /force on the servers and refreshing my view in IPAM, I'm still getting the less-than-useful recommended action of "Unblock IPAM Access" for all servers. (First done 3 hours ago, so it's not a give-it-time-to-propagate issue.) Can't, for the life of me, seem to figure out what's causing this, find anything useful in the logs, or find much about this on Google or in the help files, so I was wondering if anyone here had any ideas about how to fix this, or even where to start looking. I'd really like to get this working, because if not, I have to resume work on creating an Excel spreadsheet for IP address management.
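(For reference, the provisioning step described above boils down to a single cmdlet. A hedged sketch with placeholder names; the parameters shown are the standard ones for Invoke-IpamGpoProvisioning, and the GPO prefix must match what was entered in the IPAM provisioning wizard:)

Invoke-IpamGpoProvisioning -Domain corp.example.com -GpoPrefixName IPAM1 -IpamServerFqdn ipam01.corp.example.com -DelegatedGpoUser ipamadmin

A mismatch between -GpoPrefixName and the prefix configured in the wizard is one of the usual suspects for servers staying blocked, since the resulting <prefix>_DC_NPS, <prefix>_DHCP and <prefix>_DNS GPOs never apply to the managed servers.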

    Read the article

  • xfx 680i motherboard failure?

    - by Ian
At some point last night we must have had a blip in our power, as the stove clock was blinking like it would had there been... a blip in the power. When I came into my office this morning, my desktop computer was powered down and would not turn on. Cracking the case, I can see a small blue blinking light on the front right corner of the motherboard. Unplugging the power from the PSU causes the blinking to stop. Plugging in the power causes the blinking to resume. Pressing the power button does nothing. Does anyone know what this blinking blue light means? I'm mostly curious now whether it's the motherboard that has gone bad, or the power supply. I don't have any other desktop parts to use to troubleshoot these components. Any ideas? My motherboard is an XFX NFORCE 680I SLI INTEL SOCKET 775 DDR2 (Model #: MB-N680-ISH9).

    Read the article

  • Error: "failed to connect to wpa_supplicant - wpa_ctrl_open no such file or directory" using netcfg with wpa_supplicant

    - by user1576628
I'm trying to set up netcfg so that I can finish installing Arch Linux (using the instructions from the Beginners' Guide and netcfg) and I got stuck on what was meant to be a short step: open wifi-menu, select network, enter password. After multiple attempts, I decided to edit the profile manually, which yielded no improvement. Eventually I decided to use netcfg with the more familiar wpa_supplicant. My /etc/wpa_supplicant.conf file is as follows:
network={
    ssid="my_ssid"
    #psk="my_wireless_passcode"
    psk="my_wireless_passcode_hex"
}
(Replacing generic names with my actual SSID and passcode.) And my /etc/network.d/wpa_suppl file reads:
CONNECTION='wireless'
DESCRIPTION='A wpa_supplicant configuration based wireless connection'
INTERFACE='wlan0'
SECURITY='wpa-config'
WPA_CONF='/etc/wpa_supplicant.conf'
IP='dhcp'
My SSID is not hidden, wlan0 is the proper interface, and wpa_supplicant works fine on its own, but using netcfg wpa_suppl, it returns failed to connect to wpa_supplicant - wpa_ctrl_open no such file or directory about twelve times before finally telling me the authentication failed. What can I do to fix this?
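One hedged observation: netcfg's wpa-config connector attaches to wpa_supplicant through its control socket, so a wpa_supplicant.conf with no ctrl_interface line gives wpa_ctrl_open nothing to open, which matches the error above. A sketch of the adjusted /etc/wpa_supplicant.conf, assuming the conventional socket path and group (both are assumptions worth checking against your setup):

# control socket so netcfg/wpa_cli can talk to wpa_supplicant
ctrl_interface=/run/wpa_supplicant
ctrl_interface_group=wheel

network={
    ssid="my_ssid"
    psk="my_wireless_passcode_hex"
}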

    Read the article

  • Software to Monitor the Stability of Internet Connection

    - by Ngu Soon Hui
Thanks to the excellent internet connection service offered by one of the best ISPs in the world, the internet connection in my area is very, very unstable. I can connect some of the time, but MOST of the time the connection will just drop off (with the error message "unable to resolve host") and after a few minutes it will resume. If I ping the domain name directly (i.e., ping www.google.com -t at the cmd prompt), I get a "cannot ping" message. Because of the flickery nature of the connection, it's pretty hard to prove to the support staff that the internet connection is unstable. So I am thinking about using a piece of software to record the connection situation, so that I can present it to the technical staff and make sure that they have no excuse not to fix my problem. Any such software available? Edit: Of course, such software should not record my browsing habits, and it must be able to monitor and record the internet connection condition even when I am not online.
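In the absence of a dedicated tool, a small program that pings on an interval and logs timestamped results makes serviceable evidence. A minimal C# sketch; the target host, timeout and sampling interval are arbitrary choices:

using System;
using System.IO;
using System.Net.NetworkInformation;
using System.Threading;

class ConnectionLogger
{
    static void Main()
    {
        using (var ping = new Ping())
        using (var log = new StreamWriter("connection_log.txt", true))
        {
            log.AutoFlush = true;
            while (true)
            {
                string status;
                try
                {
                    // 2-second timeout; any non-Success status counts as an outage
                    PingReply reply = ping.Send("www.google.com", 2000);
                    status = reply.Status.ToString();
                }
                catch (PingException ex) // e.g. the host name fails to resolve
                {
                    status = "Error: " + ex.Message;
                }
                log.WriteLine("{0:yyyy-MM-dd HH:mm:ss}  {1}", DateTime.Now, status);
                Thread.Sleep(10000); // sample every 10 seconds
            }
        }
    }
}

A log full of timestamped TimedOut and Error lines sitting next to runs of Success is hard for a support desk to argue with.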

    Read the article

  • Windows wait command till a system time

    - by user53864
I have recently made a batch script for backup. Somewhere in the middle of the script I'll have to wait for a certain time to be reached and then resume with the next line of the script. I've scheduled the script at 4:00 PM, and after the wait command the next line should start at exactly 5:30 PM. I thought of using the SLEEP command, but it's not certain that the commands used before the wait command will finish at a given time (due to inconsistent file sizes); it is certain, though, that they will be done by 5:00 or 5:10, and next the script should execute the wait command, which waits for a certain system clock time. I'm checking if there is any command that waits or sleeps until the specified time matches the system time and resumes thereafter. Has anybody come across this situation, and how was it resolved?
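For what it's worth, a plain batch loop can do this without extra tools. A minimal sketch, assuming a 24-hour locale where %TIME% renders as zero-padded HH:MM:SS (check your own %TIME% output first, since its format is locale-dependent):

:wait_for_1730
if "%TIME:~0,5%" LSS "17:30" (
    timeout /t 60 /nobreak >nul
    goto wait_for_1730
)
rem execution resumes here at (or just after) 5:30 PM

The timeout command ships with Windows 7 / Server 2008 and later; on older systems, ping -n 61 127.0.0.1 >nul is the usual one-minute substitute.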

    Read the article

  • Restart Fibre channel controller after blade bootup IBM HS bladecentre

    - by Spence
I have a remote system that needs to resume on startup. If the system is simply powered on, the blades boot before the SAN is online, and at that point the only thing you can do is restart the systems. Is it possible to restart the fibre channel controller? That way I could have a system restart the controller after boot, connect to the SAN and then restart all servers requiring SAN information. Please note that I'm not a sysadmin, just shooting for ideas to get a clean startup to work; apologies if my terminology is wrong.

    Read the article

  • Add server 2008 to 2003 domain schema upgrade failed

    - by Ken
I'm trying to add a Server 2008 R2 server to an existing 2003 domain (upgraded to the 2003 functional level). I've followed the steps from Microsoft, which are clarified by this post: 2003 DC AD upgrade to 2008 on second server migration plan. While running adprep /forestprep I lost my connection and wasn't able to resume or remote-control that session, so I couldn't see the end result of the command. Rerunning adprep /forestprep indicates that the process has already been completed successfully. After finishing the rest of the steps (/domainprep ... and /gpprep, etc.), the 2008 server won't join. The error message is the same: "you need to run forestprep first". So the situation I'm in is that I can't rerun /forestprep, but my registry key still reads schemaVer=30. Should I have staged forest upgrades? Any ideas how to get my schema version to 44 at this point?
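As a hedged first check, the schema version the forest actually carries can be read straight out of Active Directory rather than from a local registry key; substitute your own domain components:

dsquery * "cn=schema,cn=configuration,dc=example,dc=com" -scope base -attr objectVersion

If some DCs still report 30 (the Server 2003 value), the /forestprep changes may simply not have replicated everywhere, which would square with the join error. One caveat worth noting: 2008 R2's adprep is usually described as taking the schema to objectVersion 47, with 44 being plain Server 2008, so 44 may not even be the right target here.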

    Read the article

< Previous Page | 42 43 44 45 46 47 48 49 50 51 52 53  | Next Page >