Search Results

Search found 20857 results on 835 pages for 'technical support'.


  • Unlocking Productivity

    - by Michael Snow
    Unlocking Productivity in Life Sciences with Consolidated Content Management, by Joe Golemba, Vice President, Product Management, Oracle WebCenter

    As life sciences organizations look to become more operationally efficient, the ability to effectively leverage information is a competitive advantage. Whether data mining at the drug discovery phase or prepping the sales team before a product launch, content management can play a key role in developing, organizing, and disseminating vital information. The goal of content management is relatively straightforward: put the information that people need where they can find it. A number of issues complicate this: information sits in many different systems, each of those systems has its own security, and the information in those systems exists in many different formats. Identifying and extracting pertinent information from mountains of far-flung data is no simple job, but the alternative—wasted effort or even regulatory compliance issues—is worse. An integrated information architecture can enable health sciences organizations to make better decisions, accelerate clinical operations, and be more competitive.

    Unstructured data matters

    Often when we think of drug development data, we think of structured data that fits neatly into one or more research databases. But structured data is often directly supported by unstructured data such as experimental protocols, reaction conditions, lot numbers, run times, analyses, and research notes. As life sciences companies seek integrated views of data, they typically find diverse islands of data that seemingly have no relationship to other data in the organization. Information like sales reports or call center reports can be locked into siloed systems, unavailable to the discovery process. Additionally, in the increasingly networked clinical environment, Web pages, instant messages, videos, scientific imaging, sales and marketing data, collaborative workspaces, and predictive modeling data are likely to be present within an organization, and each source potentially possesses information that can help to better inform specific efforts. Historically, content management solutions that had 21 CFR Part 11 capabilities—electronic records and signatures—were focused mainly on content-enabling manufacturing-related processes. Today, life sciences companies have many standalone repositories, each requiring different skills, service level agreements, and vendor support costs to manage. With the amount of content doubling every three to six months, companies have recognized the need to manage unstructured content from the beginning, in order to increase employee productivity and operational efficiency. Using scalable and secure enterprise content management (ECM) solutions, organizations can better manage their unstructured content. These solutions can also be integrated with enterprise resource planning (ERP) systems or research systems, making content available immediately, in the context of the application and within the flow of the employee's typical business activity. Administrative safeguards—such as content de-duplication—can also be applied within ECM systems, so documents are never recreated, eliminating redundant effort, ensuring one source of truth, and maintaining content standards in the organization.

    Putting it in context

    Consolidating structured and unstructured information in a single system can greatly simplify access to relevant information when it is needed through contextual search.
    Using contextual filters, results can include therapeutic area, position in the value chain, semantic commonalities, technology-specific factors, specific researchers involved, or potential business impact. The use of taxonomies is essential to organizing information and enabling contextual searches. Taxonomy solutions are composed of a hierarchical tree that defines the relationship between different life science terms. When overlaid with additional indexing related to research and/or business processes, it becomes possible to effectively narrow down the amount of data that is returned during searches, as well as prioritize results based on specific criteria and/or prior search history. Thus, search results are more accurate and relevant to an employee's day-to-day work. For example, a search for the word "tissue" by a lab researcher would return significantly different results than a search for the same word performed by someone in procurement. Of course, diverse data repositories, combined with the immense amounts of data present in an organization, necessitate that the data elements be regularly indexed and cached beforehand to enable reasonable search response times. In its simplest form, indexing of a single, consolidated data warehouse can be expected to be a relatively straightforward effort. However, organizations require the ability to index multiple data repositories, enabling a single search to reference multiple data sources and provide an integrated results listing.

    Security and compliance

    Beyond yielding efficiencies and supporting new insight, an enterprise search environment can support important security considerations as well as compliance initiatives. For example, these systems enable organizations to retain the relevance and the security of the indexed systems, so users can only see the results to which they are granted access. This is especially important as life sciences companies are working in an increasingly networked environment and need to provide secure, role-based access to information across multiple partners. Although not officially required by the 21 CFR Part 11 regulation, the U.S. Food and Drug Administration has begun to extend the type of content considered when performing relevant audits and discoveries. Having an ECM infrastructure that provides centralized management of all content enterprise-wide—with the ability to consistently apply records and retention policies along with the appropriate controls, validations, audit trails, and electronic signatures—is becoming increasingly critical for life sciences companies.

    Making the move

    Creating an enterprise-wide ECM environment requires moving large amounts of content into a single enterprise repository, a daunting and risk-laden initiative. The first key is to focus on data taxonomy, allowing content to be mapped across systems. The second is to take advantage of new tools which can dramatically speed up and reduce the cost of the data migration process through automation. Additionally, content need not be frozen while it is migrated, enabling productivity throughout the process. The ability to effectively leverage information into success has been gaining importance in the life sciences industry for years.
    The rapid adoption of enterprise content management, both in operational processes as well as in scientific management, is a clear indicator that companies are looking to use all available data to be better informed, improve decision making, minimize risk, and reduce time to market, in order to maintain profitability and be more competitive. As more and more varieties and sources of information are brought under the strategic management umbrella, the ability to divine knowledge from the vast pool of information is increasingly difficult. Simple search engines and basic content management are increasingly unable to effectively extract the right information from the mountains of data available. By bringing these tools into context and integrating them with business processes and applications, we can effectively focus on the right decisions that make our organizations more profitable.

    More Information

    Oracle will be exhibiting at DIA 2012 in Philadelphia on June 25-27. Stop by our booth (#2825) to learn more about the advantages of a centralized ECM strategy and see the Oracle WebCenter Content solution, our 21 CFR Part 11 compliant content management platform.
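
    To make the idea of taxonomy-driven contextual filtering concrete, here is a minimal C# sketch. The record shape, field names, and data are invented for illustration and do not reflect any particular ECM product's API; it simply shows how taxonomy facets narrow the same full-text hits differently for different searchers:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        // Invented document shape: a full-text hit tagged with taxonomy facets.
        record Doc(string Title, string TherapeuticArea, string Role);

        class ContextualSearchDemo
        {
            // Narrow full-text hits by the searcher's context, so "tissue" returns
            // different results for a lab researcher than for procurement.
            static IEnumerable<Doc> Filter(IEnumerable<Doc> hits, string area, string role) =>
                hits.Where(d => d.TherapeuticArea == area && d.Role == role);

            static void Main()
            {
                var hits = new List<Doc>
                {
                    new("Tissue sample preparation protocol", "Oncology", "Research"),
                    new("Tissue supplier price list", "Oncology", "Procurement"),
                };

                foreach (var doc in Filter(hits, "Oncology", "Research"))
                    Console.WriteLine(doc.Title); // prints only the research document
            }
        }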

    Read the article

  • Corsair Hackers Reboot

    It wasn't easy for me to attend, but it was absolutely worth going. The Linux User Group of Mauritius (LUGM) organised another get-together for open source enthusiasts here on the island. The name "Corsair Hackers Reboot" may sound strange, but it stands for a positive cause: "Corsair Hackers Reboot Event: A collaborative activity involving LUGM, UoM Computer Club, Fortune Way Shopping Mall and several geeks from around the island, striving to put FOSS into homes & offices. The public is invited to discover and explore Free Software & Open Source." And it was a good opportunity for me and the kids to visit the east coast of Mauritius, too.

    Perfect timing

    It couldn't have been better... Why? Well, for two important reasons (in terms of IT):

    - End of support for Microsoft Windows XP - 08.04.2014
    - Release of Ubuntu 14.04 Long Term Support - 17.04.2014

    Funnily enough, those two IT dates weren't the initial reasons; we only put them together during the weeks of preparation. That made it all the more fitting to promote the use of Linux and open source software in general to a broader audience.

    Getting there

    Thanks to the new motorway M3 and all the additional road work completed recently, it was very simple to get across the island in a quick and relaxed manner. Compared to my trips in the early days of living in Mauritius (riding on a scooter), it was very smooth, and in less than an hour we hit Centrale de Flacq. Of course, being in the city doesn't necessarily mean that one has arrived at the destination, but thanks to modern technology I had a quick look at Google Maps, and we finally managed to get a parking spot behind the huge bus terminal in Flacq. From there it was just a short walk to Fortune Way. The children were trying to count the number of buses... lots and lots of buses - really impressive, actually.

    What was presented?

    There were different areas set up. Right at the entrance, one's attention was drawn directly towards the elevated hackers' stage. Like rock stars performing their gig, there was a bunch of computers, laptops and networking equipment providing the right working conditions for the coding/programming challenges on the one hand, and for the pen-testing and system-hacking competition on the other. Personally, I was very impressed that Nitin took care of the pen-testing competition. He started with Linux in general, and Kali Linux specifically, barely a year ago. Seeing his personal development from absolute newbie to a decent Linux system administrator within such a short period of time is really impressive. His passion for open source software now earns him a living.

    Next, clockwise, was the Kids' Corner with face-painting as the main attraction. Additionally, there were numerous paper printouts to colour, plus a decent workstation with the educational suite GCompris. Of course, my little ones were into that. They have known GCompris for a while, as they are allowed to use it on an IGEL thin client terminal here at home. To simplify my life, I set up GCompris as a full-screen guest session on the server, so they can pass the login screen without any further obstacles. And because it's a thin client hooked up to an XDMCP remote session, I don't have to worry about the hardware on their desk either. The next section was the main attraction of the event:

    BYOD - Bring Your Own Device

    Compared to the usual context of BYOD, the corsairs had a completely different intention. Here, you could bring your own laptop, and a team of knowledgeable experts - read: geeks and so on - offered to fully convert your system to any Linux distribution of your choice. And even though I came later, I was told that the USB pen drives had been in permanent use: from being prepared via the dd command, to launching LiveCD sessions, to finally installing a fresh Linux system on bare metal. Interestingly, I did a similar job a couple of months ago when upgrading an existing Windows XP system to Xubuntu 13.10. So far, the owner is very happy and enjoys her system almost every evening for shopping online, checking mails, and reading the latest news from the Anime world. Back at the Hackers event, Ish told me that they managed approximately 20 conversions during the day. Furthermore, Ajay and others gladly assisted visitors with some tricky issues, and by the end of the day you could call it a success. While I was around, an elderly visitor got a full-fledged system conversion to a Linux system running completely in French.

    A little closer to the centre, it was Yasir's turn to demonstrate his Arduino hardware, which he had hooked up to an experimental electrical circuit board connected to an LCD matrix display. That's the real spirit of hacking, and he made some minor adjustments on the fly while demoing the system. Also very interesting: there was a thermal sensor around. Personally, I think that platforms like the Arduino, as well as the Raspberry Pi, have great potential at a very affordable price to bring a better understanding of electronics as well as computer programming to a broader audience. It would be great to see more of those experiments during future activities.

    And last but not least, there were a small number of vendors. Amongst them was Emtel - once again the sponsor of the general internet connectivity - and another hardware supplier from the Riche Terre shopping mall. They had a good collection of Android-related gimmicks, like an autonomous webcam that can convert any TV with an HDMI connector into an online video chat system, given WiFi. It's actually kind of awesome to have a Skype or Google Hangout video session on the big screen rather than on the laptop.

    Some pictures of the event

    - LUGM: Great conversations on Linux, open source and free software during the Corsair Hackers Reboot
    - LUGM: Educational workstation running the GCompris suite attracted the youngest attendees of the day. Of course, face painting had to be done prior to hacking...
    - LUGM: Nadim demoing some Linux specifics to interested visitors. Everyone was pretty busy during the whole day
    - LUGM: The hacking competition, here pen-testing a wireless connection and access point between multiple machines
    - LUGM: Well-prepared workstations to be able to 'upgrade' visitors' machines to any Linux operating system

    Final thoughts

    Gratefully, during the preparations for the event I was invited to leave some comments or suggestions, and the team at LUGM did a great job. The outdoor banner was an eye-catcher, the various flyers and posters for the event were clearly written, and as far as I understood from the quick chats I had with Ish, Nadim, Nitin, Ajay, and of course others, everyone was very happy with how the event was executed. Great job, LUGM! And I'm already looking forward to the next Corsair Hackers Reboot event... Crossing fingers: very soon, and hopefully this year again :)

    Update: In the media

    The event was announced in the local media, too.
L'Express: Salon informatique: Hacking Challenge à Flacq

    Read the article

  • PostSharp, Obfuscation, and IL

    - by Simon Cooper
    Aspect-oriented programming (AOP) is a relatively new programming paradigm. Originating at Xerox PARC in 1994, the paradigm was first made available for general-purpose development as an extension to Java in 2001. From there, it has quickly been adapted for use in all the common languages used today. In the .NET world, one of the primary AOP toolkits is PostSharp.

    Attributes and AOP

    Normally, attributes in .NET are entirely a metadata construct. Apart from a few special attributes in the .NET framework, they have no effect whatsoever on how a class or method executes within the CLR. Only by using reflection at runtime can you access any attributes declared on a type or type member. PostSharp changes this. By declaring a custom attribute that derives from PostSharp.Aspects.Aspect, applying it to types and type members, and running the resulting assembly through the PostSharp postprocessor, you can essentially declare 'clever' attributes that change the behaviour of whatever the aspect has been applied to at runtime. A simple example of this is logging. By declaring a TraceAttribute that derives from OnMethodBoundaryAspect, you can automatically log when a method has been executed:

        public class TraceAttribute : PostSharp.Aspects.OnMethodBoundaryAspect
        {
            public override void OnEntry(MethodExecutionArgs args)
            {
                MethodBase method = args.Method;
                System.Diagnostics.Trace.WriteLine(
                    String.Format("Entering {0}.{1}.",
                        method.DeclaringType.FullName, method.Name));
            }

            public override void OnExit(MethodExecutionArgs args)
            {
                MethodBase method = args.Method;
                System.Diagnostics.Trace.WriteLine(
                    String.Format("Leaving {0}.{1}.",
                        method.DeclaringType.FullName, method.Name));
            }
        }

        [Trace]
        public void MethodToLog() { ... }

    Now, whenever MethodToLog is executed, the aspect will automatically log entry and exit, without having to add the logging code to MethodToLog itself.

    PostSharp Performance

    Now this does introduce a performance overhead - as you can see, the aspect allows access to the MethodBase of the method the aspect has been applied to. If you were limited to C#, you would be forced to retrieve each MethodBase instance using Type.GetMethod(), matching on the method name and signature. This is slow. Fortunately, PostSharp is not limited to C#. It can use any instruction available in IL. And in IL, you can do some very neat things.

    Ldtoken

    C# allows you to get the Type object corresponding to a specific type name using the typeof operator:

        Type t = typeof(Random);

    The C# compiler compiles this operator to the following IL:

        ldtoken [mscorlib]System.Random
        call class [mscorlib]System.Type
            [mscorlib]System.Type::GetTypeFromHandle(
                valuetype [mscorlib]System.RuntimeTypeHandle)

    The ldtoken instruction obtains a special handle to a type called a RuntimeTypeHandle, and from that, the Type object can be obtained using GetTypeFromHandle. These are both relatively fast operations - no string lookup is required, only direct assembly and CLR constructs are used.
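
    Both lookup styles can be seen from C#. Here is a minimal sketch (mine, not from the original post) contrasting the handle-based path the compiler emits for typeof with a string-based lookup:

        using System;

        class TypeLookupDemo
        {
            static void Main()
            {
                // Handle-based: compiles to ldtoken + GetTypeFromHandle. No string lookup.
                Type fast = typeof(Random);

                // String-based: the runtime has to parse the name and search the
                // loaded metadata. Slow by comparison.
                Type slow = Type.GetType("System.Random");

                Console.WriteLine(fast == slow); // True
            }
        }

    The string-based overloads have to search metadata by name, which is exactly the cost PostSharp wants to avoid.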
    However, a little-known feature is that ldtoken is not just limited to types; it can also get information on methods and fields, encapsulated in a RuntimeMethodHandle or RuntimeFieldHandle:

        // get a MethodBase for String.EndsWith(string)
        ldtoken method instance bool [mscorlib]System.String::EndsWith(string)
        call class [mscorlib]System.Reflection.MethodBase
            [mscorlib]System.Reflection.MethodBase::GetMethodFromHandle(
                valuetype [mscorlib]System.RuntimeMethodHandle)

        // get a FieldInfo for the String.Empty field
        ldtoken field string [mscorlib]System.String::Empty
        call class [mscorlib]System.Reflection.FieldInfo
            [mscorlib]System.Reflection.FieldInfo::GetFieldFromHandle(
                valuetype [mscorlib]System.RuntimeFieldHandle)

    These usages of ldtoken aren't usable from C# or VB, and aren't likely to be added anytime soon (Eric Lippert's done a blog post on the possibility of adding infoof, methodof or fieldof operators to C#). However, PostSharp deals directly with IL, and so can use ldtoken to get MethodBase objects quickly and cheaply, without having to resort to string lookups.

    The kicker

    However, there are problems. Because ldtoken for methods or fields isn't accessible from C# or VB, it hasn't been as well-tested as ldtoken for types. This has resulted in various obscure bugs in most versions of the CLR when dealing with ldtoken and methods - specifically, generic methods and methods of generic types. This means that PostSharp was behaving incorrectly, or just plain crashing, when aspects were applied to methods that were generic in some way. So, PostSharp has to work around this. Without using the metadata tokens directly, the only way to get the MethodBase of generic methods is to use reflection: Type.GetMethod(), passing in the method name as a string along with information on the signature. Now, this works fine. It's slower than using ldtoken directly, but it works, and it only has to be done for generic methods. Unfortunately, this poses problems when the assembly is obfuscated.

    PostSharp and Obfuscation

    When using ldtoken, obfuscators don't affect how PostSharp operates. Because the ldtoken instruction directly references the type, method or field within the assembly, it is unaffected if the name of the object is changed by an obfuscator. However, the indirect loading used for generic methods was breaking, because it uses the name the method had when the assembly was put through the PostSharp postprocessor to look up the MethodBase at runtime. If the name then changes, PostSharp can't find it anymore, and the assembly breaks. So, PostSharp needs to know about any changes an obfuscator makes to an assembly. The way PostSharp does this is by adding another layer of indirection. When PostSharp obfuscation support is enabled, it includes an extra 'name table' resource in the assembly, consisting of a series of method & type names. When PostSharp needs to look up a method using reflection, instead of encoding the method name directly, it looks up the method name at a fixed offset inside that name table:

        MethodBase genericMethod =
            typeof(ContainingClass).GetMethod(GetNameAtIndex(22));

        PostSharp.NameTable resource:
        ...
        20: get_Prop1
        21: set_Prop1
        22: DoFoo
        23: GetWibble

    When the assembly is later processed by an obfuscator, the obfuscator can replace all the method and type names within the name table with their new names.
    That way, the reflection lookups performed by PostSharp will now use the new names, and everything will work as expected:

        MethodBase genericMethod =
            typeof(#kGy).GetMethod(GetNameAtIndex(22));

        PostSharp.NameTable resource:
        ...
        20: #kkA
        21: #zAb
        22: #EF5a
        23: #2tg

    As you can see, this requires direct support by an obfuscator in order to perform these rewrites. Dotfuscator supports it, and now, starting with SmartAssembly 6.6.4, SmartAssembly does too. So, a relatively simple solution to a tricky problem, with some CLR bugs thrown in for good measure. You don't see those every day!
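
    GetNameAtIndex in the snippets above is shorthand from the post, not a public PostSharp API. As a rough illustration of how such an indirection could work, here is a hypothetical C# sketch that reads a line-per-name embedded resource called "PostSharp.NameTable"; the real resource format is an internal PostSharp detail:

        using System;
        using System.Collections.Generic;
        using System.IO;
        using System.Reflection;

        static class NameTable
        {
            static readonly List<string> Names = Load();

            public static string GetNameAtIndex(int index) => Names[index];

            // Hypothetical format: one name per line, index = line number.
            static List<string> Load()
            {
                var names = new List<string>();
                using (Stream stream = Assembly.GetExecutingAssembly()
                    .GetManifestResourceStream("PostSharp.NameTable"))
                using (var reader = new StreamReader(stream))
                {
                    string line;
                    while ((line = reader.ReadLine()) != null)
                        names.Add(line);
                }
                return names;
            }
        }

    Because the obfuscator rewrites the names inside the resource but leaves the indices alone, GetNameAtIndex(22) keeps resolving to whatever DoFoo was renamed to.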

    Read the article

  • 7 Good Reasons to Upgrade E-Business Suite to the cloud

    - by Lisa Schwartz
    As promised, here is blog Part 2: Why Upgrade to Oracle E-Business Suite 12 in the cloud?

    7 Good Reasons to Upgrade to E-Business Suite 12 in the Cloud:

    1) Take advantage of new and improved features: from global sub-ledger accounting to mobile access for supply chain management to built-in extensions for information search and discovery. If you haven't checked out the latest features yet, there are over 1000 EBS 12 enhancements.
    2) Plan now to address any ongoing Oracle Support considerations and regulatory compliance requirements. EBS Release 11 support is ending soon. Based upon that information alone, you should have an EBS upgrade strategy and planning well underway.
    3) Customizations got you worried? Expedite your next Oracle E-Business Suite upgrade - have Oracle identify all customizations, reduce unneeded customizations (EBS 12 has many of your customizations built in), and during the upgrade keep all necessary customizations to run your business.
    4) Migrating EBS to the cloud allows parallel migration and testing, so there are no extra hardware purchases for the testing and upgrade, and business disruption is minimized. And, by moving to the cloud, future upgrades are smoother and based on your own timeline.
    5) Oracle experts will upgrade and run your EBS applications for you in the cloud. Free your IT resources to develop new services and work on projects that are critical to business innovation and competitiveness. Your IT resources will not be inundated with upgrade tasks!
    6) Reallocate precious IT dollars to other projects and eliminate CapEx costs.
    7) Oracle minimizes business risk by having enterprise-class cloud services under stringent SLAs designed to run your business applications for you, such as:
       a. Enterprise grade infrastructure
       b. World-class security and identity management
       c. Best practices in regulatory compliance: from classified federal gov't standards, to healthcare HIPAA standards, to meeting Financial Services requirements (PCI DSS)

    Next Step: To help you upgrade and get to the cloud in the shortest period of time, Oracle has a program called Oracle Upgrade Factory for Oracle E-Business Suite 12. It offers a unique approach, seamlessly bundling Managed Cloud Services and Oracle Consulting Services together for an entire Oracle E-Business Suite upgrade and migration to a managed private cloud. Read the Oracle Upgrade Factory Solution Brief here.

    Read the article

  • CodePlex Daily Summary for Thursday, September 06, 2012

    CodePlex Daily Summary for Thursday, September 06, 2012

    Popular Releases

    - menu4web: menu4web 0.4.1 - javascript menu for web sites: This release is for those who believe that global variables are evil. menu4web has been wrapped into the m4w singleton object. Added "Vertical Tabs" example which illustrates object notation.
    - WinRT XAML Toolkit: WinRT XAML Toolkit - 1.2.1: WinRT XAML Toolkit based on the Windows 8 RTM SDK. Download the latest source from the SOURCE CODE page. For the compiled version use NuGet. You can add it to your project in Visual Studio by going to View/Other Windows/Package Manager Console and entering: PM> Install-Package winrtxamltoolkit Features: AsyncUI extensions, Controls and control extensions, Converters, Debugging helpers, Imaging, IO helpers, VisualTree helpers, Samples. Recent changes NOTE: Namespace changes DebugConsol...
    - iPDC - Free Phasor Data Concentrator: iPDC-v1.3.1: iPDC suite version 1.3.1, modifications and bugs fixed (from v1.3.0). New User Manual for iPDC-v1.3.1 available on websites. Bug resolved: PMU Simulator TCP connection error and hung connection for client (PDC). Now PMU Simulator (server) can communicate with more than one PDC (client) over TCP and UDP in parallel. PMU Simulator now sends the exact data frames as specified in the data rate by the user. PMU Simulator data rate has been verified by iPDC database entries and PMU Connection Tes...
    - Microsoft SQL Server Product Samples: Database: AdventureWorks OData Feed: The AdventureWorks OData service exposes resources based on specific SQL views. The SQL views are a limited subset of the AdventureWorks database that results in several consuming scenarios: CompanySales, Documents, ManufacturingInstructions, ProductCatalog, TerritorySalesDrilldown, WorkOrderRouting. How to install the sample: You can consume the AdventureWorks OData feed from http://services.odata.org/AdventureWorksV3/AdventureWorks.svc. You can also consume the AdventureWorks OData fe...
    - Desktop Google Reader: 1.4.6: Sorting feeds alphabetically is now optional (see preferences window)
    - DotNetNuke® Community Edition CMS: 06.02.03: Major Highlights: Fixed issue where mailto: links were not working when sending bulk email. Fixed issue where users did not see friendship relationships (problem is in 6.2, which does not show in the Versions Affected list above). Fixed the issue with cascade deletes in comments in CoreMessaging_Notification. Fixed UI issue when using a date field as a required profile property during user registration. Fixed error when running the product in debug mode. Fixed visibility issue when...
    - Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.65: Fixed null-reference error in the build task constructor.
    - Active Forums for DotNetNuke CMS: Active Forums 5.0.0 RC: RC release of Active Forums 5.0.
    - Droid Explorer: Droid Explorer 0.8.8.7 Beta: Bug in the display icon for apk's, will fix with next release. Added fallback icon if unable to get the image/icon from the Cloud Service. Removed some stale plugins that were either outdated or incomplete. Added handler for *.ab files for restoring backups. Added plugin to create device backups. Backups stored in %USERPROFILE%\Android Backups\%DEVICE_ID%\. Added custom folder icon for the android backups directory. Better error handling for installing an apk. Bug fixes for the Runn...
    - BI System Monitor: v2.1: Data Audits report and supporting SQL, and SSIS package. Environment Overview report enhancements, improving the appearance, addition of data audit finding indicators. Note: SQL 2012 version coming soon.
    - Hidden Capture (HC): Hidden Capture 1.1: Hidden Capture 1.1 by Mohsen E.Dawatgar http://Hidden-Capture.blogfa.com
    - Ext Spec: Ext Spec 0.2.1: Refined examples and improved distribution options.
    - The Visual Guide for Building Team Foundation Server 2012 Environments: Version 1: --
    - Nearforums - ASP.NET MVC forum engine: Nearforums v8.5: Version 8.5 of Nearforums, the ASP.NET MVC Forum Engine. New features include: Built-in search engine using Lucene.NET; Flood control improvements; Notifications improvements: sync option and mail body. View Roadmap for more details. webdeploy package sha1 checksum: 961aff884a9187b6e8a86d68913cdd31f8deaf83
    - WiX Toolset: WiX Toolset v3.6: WiX Toolset v3.6 introduces the Burn bootstrapper/chaining engine and support for Visual Studio 2012 and .NET Framework 4.5. Other minor functionality includes: WixDependencyExtension supports dependency checking among MSI packages. WixFirewallExtension supports more features of Windows Firewall. WixTagExtension supports Software Id Tagging. WixUtilExtension now supports recursive directory deletion. Melt simplifies pure-WiX patching by extracting .msi package content and updating .w...
    - Iveely Search Engine: Iveely Search Engine (0.2.0)
    - GmailDefaultMaker: GmailDefaultMaker 3.0.0.2: Add QQ Mail Bugfix
    - Smart Data Access layer: Smart Data Access Layer Ver 3: In this version, support for executing inline queries is added. Check the Documentation section for details.
    - DotNetNuke® Form and List: 06.00.04: DotNetNuke Form and List 06.00.04. Don't forget to backup your installation before upgrade. Changes in 06.00.04 - Fix: Sql Scripts for 6.003 missed object qualifiers within stored procedures. Fix: added missing resource "cmdCancel.Text" in form.ascx.resx. Changes in 06.00.03 - Fix: MakeThumbnail was broken if the application pool was configured to .Net 4. Change: Data is now stored in nvarchar(max) instead of ntext. Changes in 06.00.02 - The scripts are now compatible with SQL Azure, tested in a ne...
    - Coevery - Free CRM: Coevery 1.0.0.24: Add a sample database, and installation instructions.

    New Projects

    - Any-Service: AnyService is a .net 4.0 Windows service shell. It hosts any windows application in non-gui mode to run as a service.
    - BabyCloudDrives - the multi cloud drive desktop's application: wpf
    - BLACK ORANGE: Download The HPAD TEXT EDITOR and use it Wisely..
    - CodePlex New Release Checker: CodePlex New Release Checker is a small library that makes it easy to add "New Version Available!" functionality to your CodePlex project.
    - Collect
    - CSVManager
    - Exam Project: My Exam Project. Computer Vision, C and OpenCV
    - FTP: Hey guys thanks for checking out my ftp!
    - Haushaltsbuch: 1
    - ModMaker.Lua: ModMaker.Lua is an open source .NET library that parses and executes Lua code.
    - MyJabbr: MyJabbr
    - netduinoscope: Design shield and software to use netduino as oscilloscope
    - NetSurveillance Web Application: Net Surveillance Web Application
    - NiconicoApiHelper
    - OStega: A simple library for encrypting text into a bmp or png image.
    - OURORM: orm
    - TFS Cloud Deployment Toolkit: The TFS Cloud Deployment Toolkit is a set of tools that integrate with TFS 2010 to help manage configuration and deployment to various remote environments.
    - The Visual Guide for Building Team Foundation Server 2012 Environments: A step-by-step guide for building Team Foundation Server 2012 environments that include SharePoint Server 2010, SQL Server 2012, Windows Server 2012 and more!
    - WinRT LineChart: An attempt at creating a usable LineChart for everyone to use in his/her own Windows 8 Apps

    Read the article


  • ODEE Green Field (Windows) Part 4 - Documaker

    - by AndyL-Oracle
    Welcome back! We're nearing completion of our installation of Oracle Documaker Enterprise Edition ("ODEE") in a green field. In my previous post, I covered the installation of SOA Suite for WebLogic. Before that, I covered the installation of WebLogic and the Oracle 11g database - all of which constitute the prerequisites for installing ODEE. Naturally, if your environment already has a WebLogic server and an Oracle database, you can skip all those components and go straight for the heart of the installation of ODEE. The ODEE installation is comprised of two procedures. The first covers the installation proper: running the installer and answering some questions. This lays down the files necessary to install into the tiers (e.g. database schemas, WebLogic domains, etcetera). The second procedure is to deploy the configuration files into the various components (e.g. deploy the database schemas, WebLogic domains, SOA composites, etcetera). I will segment my posts accordingly! Let's get started, shall we?

    1. Unpack the installation files into a temporary directory location. This should extract a zip file. Extract that zip file into the temporary directory location.
    2. Navigate to and execute the installer in Disk1/setup.exe. You may have to allow the program to run if User Account Control is enabled. Once the dialog below is displayed, click Next.
    3. Select your ODEE Home - inside this directory is where all the files will be deployed. For ease of support, I recommend using the default; however, you can put this wherever you want. Click Next.
    4. Select the database type and database connection type - note that the database name should match the value used for the connection type (e.g. if using SID, the name should be IDMAKER; if using ServiceName, the name should be "idmaker.us.oracle.com"). Verify whether or not you want to enable advanced compression. Note: if you are not licensed for the Oracle 11g Advanced Compression option, do not use this option! Terrible, terrible calamities will befall you if you do! Click Next.
    5. Enter the Documaker Admin user name (the default "dmkr_admin" is recommended for support purposes) and set the password. Update the System name and ID (must be unique) if you want/need to - since this is a green field install, you should be able to use the default System ID. The only time you'd change this is if you were, for some reason, installing a new ODEE system into an existing schema that already had a system. Click Next.
    6. Enter the Assembly Line user name (the default "dmkr_asline" is recommended) and set the password. Update the Assembly Line name and ID (must be unique) if you want/need to - it's quite possible that at some point you will create another assembly line, in which case you have several methods of doing so. One is to re-run the installer, and in this case you would pick a different assembly line ID and name. Note: you can set the DB folder if needed (typically you don't - see the ODEE Installation Guide for specifics). Click Next.
    7. Select the appropriate Application Server type - in this case, our green field install is going to use WebLogic - set the username to weblogic (this is required) and specify your chosen password. This credential will be used to access the application server console/control panel. Keep in mind that there are specific criteria on password choices that are required by WebLogic, but are not enforced by the installer (e.g. must contain a number, must be of a certain length, etcetera). Choose a strong password.
    8. Set the connection information for the JMS server. Note that for the 12.3.x version, the installer creates a separate JVM (WebLogic managed server) that hosts the JMS server, whereas prior editions place the JMS server on the AdminServer. You may also specify a separate URL to the JMS server in case you intend to move the JMS resources to a separate/different server (e.g. back to AdminServer). You'll need to provide a login principal and credentials - for simplicity I usually make this the same as the WebLogic domain user; however, this is not a secure practice! Make your JMS principal different from the WebLogic principal and choose a strong password, then click Next.
    9. Specify the Hot Folder(s) (comma-delimited if more than one) - this is the directory or directories monitored by ODEE for jobs to process. Click Next.
    10. If you will be setting up an SMTP server for ODEE to send emails, you may configure the connection details here. The details required are simple: hostname, port, user/password, and the sender's address (e.g. emails will appear to be sent by the address shown here, so if the recipient clicks "reply", this is where it will go). Click Next.
    11. If you will be using Oracle WebCenter:Content (formerly known as Oracle UCM) you can enable this option and set the endpoints/credentials here. If you aren't sure, select False - you can always go back and enable this later. I'm almost 76% certain there will be a post sometime in the future that details how to configure ODEE + WCC:C! Click Next.
    12. If you will be using Oracle UMS for sending MMS/text messages, you can enable it and set the endpoints/credentials here. As with UCM, if you're not sure, don't enable it - you can always set it later. Click Next.
    13. On this screen you can change the endpoints for the Documaker Web Service (DWS), and the endpoints for approval processing in Documaker Interactive. The deployment process for ODEE will create 3 managed WebLogic servers for hosting various Documaker components (JMS, Interactive, DWS, Dashboard, Documaker Administrator, etcetera) and it will set the ports used for each of these services. In this screen you can change these values if you know how you want to deploy these managed servers - but for now we'll just accept the defaults. Click Next.
    14. Verify the installation details and click Install. You can save the installation into a response file if you need to (which might be useful if you want to rerun this installation in an unattended fashion). Allow the installation to progress... Click Next.
    15. You can save the response file if needed (e.g. in case you forgot to save it earlier!). Click Finish.

    That's it, you're done with the initial installation. Have a look around the ODEE_HOME that you just installed (remember we selected c:\oracle\odee_1?) and look at the files that are laid down. Don't change anything just yet! Stay tuned for the next segment, where we complete and verify the installation.

    Read the article

  • Unable to connect to Wireless after installing Ubuntu 12.10

    - by Moulik
    I am using an Asus U56E laptop, and after installing Ubuntu 12.10 alongside Windows 8 I am unable to connect to the wireless network. I have been trying to solve this problem for two weeks without success. Please help. Any answer would be appreciated. Here are some command-line results.

    lspci -v | grep -iA 7 network

    ubuntu@ubuntu:~$ lspci -v | grep -iA 7 network 02:00.0 Network controller: Intel Corporation Centrino Wireless-N + WiMAX 6150 (rev 67) Subsystem: Intel Corporation Centrino Wireless-N + WiMAX 6150 BGN Flags: bus master, fast devsel, latency 0, IRQ 52 Memory at de800000 (64-bit, non-prefetchable) [size=8K] Capabilities: <access denied> Kernel driver in use: iwlwifi Kernel modules: iwlwifi

    lsmod | grep iwlwifi

    ubuntu@ubuntu:~$ lsmod | grep iwlwifi iwlwifi 386826 0 mac80211 539908 1 iwlwifi cfg80211 206566 2 iwlwifi,mac80211

    ubuntu@ubuntu:~$ dmesg | grep iwlwifi [ 57.846261] iwlwifi: Intel(R) Wireless WiFi Link AGN driver for Linux, in-tree: [ 57.846264] iwlwifi: Copyright(c) 2003-2012 Intel Corporation [ 57.846336] iwlwifi 0000:02:00.0: >pci_resource_len = 0x00002000 [ 57.846338] iwlwifi 0000:02:00.0: >pci_resource_base = ffffc90000c7c000 [ 57.846341] iwlwifi 0000:02:00.0: >HW Revision ID = 0x67 [ 57.846438] iwlwifi 0000:02:00.0: >irq 52 for MSI/MSI-X [ 59.558335] iwlwifi 0000:02:00.0: >loaded firmware version 41.28.5.1 build 33926 [ 59.558514] iwlwifi 0000:02:00.0: >CONFIG_IWLWIFI_DEBUG disabled [ 59.558516] iwlwifi 0000:02:00.0: >CONFIG_IWLWIFI_DEBUGFS enabled [ 59.558517] iwlwifi 0000:02:00.0: >CONFIG_IWLWIFI_DEVICE_TRACING enabled [ 59.558519] iwlwifi 0000:02:00.0: >CONFIG_IWLWIFI_DEVICE_TESTMODE enabled [ 59.558520] iwlwifi 0000:02:00.0: >CONFIG_IWLWIFI_P2P disabled [ 59.558522] iwlwifi 0000:02:00.0: >Detected Intel(R) Centrino(R) Wireless-N + WiMAX 6150 BGN, REV=0x84 [ 59.558583] iwlwifi 0000:02:00.0: >L1 Disabled; Enabling L0S [ 59.569083] iwlwifi 0000:02:00.0: >device EEPROM VER=0x557, CALIB=0x6 [ 59.569085] iwlwifi 0000:02:00.0: >Device SKU: 0x150 [ 59.569087] iwlwifi 0000:02:00.0: >Valid Tx ant: 0x1, Valid Rx ant: 0x3 [ 59.569100] iwlwifi 0000:02:00.0: >Tunable channels: 13 802.11bg, 0 802.11a channels [ 70.208469] iwlwifi 0000:02:00.0: >L1 Disabled; Enabling L0S [ 70.208648] iwlwifi 0000:02:00.0: >Radio type=0x1-0x2-0x0 [ 70.366319] iwlwifi 0000:02:00.0: >L1 Disabled; Enabling L0S [ 70.366470] iwlwifi 0000:02:00.0: >Radio type=0x1-0x2-0x0

    sudo lshw -c network

    ubuntu@ubuntu:~$ sudo lshw -c network *-network description: Wireless interface product: Centrino Wireless-N + WiMAX 6150 vendor: Intel Corporation physical id: 0 bus info: pci@0000:02:00.0 logical name: wlan0 version: 67 serial: 40:25:c2:84:99:c4 width: 64 bits clock: 33MHz capabilities: pm msi pciexpress bus_master cap_list ethernet physical wireless configuration: broadcast=yes driver=iwlwifi driverversion=3.5.0-17-generic firmware=41.28.5.1 build 33926 latency=0 link=no multicast=yes wireless=IEEE 802.11bgn resources: irq:52 memory:de800000-de801fff *-network description: Ethernet interface product: AR8151 v2.0 Gigabit Ethernet vendor: Atheros Communications Inc. physical id: 0 bus info: pci@0000:04:00.0 logical name: eth0 version: c0 serial: 54:04:a6:2b:6a:ef capacity: 1Gbit/s width: 64 bits clock: 33MHz capabilities: pm msi pciexpress vpd bus_master cap_list ethernet physical tp 10bt 10bt-fd 100bt 100bt-fd 1000bt-fd autonegotiation configuration: autonegotiation=on broadcast=yes driver=atl1c driverversion=1.0.1.0-NAPI latency=0 link=no multicast=yes port=twisted pair resources: irq:54 memory:dd400000-dd43ffff ioport:a000(size=128)

    ifconfig

    ubuntu@ubuntu:~$ ifconfig eth0 Link encap:Ethernet HWaddr 54:04:a6:2b:6a:ef UP BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B) lo Link encap:Local Loopback inet addr:127.0.0.1 Mask:255.0.0.0 inet6 addr: ::1/128 Scope:Host UP LOOPBACK RUNNING MTU:16436 Metric:1 RX packets:176 errors:0 dropped:0 overruns:0 frame:0 TX packets:176 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:0 RX bytes:14368 (14.3 KB) TX bytes:14368 (14.3 KB) wlan0 Link encap:Ethernet HWaddr 40:25:c2:84:99:c4 UP BROADCAST MULTICAST MTU:1500 Metric:1 RX packets:0 errors:0 dropped:0 overruns:0 frame:0 TX packets:0 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:0 (0.0 B) TX bytes:0 (0.0 B)

    iwconfig

    ubuntu@ubuntu:~$ iwconfig eth0 no wireless extensions. lo no wireless extensions. wlan0 IEEE 802.11bgn ESSID:off/any Mode:Managed Access Point: Not-Associated Tx-Power=15 dBm Retry long limit:7 RTS thr:off Fragment thr:off Power Management:off

    iwlist scan

    ubuntu@ubuntu:~$ iwlist scan eth0 Interface doesn't support scanning. lo Interface doesn't support scanning. wlan0 No scan results

    nm-tool

    ubuntu@ubuntu:~$ nm-tool NetworkManager Tool State: disconnected - Device: eth0 ----------------------------------------------------------------- Type: Wired Driver: atl1c State: unavailable Default: no HW Address: 54:04:A6:2B:6A:EF Capabilities: Carrier Detect: yes Wired Properties Carrier: off - Device: wlan0 ---------------------------------------------------------------- Type: 802.11 WiFi Driver: iwlwifi State: disconnected Default: no HW Address: 40:25:C2:84:99:C4 Capabilities: Wireless Properties WEP Encryption: yes WPA Encryption: yes WPA2 Encryption: yes Wireless Access Points hypeness2: Infra, 00:21:29:DA:08:4F, Freq 2462 MHz, Rate 54 Mb/s, Strength 42 WPA love: Infra, 68:7F:74:17:02:66, Freq 2412 MHz, Rate 54 Mb/s, Strength 19 WPA WPA2 DIRECT-MwSCX-3400Pamela: Infra, 02:15:99:A3:3F:AC, Freq 2412 MHz, Rate 54 Mb/s, Strength 22 WPA2 router: Infra, 1C:AF:F7:D6:76:F3, Freq 2417 MHz, Rate 54 Mb/s, Strength 20 WPA2 wing: Infra, E8:40:F2:34:E4:F7, Freq 2437 MHz, Rate 54 Mb/s, Strength 20 WPA WPA2 132LINKSYS: Infra, 00:1A:70:80:1F:E9, Freq 2437 MHz, Rate 54 Mb/s, Strength 57 WEP VMITTAL: Infra, E0:46:9A:3C:F0:C4, Freq 2412 MHz, Rate 54 Mb/s, Strength 27 WEP HP-Print-10-LaserJet 1025: Infra, 7C:E9:D3:7E:F8:10, Freq 2437 MHz, Rate 54 Mb/s, Strength 59 ACNBB: Infra, 00:26:75:22:A6:2F, Freq 2437 MHz, Rate 54 Mb/s, Strength 20 SATKAIVAL: Infra, 00:18:E7:CE:69:A6, Freq 2412 MHz, Rate 54 Mb/s, Strength 69 WPA WPA2 hypeness: Infra, B8:E6:25:24:C3:B1, Freq 2437 MHz, Rate 54 Mb/s, Strength 54 WPA WPA2 CSNetwork: Infra, BC:14:01:58:C5:88, Freq 2437 MHz, Rate 54 Mb/s, Strength 25 WPA WPA2 tharma: Infra, BC:14:01:E2:06:18, Freq 2412 MHz, Rate 54 Mb/s, Strength 15 WPA WPA2 Active2.4: Infra, 10:6F:3F:0E:F3:8E, Freq 2462 MHz, Rate 54 Mb/s, Strength 17 WPA WPA2 ACNBB: Infra, 00:26:75:58:4E:7A, Freq 2437 MHz, Rate 54 Mb/s, Strength 85 KO: Infra, BC:14:01:2E:AF:A8, Freq 2452 MHz, Rate 54 Mb/s, Strength 22 WPA WPA2 FEAR: Infra, 00:18:4D:C0:BC:58, Freq 2462 MHz, Rate 54 Mb/s, Strength 17 WPA Pamela: Infra, BC:14:01:52:F6:F8, Freq 2412 MHz, Rate 54 Mb/s, Strength 24 WPA WPA2 bvrk2: Infra, 78:CD:8E:7B:3C:79, Freq 2457 MHz, Rate 54 Mb/s, Strength 19 WPA WPA2 BELL030: Infra, D8:6C:E9:17:AF:09, Freq 2462 MHz, Rate 54 Mb/s, Strength 22 WPA2 Desai: Infra, 00:1D:7E:52:FB:C5, Freq 2437 MHz, Rate 54 Mb/s, Strength 14 WEP Sritharan: Infra, BC:14:01:E5:59:78, Freq 2462 MHz, Rate 54 Mb/s, Strength 19 WPA WPA2 PFN: Infra, 00:13:10:8B:CF:45, Freq 2437 MHz, Rate 54 Mb/s, Strength 19 WEP

    rfkill list all

    ubuntu@ubuntu:~$ rfkill list all 0: asus-wlan: Wireless LAN Soft blocked: no Hard blocked: no 1: asus-wimax: WiMAX Soft blocked: yes Hard blocked: no 2: phy0: Wireless LAN Soft blocked: no Hard blocked: no

    So here are some more results:

    sudo modprobe -r iwlwifi

    ubuntu@ubuntu:~$ sudo modprobe -r iwlwifi

    sudo modprobe iwlwifi 11n_disable=1

    ubuntu@ubuntu:~$ sudo modprobe iwlwifi 11n_disable=1

    echo "blacklist asus_wmi" | sudo tee -a /etc/modprobe.d/blacklist.conf

    ubuntu@ubuntu:~$ echo "blacklist asus_wmi" | sudo tee -a /etc/modprobe.d/blacklist.conf blacklist asus_wmi

    echo "options iwlwifi 11n_disable=1" | sudo tee /etc/modprobe.d/iwlwifi.conf

    ubuntu@ubuntu:~$ echo "options iwlwifi 11n_disable=1" | sudo tee /etc/modprobe.d/iwlwifi.conf options iwlwifi 11n_disable=1

    sudo modprobe -rfv iwlwifi

    ubuntu@ubuntu:~$ sudo modprobe -rfv iwlwifi rmmod /lib/modules/3.5.0-17-generic/kernel/drivers/net/wireless/iwlwifi/iwlwifi.ko rmmod /lib/modules/3.5.0-17-generic/kernel/net/mac80211/mac80211.ko rmmod /lib/modules/3.5.0-17-generic/kernel/net/wireless/cfg80211.ko

    sudo modprobe -v iwlwifi

    ubuntu@ubuntu:~$ sudo modprobe -v iwlwifi insmod /lib/modules/3.5.0-17-generic/kernel/net/wireless/cfg80211.ko insmod /lib/modules/3.5.0-17-generic/kernel/net/mac80211/mac80211.ko insmod /lib/modules/3.5.0-17-generic/kernel/drivers/net/wireless/iwlwifi/iwlwifi.ko 11n_disable=1

    Read the article

  • Building an OpenStack Cloud for Solaris Engineering, Part 1

    - by Dave Miner
    One of the signature features of the recently released Solaris 11.2 is the OpenStack cloud computing platform. Over on the Solaris OpenStack blog the development team is publishing lots of details about our version of OpenStack Havana, as well as some tips on specific features, and I highly recommend reading those to get a feel for how we've leveraged Solaris's features to build a top-notch cloud platform. In this and some subsequent posts I'm going to look at it from a different perspective: that of the enterprise administrator deploying an OpenStack cloud. But this won't be just a theoretical perspective. I've spent the past several months putting together a deployment of OpenStack for use by the Solaris engineering organization, and now that it's in production we'll share how we built it and what we've learned so far.

    In the Solaris engineering organization we've long had dedicated lab systems dispersed among our various sites and a home-grown reservation tool for developers to reserve those systems; various teams also have private systems for specific testing purposes. But as a developer, it can still be difficult to find the systems you need, especially since most Solaris changes require testing on both SPARC and x86 systems before they can be integrated. We've added virtual resources over the years as well, in the form of LDOMs and zones (both traditional non-global zones and the new kernel zones). Fundamentally, though, these were all still deployed in the same model: our overworked lab administrators set up pre-configured resources and we then reserve them. Sounds like pretty much every traditional IT shop, right? Which means that there's a lot of opportunity for efficiencies from greater use of virtualization and the self-service style of cloud computing. As we were well into development of OpenStack on Solaris, I was recruited to figure out how we could deploy it to provide both more (and more efficient) development and test resources for the organization and a test environment for Solaris OpenStack.

    At this point, let's acknowledge one fact: deploying OpenStack is hard. It's a very complex piece of software that makes use of sophisticated networking features and runs as a ton of service daemons with myriad configuration files. The web UI, Horizon, often doesn't do a good job of providing detailed errors. Even the command-line clients are not as transparent as you'd like, though at least you can turn on verbose and debug messaging and often get some clues as to what to look for; it helps if you're good at reading JSON structure dumps. I'd already learned all of this doing a single-system Grizzly-on-Linux deployment for the development team to reference when they were getting started, so I at least came to this job with some appreciation for what I was taking on. The good news is that both we and the community have done a lot to make deployment much easier in the last year; probably the easiest approach is to download the OpenStack Unified Archive from OTN to get your hands on a single-system demonstration environment. I highly recommend getting started with something like it to get some understanding of OpenStack before you embark on a more complex deployment. For some situations, it may in fact be all you ever need. If so, you don't need to read the rest of this series of posts!

    In the Solaris engineering case, we need a lot more horsepower than a single-system cloud can provide. We need to support both SPARC and x86 VMs, and we have hundreds of developers, so we want to be able to scale to support thousands of VMs, though we're going to build to that scale over time, not immediately. We also want to be able to test both Solaris 11 updates and a release such as Solaris 12 that's under development, so that we can work out any upgrade issues before release. One thing we don't have is a requirement for extremely high availability, at least at this point. We surely don't want a lot of down time, but we can tolerate scheduled outages and brief (as in an hour or so) unscheduled ones. Thus I didn't need to spend effort on trying to get high availability everywhere.

    The diagram below shows our initial deployment design. We're using six systems, most of which are x86 because we had more of those immediately available. All of those systems reside on a management VLAN and are connected with a two-way link aggregation of 1 Gb links (we don't yet have 10 Gb switching infrastructure in place, but we'll get there). A separate VLAN provides "public" (as in connected to the rest of Oracle's internal network) addresses, while we use VxLANs for the tenant networks. One system is more or less the control node, providing the MySQL database, RabbitMQ, Keystone, and the Nova API and scheduler, as well as the Horizon console. We're curious how this will perform, and I anticipate eventually splitting at least the database off to another node to help simplify upgrades, but at our present scale this works.

    I had a couple of systems with lots of disk space. One of them was already configured as the Automated Installation server for the lab, so it's just providing the Glance image repository for OpenStack. The other node with lots of disks provides the Cinder block storage service; we also have a ZFS Storage Appliance that will help back-end Cinder in the near future, I just haven't had time to get it configured in yet.

    There's a separate system for Neutron, which runs our Elastic Virtual Switch controller and handles the routing and NAT for the guests. We don't have any need for firewalling in this deployment, so we're not doing any. We presently have only two tenants defined: one for the Solaris organization that's funding this cloud, and a separate tenant for other Oracle organizations that would like to try out OpenStack on Solaris. Each tenant has one VxLAN defined initially, but we can of course add more. Right now we have just a single /24 network for the floating IPs; once demand grows to the point where we need more, we'll add them.

    Finally, we have started with just two compute nodes; one is an x86 system, the other is an LDOM on a SPARC T5-2. We'll be adding more when demand reaches the level where we need them, but as we're still ramping up the user base it's less work to manage fewer nodes until then.

    My next post will delve into the details of building this OpenStack cloud's infrastructure, including how we're using various Solaris features such as Automated Installation, IPS packaging, SMF, and Puppet to deploy and manage the nodes. After that we'll get into the specifics of configuring and running OpenStack itself.
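
    Before that, to make the tenant and network setup above concrete, this is roughly what it looks like with the stock Havana command-line clients (a sketch only; the tenant, network, and image names here are hypothetical, and the EVS-backed Neutron on Solaris may differ in its provider-level details):

    keystone tenant-create --name solaris-eng --description "Solaris engineering tenant"
    neutron net-create eng-net --tenant-id <tenant-uuid>
    neutron subnet-create --name eng-subnet eng-net 192.168.100.0/24
    nova boot --image <image-uuid> --flavor 2 --nic net-id=<net-uuid> eng-vm-01

    Each tenant gets its own network and subnet, and guests booted through Nova attach to the tenant's VxLAN and draw their addresses from the subnet's allocation pool.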

    Read the article

  • Advice for a distracted, unhappy, recently graduated programmer? [closed]

    - by Re-Invent
    I graduated 4 months ago. I had offers from a few good places to work at. At the same time I wanted to stick to building a small software business of my own; I still have some ideas with good potential and some half-done projects frozen in my GitHub account. But due to social pressures I chose a job. The pay is great, but I am only half-passionate about it: a small team of smart folks building a useful product, working out contracts across the world. I've started finding it extremely boring. Boring to the extent that 2-3 days a week I skip work altogether. Neither do I spend that time progressing any of my own projects. Yes, I feel stupid at the way I'm wasting time, but I don't understand exactly why it's happening. It's as if all the excitement has been drained. What can I do about it?

    Long version:

    School - I was in third standard. Only students in 6th grade and above had access to the computer labs. I once peeked into the lab through the little door opening. No hard disks; MS DOS on 5 1/2 inch floppies. I asked a senior student to play some sound in BASIC. He used PLAY to compose a tune. Boy! I was so excited, I was jumping from within. Back home, I asked my brother to teach me some programming. We bought a book, "MODERN All About GW-BASIC for Schools & Colleges". The book had everything, right from printing, to taking input, file I/O, game programming, machine-level support, etc. In 6th standard I wrote my first game, a wheel of fortune, rotating the wheel by manipulating the 16-color palette's definition. I got internet access soon and got hooked on the QuickBasic programming community. I made some more games, "007 in Danger" and "Car Crush 2", for submission to the allbasiccode archives. I was extremely excited about all this. My interests then swayed into "hacking" (computer security). I taught myself some perl, found it annoying, and learnt PHP and a bit of SQL. I also taught myself Visual Basic one winter and wrote a pacman clone with DirectX. By the time I was in 10th standard, I had created some evil tools using Visual Basic, PHP and MySQL, and eventually landed an unpaid side job at a government facility, building evil tools for them. It was a dream come true for crackers of that time. And so it was for me, still very excited. Things changed soon, though; the last two years of school were not so great, as I was balancing college prep, work at the govt. facility, and school studies at the same time.

    College - College was the opposite of everything I had wished it to be. I imagined it as a place where I'd spend my 4 years building something awesome. It was rather an epitome of rote learning, attendance, rules, busy schedules, a ban on personal laptops, hardly any hackers around you, and shit like that. We had to take permission to even introduce some cultural/creative activities into our annual schedule. The labs wouldn't be open on weekends because the lab employees had to take their leave. Yes, a horrible place for someone like me. I still managed to pull off a project with a friend over 2 months. We showed it to people high in the academic hierarchy. They were immensely impressed, and we proposed allowing personal computers for students. They made up half-assed reasons and didn't agree. We felt frustrated. And so on. I still managed to teach myself new languages, do new projects of my own, do an internship at the same govt. facility, run a small business for some time, give a talk at a conference I'm passionate about, win game-dev and hacking contests at the most respected colleges, solve a good deal of programming contest problems, etc.

    At the same time I was not content with all these restrictions, the great emphasis on rote learning, and the sheer wastage of time due to college. I never felt I was overdoing it, but now I feel I burnt myself out. During my last days at college, I did an internship at a bigco. While I spent my time building prototypes for a certain LBS, the other interns around me, even a good friend, were just skipping work. I thought that maybe, in a few weeks, he would put some serious effort into the work assigned to him, but all he did was find creative ways to skip work, hide his face from the manager, engage people in talk if they tried to question his progress, and so on. I tried a few times to get him on track, but it seems all he wanted was to "not work hard at all and still reap the fruits". I don't know how others take such people, but I find their vicinity very, very poisonous to one's own motivation and productivity. On top of that, where I come from, HR doesn't give much value to what you have done over the past 4 years. So towards the end of our internship, we were all offered work at the bigco, but the slacker, even after not writing more than 200 lines of code, was made a much better offer. I felt enraged instantly: "Is this how the corporate world treats someone who does fruitful, if not extraordinary, work for them for 6 months?" Yes, I did try to negotiate and debate. The bigcos seem blind due to departmentalization of responsibilities and many layers of management. I decided not to stay in touch with any characters of that depressing play.

    Probably the busy time I had at college, ignoring friends, ignoring fun and squeezing every bit of free time for myself, is also responsible. Probably this is what has drained all my willingness to work for anyone. I find my day job boring; at the same time I wish to keep it for financial reasons. I feel a bit burnt out and unsatisfied, and at the same time I have an urge to quit working for someone else and start finishing my frozen side projects (which may be profitable). Though I haven't got much in savings to support myself with food, office, internet bills, etc., I still have my day job; I just don't find it very interesting, and even though the pay is higher than the slacker's, I don't find money to be a great motivator here. I keep comparing myself to my past self. I wonder how to get rid of this and reboot myself back to the way I was in my school days: excited about it all, tinkering, building, learning new things daily, and NOT BORED?

    Read the article

  • I Clobbered a Leopard with a Window Last Night

    - by D'Arcy Lussier
    I’ve had my 15” Mac Book Pro for a little over a year now, and it’s hands-down the best laptop I’ve ever owned… hardware-wise. And I tried, I really really tried, to like OSX. I even bought Parallels so I could run Windows 7 and all my development tools while still trying to live in an OSX world. But in the end, I missed Windows too much.

    There were just too many shortcomings with OSX that kept me from being productive. For one thing, Office for Mac is *not* Office for Windows. The applications are written by different teams, and Excel on the Mac is just different enough to be painful. The VM experience was adequate, but my MBP would heat up like crazy when running it, and the experience of trying to get Windows apps to interact with an OSX file system was awkward. And I found I was in the VM more than I thought I’d be. iMovie is not as easy to use for simple movie editing as Windows Movie Maker. There’s no free blog-editing software for OSX that’s on par with Windows Live Writer. And really, all I was using OSX for was Twitter (which I can use a Windows client for) and web browsing (also something Windows can provide, obviously).

    So I had to ask myself: why am I forcing myself to use an operating system I don’t like, on a laptop that can support Windows 7? And so I paved my MBP and am happily running Windows 7 on it… and it’s fantastic! All the good stuff with the hardware is still there, with the goodness of Win 7. Happy happy. I did run into some snags doing this though, and that’s really what this blog post is about: things to be aware of if you want to install Win 7 directly on your MBP metal.

    First, Ensure You Have Your Original Mac Install Disk
    This was a warning my buddy Dylan, who’s been running Win 7 on his MBP for a while now, gave me early on. The reason you need that original disk is that the hardware drivers you need are all located there. Apparently you can’t easily download them, so make sure you have them ahead of time.

    Second, Forget BootCamp
    The only reason you need BootCamp is if you still want the option to boot into OSX. If you don’t, then you don’t need BootCamp. In fact, you don’t even need BootCamp to install Win 7. What you *will* need though is a DVD with Win 7 burnt on it. Apple doesn’t support bootable USB drives. Well, actually they do for Mac Book Airs, which don’t come with optical drives… but to get it working you’ll need to edit a system file of BootCamp so your make of MBP is included in an XML document, and even then you *still* are using BootCamp, meaning you’ll be making an OSX partition. So don’t worry about BootCamp; just burn a Windows 7 disc, put it into the DVD drive, and restart your MBP.

    Third, Know The Secret Commands
    So after putting in the Windows 7 DVD and restarting your MBP, you’ll want to hold down the ‘C’ key during boot up. This tells the MBP that it should boot from the DVD drive instead of the hard drive. Interestingly, it appears you don’t have to do this if it’s the Mac OSX install disc (more on that in a second), but regardless: hold down C and Windows will start the install process. Next up is the partition process. You’ll notice that there’s a partition called EFI or something like that. This has to do with the drive format that Apple uses and how they partition their system drives. What I did: I blew it away! At first I didn’t, but I was told I couldn’t install Windows on the remaining space due to the different drive format. Blowing away the EFI partition (and all other partitions) allowed me to continue the Windows install.

    *REMEMBER – No warranty is provided or implied; I'm just telling you what I did and how I got it to work.

    Ok, so now Windows is installed and I’m rebooting. Everything looks good, but I need drivers! So I put in the OSX install DVD and run the BootCamp assistant, which installs all the Windows drivers I need. Fantastic! Oh, I need to restart – no problem. OH NO, PROBLEM! I left the OSX install DVD in the drive, and now the MBP wants to boot from the drive and install OSX! I’m not holding down the C key, what the heck?! Ok, well there must be a way to eject this disc… hmm… no physical button on the side… the eject button doesn’t seem to work on the keyboard… no little pin hole to insert something to force the disc out… well what the…?! It turns out, if you want to eject a disc at boot up, you need (and I kid you not) to plug a mouse into the laptop and hold down the right-click button while it’s booting. This ejected the disc for me. Seriously.

    Finally, Things You Should Be Aware Of
    Once you have Windows up and running, there are a few things you need to be aware of, mainly new keyboard shortcuts. For instance, on the Mac keyboard there is no Home, End, PageUp or PageDown. There’s also no obvious way to do something like select large amounts of text (like you would by holding Shift-Home at the end of a line of text, for instance). So here are some shortcuts you need to know:

    Home – fn + left arrow
    End – fn + right arrow
    Select a line of text as you would with the Home key – Shift + fn + left arrow
    Select a line of text as you would with the End key – Shift + fn + right arrow
    Page Up – fn + up arrow
    Page Down – fn + down arrow

    Also, you’ll notice that the awesome Mac track pad doesn’t respond to taps as clicks. No fear, this is just a setting that needs to be altered in the BootCamp control panel (which controls the Mac hardware-specific settings within Windows; you can access it easily from the system tray icon). One other thing: battery life seems a bit lower than with OSX, but then again I’m also doing more than Twitter or web browsing on this thing now.

    Conclusion
    My laptop runs awesome now that I have Windows 7 on there. It’s obviously up to individual taste, but for me I just didn’t see the benefits of living in an OSX world when everything I needed lived in Windows. And also, I’m finally back on an operating system that doesn’t require me to eject a USB drive before physically removing it! It’s 2012, folks; how has this not been fixed?! D

    Read the article

  • Simplifying Human Capital Management with Mobile Applications

    - by HCM-Oracle
    By Aaron Green

    If you're starting to think 'mobility' is a recurring theme in your reading, you'd be right. For those who haven't started to build organisational capabilities to leverage it, it's fair to say you're late to the party. The good news: better late than never. Research firm eMarketer says the worldwide smartphone audience will total 1.75 billion this year, while communications technology and services provider Ericsson suggests smartphones will triple to 5.6 billion globally by 2019. It should be no surprise: smartphone adoption is reaching the farthest corners of the globe, and the subsequent impact of enterprise applications enabled by these devices is driving business performance improvement and will continue to do so.

    Companies using advanced workforce analytics can add significantly to the bottom line, while impacting customer satisfaction, quality and productivity. It's a statement that makes most business leaders sit forward in their chairs. Achieving these three standards is like sipping The Golden Elixir for the business world. No-one would argue their importance. So what are 'advanced workforce analytics'? Simply put, they're unprecedented access to workforce trends and performance markers. Many are made possible by a mobile world and the enterprise applications that come with it on smart devices. Some refer to it as 'the consumerisation of IT'. As this phenomenon has matured and become more widely appreciated, it has impacted the spectrum of functional units within an enterprise differently, but powerfully. Whether it's sales, HR, marketing, IT, or operations, all have benefited from a more mobile approach. It has been the catalyst for improvement in, and management of, the employee experience, the net result of which is happier customers.

    The obvious benefits, but the lesser realised impact
    Most people understand that mobility allows for greater efficiency and productivity, collaboration and flexibility, but how that translates into business outcomes within the various functional groups is lesser known. In actuality, mobility has helped galvanise partnerships between cross-functional groups within the enterprise. Where in some quarters it was once feared mobility could fragment a workforce, its rallying cry of support is coming from what you might describe as an unlikely source: HR. As the bedrock of an enterprise, it is conceivable HR might contemplate the possible negative impact of a mobile workforce that no longer sits in an office, at the same desks every day. After all, who would know what they were doing or saying? How would they collaborate? It's reasonable to see why HR might have a legitimate claim to try and retain as much 'perceived control' as possible. The reality, however, is that mobility has emancipated human capital and its management. Mobility and enterprise applications are expediting decision making. Google calls it the Zero Moment of Truth, or ZMOT. It enables smoother operation and can contribute to faster growth. From a collaborative perspective, with the growing use of enterprise social media, which in many cases is being driven by HR, workforce planning and the tangible impact of change are much easier to map. This in turn provides a platform from which individuals and teams can thrive. With more agility and the ability to anticipate, staff satisfaction and retention are higher, and real-time feedback is constant. The management team can save time, energy and costs with more accurate data, which is then intelligently applied across the workforce to truly engage with staff, customers and partners. From a human capital management (HCM) perspective, mobility can help you close the loop on true talent management. It can enhance what managers can offer and what employees can provide in return. It can create nested relationships and powerful partnerships.

    IT and HR - partners and stewards of mobility
    One effect of enterprise mobility is an evolution in the nature of the relationship between HR and IT, from one of service provision to partnership. The reason for the dynamic shift is largely the 'bring your own device' (BYOD) movement, which is transitioning to a 'bring your own application' (BYOA) scenario. As enterprise technology has in some ways reverse-engineered its solutions to help manage this situation, the partnership between IT (the functional owner) and HR (the strategic enabler) is deeply entrenched. And it has to be. The CIO and the HR leader are faced with compliance and regulatory issues and concerns around information security and personal privacy on a daily basis, complicated by global reach and varied domestic legislation. There are tens of thousands of new mobile apps entering the market each month and, unlike many consumer applications, which get downloaded but are often never opened again after initial perusal, enterprise applications are being relied upon by functional groups, not least by HR to enhance people management. It requires a systematic approach across all applications in use within the enterprise in order to ensure they're used to best effect.

    No turning back, and no desire to
    With real-time analytics on performance and the ability for immediate feedback, there is no turning back for managers. In my experience with Oracle, our customers' operational efficiency is at record levels. It's clear that, as a result of the combination of individual KPIs and organisational goals, CIOs have been able to give HR leaders the ability to build predictive models that feed into an enterprise organisation's evolving strategy. It also helps them ensure regulatory compliance much more easily. Once an arduous task, compliance is simpler with mobile-enabled automation and quality data. Their world has changed for the better. For the CIO, mobility also assists in optimising performance. While it doesn't come without challenges, mobile-enabled applications and the native experience users have with them mean employees don't need high-level technical expertise to become productive. That reduces the training and engagement required from the IT team, so they can focus on other things that deliver value to the bottom line, all the while lowering the cost of assets and related maintenance work by simplifying processes.

    Rewards of a mobile enterprise outweigh risks
    With mobile tools allowing us to increasingly integrate our personal and professional lives, terms like "office hours" are becoming irrelevant, so work/life balance is a cultural must. Enterprises are expected to offer tools that enable workers to access information from anywhere, at any time, from any device. Employees want simplicity and convenience, but it doesn't stop at private enterprise. This is a societal shift. Governments, which traditionally have been known to be slower to adopt newer technology, are also offering support for local businesses to go mobile. Several state government websites have advice on how to create mobile apps and more. And as recently as last week, the Victorian Minister for Technology Gordon Rich-Phillips unveiled his state government's ICT roadmap for the next two years, which details an increased use of the public cloud, as well as mobile communications and improved access to online data-sets. Tech giants are investing significantly in solutions designed to simplify mobile deployment and enablement. The mobility trend is creating a wave of change in the industry and driving transformation in the enterprise. If you're not on that wave, the business risk continues to rise as your competitiveness drops.

    Aaron is the Vice President of HCM Strategy at Oracle Corporation, where he is responsible for researching and identifying emerging trends in the practice of Human Resources and works to deliver industry-leading technology solutions. His other responsibilities include ownership of Oracle's innovative HCM solutions across JAPAC and enabling organisations to transform and modernise their workforce tools. Follow him on Twitter @aaronjgreen

    Read the article

  • Online ALTER TABLE in MySQL 5.6

    - by Marko Mäkelä
    This is the low-level view of data definition language (DDL) operations in the InnoDB storage engine in MySQL 5.6. John Russell gave a more high-level view in his blog post April 2012 Labs Release – Online DDL Improvements.

    MySQL before the InnoDB Plugin
    Traditionally, the MySQL storage engine interface has taken a minimalistic approach to data definition language. The only natively supported operations were CREATE TABLE, DROP TABLE and RENAME TABLE. Consider the following example:

    CREATE TABLE t(a INT);
    INSERT INTO t VALUES (1),(2),(3);
    CREATE INDEX a ON t(a);
    DROP TABLE t;

    The CREATE INDEX statement would be executed roughly as follows:

    CREATE TABLE temp(a INT, INDEX(a));
    INSERT INTO temp SELECT * FROM t;
    RENAME TABLE t TO temp2;
    RENAME TABLE temp TO t;
    DROP TABLE temp2;

    You could imagine that the database could crash when copying all rows from the original table to the new one. For example, it could run out of file space. Then, on restart, InnoDB would roll back the huge INSERT transaction. To fix things a little, a hack was added to ha_innobase::write_row for committing the transaction every 10,000 rows. Still, it was frustrating that even a simple DROP INDEX would make the table unavailable for modifications for a long time.

    Fast Index Creation in the InnoDB Plugin of MySQL 5.1
    MySQL 5.1 introduced a new interface for CREATE INDEX and DROP INDEX. The old table-copying approach can still be forced by SET old_alter_table=1. This interface is used in MySQL 5.5 and in the InnoDB Plugin for MySQL 5.1. Apart from the ability to do a quick DROP INDEX, the main advantage is that InnoDB will execute a merge-sort algorithm before inserting the index records into each index that is being created. This should speed up the insert into the secondary index B-trees and potentially result in a better B-tree fill factor. The 5.1 ALTER TABLE interface was not perfect. For example, DROP FOREIGN KEY still invoked the table copy. Renaming columns could conflict with InnoDB foreign key constraints. Combining ADD KEY and DROP KEY in ALTER TABLE was problematic and not atomic inside the storage engine.

    The ALTER TABLE interface in MySQL 5.6
    The ALTER TABLE storage engine interface was completely rewritten in MySQL 5.6. Instead of introducing a method call for every conceivable operation, MySQL 5.6 introduced a handful of methods, and data structures that keep track of the requested changes. In MySQL 5.6, an online ALTER TABLE operation can be requested by specifying LOCK=NONE. Also LOCK=SHARED and LOCK=EXCLUSIVE are available. The old-style table copying can be requested by ALGORITHM=COPY; that one will require at least LOCK=SHARED. From the InnoDB point of view, anything that is possible with LOCK=EXCLUSIVE is also possible with LOCK=SHARED. Most ALGORITHM=INPLACE operations inside InnoDB can be executed online (LOCK=NONE). InnoDB will always require an exclusive table lock in two phases of the operation. The execution phases are tied to a number of methods:

    handler::check_if_supported_inplace_alter: Checks if the storage engine can perform all requested operations, and if so, what kind of locking is needed.
    handler::prepare_inplace_alter_table: InnoDB uses this method to set up the data dictionary cache for the upcoming CREATE INDEX operation. We need stubs for the new indexes, so that we can keep track of changes to the table during online index creation. Also, crash recovery would drop any indexes that were incomplete at the time of the crash.
    handler::inplace_alter_table: In InnoDB, this method is used for creating secondary indexes or for rebuilding the table. This is the 'main' phase that can be executed online (with concurrent writes to the table).
    handler::commit_inplace_alter_table: This is where the operation is committed or rolled back. Here, InnoDB would drop any indexes, rename any columns, drop or add foreign keys, and finalize a table rebuild or index creation. It would also discard any logs that were set up for online index creation or table rebuild.

    The prepare and commit phases require an exclusive lock, blocking all access to the table. If MySQL times out while upgrading the table meta-data lock for the commit phase, it will roll back the ALTER TABLE operation. In MySQL 5.6, data definition language operations are still not fully atomic, because the data dictionary is split. Part of it is inside the InnoDB data dictionary tables; part of the information is only available in the *.frm file, which is not covered by any crash recovery log. But there is a single commit phase inside the storage engine.

    Online Secondary Index Creation
    It may occur that an index needs to be created on a new column to speed up queries, but it may be unacceptable to block modifications on the table while creating the index. It turns out that it is conceptually not so hard to support online index creation. All we need are some more execution phases:

    1. Set up a stub for the index, for logging changes.
    2. Scan the table for index records.
    3. Sort the index records.
    4. Bulk load the index records.
    5. Apply the logged changes.
    6. Replace the stub with the actual index.

    Threads that modify the table will log the operations to the logs of each index that is being created. Errors, such as log overflow or uniqueness violations, will only be flagged by the ALTER TABLE thread. The log is conceptually similar to the InnoDB change buffer. The bulk load of index records will bypass record locking. We still generate redo log for writing the index pages. It would suffice to log page allocations only, and to flush the index pages from the buffer pool to the file system upon completion.

    Native ALTER TABLE
    Starting with MySQL 5.6, InnoDB supports most ALTER TABLE operations natively. The notable exceptions are changes to the column type, ADD FOREIGN KEY (except when foreign_key_checks=0), and changes to tables that contain FULLTEXT indexes. The keyword ALGORITHM=INPLACE is somewhat misleading, because certain operations cannot be performed in-place. For example, changing the ROW_FORMAT of a table requires a rebuild. Online operation (LOCK=NONE) is not allowed in the following cases: when adding an AUTO_INCREMENT column, when the table contains FULLTEXT indexes or a hidden FTS_DOC_ID column, or when there are FOREIGN KEY constraints referring to the table with an ON…CASCADE or ON…SET NULL option. The FOREIGN KEY limitations are needed because MySQL does not acquire meta-data locks on the child or parent tables when executing SQL statements. Theoretically, InnoDB could support operations like ADD COLUMN and DROP COLUMN in-place, by lazily converting the table to a newer format. This would require that the data dictionary keep multiple versions of the table definition. For simplicity, we will copy the entire table, even for DROP COLUMN. The bulk copying of the table will bypass record locking and undo logging. For facilitating online operation, a temporary log will be associated with the clustered index of the table. Threads that modify the table will also write the changes to the log. When altering the table, we skip all records that have been marked for deletion. In this way, we can simply discard any undo log records that were not yet purged from the original table. Off-page columns, or BLOBs, are an important consideration. We suspend the purge of delete-marked records if it would free any off-page columns from the old table. This is because the BLOBs can be needed when applying changes from the log. We have special logging for handling the ROLLBACK of an INSERT that inserted new off-page columns, because those columns will be freed at rollback.
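
    To tie the interface discussion back to SQL, here are the new clauses in action (a minimal sketch, reusing the table from the example above; the first two run natively and online, while ADD COLUMN with ALGORITHM=COPY forces the old table-copying path):

    ALTER TABLE t ADD INDEX a (a), ALGORITHM=INPLACE, LOCK=NONE;
    ALTER TABLE t DROP INDEX a, ALGORITHM=INPLACE, LOCK=NONE;
    ALTER TABLE t ADD COLUMN b INT, ALGORITHM=COPY, LOCK=SHARED;

    The first two statements allow concurrent writes to t while the index is built or dropped; the third requires at least a shared lock, so concurrent modifications are blocked for its duration.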

    Read the article

  • I'm a contract developer and I think I'm about to get screwed [closed]

    - by kagaku
    I do contract development on the side. You could say that I'm a contract developer? Considering I've only ever had one client, I'd say that's not exactly the truth; it's more like I took a side job and needed some extra cash. It started out as a "rebuild our website and we'll pay you $10k" type project. Once that was complete (a bit over schedule, but certainly not over budget), the company hired me on as a "long term support" contractor. The contract runs from March of this year through December 31st of this year, 10 months, over which a payment of a set amount is to be made on the 30th of each month. I've been fulfilling my end of the contract on all points: doing server maintenance, application and database changes, doing huge rush changes, and pretty much just going above and beyond. Currently I'm in the middle of developing an iPhone mobile application (PhoneGap-based) which is nearing completion (probably 3-4 weeks from submission).

    It has not been all peaches and flowers, though. Each and every month when my paycheck comes due, there always seems to be an issue of some sort. These issues did not occur during the initial project, only during the support contract. The actual contract states that my check should be mailed out on the 30th of the month. I have received my check on time approximately once ("on time" being within 2-3 days of the 30th). I've received my paycheck as late as the 15th of the next month, over two weeks late. I've put up with it because I need the paycheck. There have been promises upon promises of "we'll send it out on time next time! I promise", only for me to receive it just as late the next month. When I ask about payment they give me a vibe like "why are you only worried about money?"; unfortunately, I don't have the luxury of not worrying about money.

    The last straw was my August payment, which should have been mailed on August 30th. I received it on September 12th. The reason for the delay? "USPS is delaying it, man! We sent it out on the 1st!" is the reason I got. When I finally got the check in the mail, the postage on the envelope was marked September 10th, the date it was run through the postage machine. I've been outright lied to, at this point. I carry on working, because again, I need a paycheck. I orchestrated the move of our application to a new server, developed a bunch of new changes and continued work on the iPhone app. All told, I probably went over my hourly allotment (I'm paid for 40 hours a month; I probably put in at least 50). On Saturday the 1st, I gave the main contact at the company (a company of 3, by the way; this is not some big corporation) a ring and filled him in on the status of my work for the past two weeks. Unusually, I hadn't heard from him since the middle of September. His response was "oh... well, that is nice and uh.. good job. well, we've been talking within the company about things and we've certainly got some decisions ahead of us...", not verbatim, but you get the idea (I hope?). What I got out of this conversation was that the site is not doing very well (which it's not) and they're considering pulling the plug. Crap, this contract is going to end early; there goes Christmas! Fine, that's alright, no problem. I'll get paid for the last month's work and call it a day.

    Unfortunately, I still haven't gotten last month's check, and I'm getting dicked around now. "Oh.. we had problems transferring funds, we'll try and mail it out tomorrow" and "I left a VM with the finance guy, but I can't get ahold of him". So I'm getting the feeling I'm not getting paid for all the work I put in for September. This is obviously breach of contract, and I am pissed. Thinking irrationally, I considered changing all their passwords and holding their stuff hostage. Before I think it through (by the way, I am NOT going to do this; I realized it would probably get me in trouble), I go and try some passwords for our various accounts. Google Apps? Oh, I'm no longer an administrator there. GoDaddy? Whoops, invalid password. Disqus? Nope, invalid password here too. Google AdSense / Analytics? Invalid password. Dedicated server account manager? Invalid password. Now, I have the server's root password; I just built the box last week and hadn't had a chance to send the guy the root password. I wasn't in a rush; I manage the server and they never touch it. Now all of a sudden all the passwords except this one are changed. The writing is on the wall: I am out.

    Here's the conundrum. I have the root password; they do not. If I give them this password, all the leverage I have is gone, out the door and out of my hands. During this argument over why I am not getting paid, the guy sends me an email saying "oh by the way, what's the root username and password for the server?". Considering he knows absolutely nothing, I gave him an "admin" account which really has almost no rights. I still have exclusive access to the server; I just don't know where to go from here.

    I could hold their data hostage, but I'm almost positive this is the wrong thing to do. I'd consider it blackmail, regardless of whether or not I have gotten paid yet. I could "break" something on the server and give them the whole "well, if you were paying me I could fix it!" spiel. This works from a "well, he's not holding their stuff hostage" point of view, but what stops them from hiring someone else to just fix the issue at hand? For all I know, the guy's nephew is a "l33t hax0r" and can figure it out for free. I could give in, document as much as I can, and take him to small claims court. This is breach of contract and I'm not getting paid. I have a case, right?

    Does anyone have any experience with this? What can I do? What are my options? I'm broke; I can't afford a lawyer and I can barely afford not getting this paycheck. My wife doesn't work (I work two jobs so she doesn't have to; we have a 1-year-old) and is already looking at getting a part-time job to cover the bills. Long term we'll be fine, but this has pissed me off beyond belief! Help me out, I'm about to get screwed.

    Read the article

  • Having trouble compiling with GDI+ (VC++ 2008)

    - by user146780
    I just simply include gdiplus.h and get all these errors: Warning 32 warning C4229: anachronism used : modifiers on data are ignored c:\program files\microsoft sdks\windows\v7.0\include\gdiplusflat.h 1133 Warning 38 warning C4229: anachronism used : modifiers on data are ignored c:\program files\microsoft sdks\windows\v7.0\include\gdiplusflat.h 1139 Warning 49 warning C4229: anachronism used : modifiers on data are ignored c:\program files\microsoft sdks\windows\v7.0\include\gdiplusflat.h 1286 Warning 55 warning C4229: anachronism used : modifiers on data are ignored c:\program files\microsoft sdks\windows\v7.0\include\gdiplusflat.h 1292 Warning 61 warning C4229: anachronism used : modifiers on data are ignored c:\program files\microsoft sdks\windows\v7.0\include\gdiplusflat.h 2224 Warning 68 warning C4229: anachronism used : modifiers on data are ignored c:\program files\microsoft sdks\windows\v7.0\include\gdiplusflat.h 2262 Warning 74 warning C4229: anachronism used : modifiers on data are ignored c:\program files\microsoft sdks\windows\v7.0\include\gdiplusflat.h 2310 Warning 82 warning C4229: anachronism used : modifiers on data are ignored c:\program files\microsoft sdks\windows\v7.0\include\gdiplusflat.h 2321 Error 112 fatal error C1003: error count exceeds 100; stopping compilation c:\program files\microsoft sdks\windows\v7.0\include\gdiplusmetafile.h 236 Error 1 error C4430: missing type specifier - int assumed. Note: C++ does not support default-int c:\program files\microsoft sdks\windows\v7.0\include\gdiplusimaging.h 74 Error 7 error C4430: missing type specifier - int assumed. Note: C++ does not support default-int c:\program files\microsoft sdks\windows\v7.0\include\gdiplusimaging.h 280 Error 8 error C4430: missing type specifier - int assumed. 
Note: C++ does not support default-int c:\program files\microsoft sdks\windows\v7.0\include\gdiplusimaging.h 280 Error 94 error C2761: '{ctor}' : member function redeclaration not allowed c:\program files\microsoft sdks\windows\v7.0\include\gdiplusmetafile.h 195 Error 102 error C2761: '{ctor}' : member function redeclaration not allowed c:\program files\microsoft sdks\windows\v7.0\include\gdiplusmetafile.h 212 Error 110 error C2761: '{ctor}' : member function redeclaration not allowed c:\program files\microsoft sdks\windows\v7.0\include\gdiplusmetafile.h 231 Error 21 error C2535: 'Gdiplus::Metafile::Metafile(void)' : member function already defined or declared c:\program files\microsoft sdks\windows\v7.0\include\gdiplusheaders.h 813 Error 23 error C2535: 'Gdiplus::Metafile::Metafile(void)' : member function already defined or declared c:\program files\microsoft sdks\windows\v7.0\include\gdiplusheaders.h 820 Error 25 error C2535: 'Gdiplus::Metafile::Metafile(void)' : member function already defined or declared c:\program files\microsoft sdks\windows\v7.0\include\gdiplusheaders.h 829 Error 27 error C2535: 'Gdiplus::Metafile::Metafile(void)' : member function already defined or declared c:\program files\microsoft sdks\windows\v7.0\include\gdiplusheaders.h 923 Error 16 error C2535: 'Gdiplus::Image::Image(void)' : member function already defined or declared c:\program files\microsoft sdks\windows\v7.0\include\gdiplusheaders.h 471 Error 4 error C2470: 'IImageBytes' : looks like a function definition, but there is no parameter list; skipping apparent body c:\program files\microsoft sdks\windows\v7.0\include\gdiplusimaging.h 74 Error 89 error C2448: 'Gdiplus::Metafile::{ctor}' : function-style initializer appears to be a function definition c:\program files\microsoft sdks\windows\v7.0\include\gdiplusmetafile.h 76 Error 97 error C2447: '{' : missing function header (old-style formal list?) c:\program files\microsoft sdks\windows\v7.0\include\gdiplusmetafile.h 199 Error 105 error C2447: '{' : missing function header (old-style formal list?) 
c:\program files\microsoft sdks\windows\v7.0\include\gdiplusmetafile.h 218 Error 2 error C2440: 'initializing' : cannot convert from 'const char [37]' to 'int' c:\program files\microsoft sdks\windows\v7.0\include\gdiplusimaging.h 74 Error 72 error C2275: 'HDC' : illegal use of this type as an expression c:\program files\microsoft sdks\windows\v7.0\include\gdiplusflat.h 2310 Error 76 error C2275: 'HDC' : illegal use of this type as an expression c:\program files\microsoft sdks\windows\v7.0\include\gdiplusflat.h 2310 Error 80 error C2275: 'HDC' : illegal use of this type as an expression c:\program files\microsoft sdks\windows\v7.0\include\gdiplusflat.h 2321 Error 84 error C2275: 'HDC' : illegal use of this type as an expression c:\program files\microsoft sdks\windows\v7.0\include\gdiplusflat.h 2321 Error 92 error C2275: 'HDC' : illegal use of this type as an expression c:\program files\microsoft sdks\windows\v7.0\include\gdiplusmetafile.h 195 Error 100 error C2275: 'HDC' : illegal use of this type as an expression c:\program files\microsoft sdks\windows\v7.0\include\gdiplusmetafile.h 212 Error 108 error C2275: 'HDC' : illegal use of this type as an expression c:\program files\microsoft sdks\windows\v7.0\include\gdiplusmetafile.h 231 Error 60 error C2275: 'Gdiplus::MetafileHeader' : illegal use of this type as an expression c:\program files\microsoft sdks\windows\v7.0\include\gdiplusflat.h 2224 Error 67 error C2275: 'Gdiplus::GpMetafile' : illegal use of this type as an expression c:\program files\microsoft sdks\windows\v7.0\include\gdiplusflat.h 2262 Error 31 error C2275: 'Gdiplus::GpImage' : illegal use of this type as an expression c:\program files\microsoft sdks\windows\v7.0\include\gdiplusflat.h 1133 Error 37 error C2275: 'Gdiplus::GpImage' : illegal use of this type as an expression c:\program files\microsoft sdks\windows\v7.0\include\gdiplusflat.h 1139 Error 48 error C2275: 'Gdiplus::GpBitmap' : illegal use of this type as an expression c:\program files\microsoft sdks\windows\v7.0\include\gdiplusflat.h 1286 Error 54 error C2275: 'Gdiplus::GpBitmap' : illegal use of this type as an expression c:\program files\microsoft sdks\windows\v7.0\include\gdiplusflat.h 1292 Error 3 error C2146: syntax error : missing ';' before identifier 'IImageBytes' c:\program files\microsoft sdks\windows\v7.0\include\gdiplusimaging.h 74 Error 6 error C2146: syntax error : missing ';' before identifier 'id' c:\program files\microsoft sdks\windows\v7.0\include\gdiplusimaging.h 280 Error 73 error C2146: syntax error : missing ')' before identifier 'referenceHdc' c:\program files\microsoft sdks\windows\v7.0\include\gdiplusflat.h 2310 Error 81 error C2146: syntax error : missing ')' before identifier 'referenceHdc' c:\program files\microsoft sdks\windows\v7.0\include\gdiplusflat.h 2321 Error 93 error C2146: syntax error : missing ')' before identifier 'referenceHdc' c:\program files\microsoft sdks\windows\v7.0\include\gdiplusmetafile.h 195 Error 101 error C2146: syntax error : missing ')' before identifier 'referenceHdc' c:\program files\microsoft sdks\windows\v7.0\include\gdiplusmetafile.h 212 Error 109 error C2146: syntax error : missing ')' before identifier 'referenceHdc' c:\program files\microsoft sdks\windows\v7.0\include\gdiplusmetafile.h 231 Error 96 error C2143: syntax error : missing ';' before '{' c:\program files\microsoft sdks\windows\v7.0\include\gdiplusmetafile.h 199 Error 104 error C2143: syntax error : missing ';' before '{' c:\program files\microsoft sdks\windows\v7.0\include\gdiplusmetafile.h 
218 Error 33 error C2078: too many initializers c:\program files\microsoft sdks\windows\v7.0\include\gdiplusflat.h 1133 Error 39 error C2078: too many initializers c:\program files\microsoft sdks\windows\v7.0\include\gdiplusflat.h 1139 Error 50 error C2078: too many initializers c:\program files\microsoft sdks\windows\v7.0\include\gdiplusflat.h 1286 Error 56 error C2078: too many initializers c:\program files\microsoft sdks\windows\v7.0\include\gdiplusflat.h 1292 Error 62 error C2078: too many initializers c:\program files\microsoft sdks\windows\v7.0\include\gdiplusflat.h 2224 Error 69 error C2078: too many initializers c:\program files\microsoft sdks\windows\v7.0\include\gdiplusflat.h 2262 Error 75 error C2078: too many initializers c:\program files\microsoft sdks\windows\v7.0\include\gdiplusflat.h 2310 Error 83 error C2078: too many initializers c:\program files\microsoft sdks\windows\v7.0\include\gdiplusflat.h 2321 Error 29 error C2065: 'stream' : undeclared identifier c:\program files\microsoft sdks\windows\v7.0\include\gdiplusflat.h 1133 Error 35 error C2065: 'stream' : undeclared identifier c:\program files\microsoft sdks\windows\v7.0\include\gdiplusflat.h 1139 Error 46 error C2065: 'stream' : undeclared identifier c:\program files\microsoft sdks\windows\v7.0\include\gdiplusflat.h 1286 Error 52 error C2065: 'stream' : undeclared identifier c:\program files\microsoft sdks\windows\v7.0\include\gdiplusflat.h 1292 Error 58 error C2065: 'stream' : undeclared identifier c:\program files\microsoft sdks\windows\v7.0\include\gdiplusflat.h 2222 Error 65 error C2065: 'stream' : undeclared identifier c:\program files\microsoft sdks\windows\v7.0\include\gdiplusflat.h 2262 Error 71 error C2065: 'stream' : undeclared identifier c:\program files\microsoft sdks\windows\v7.0\include\gdiplusflat.h 2309 Error 79 error C2065: 'stream' : undeclared identifier c:\program files\microsoft sdks\windows\v7.0\include\gdiplusflat.h 2320 Error 88 error C2065: 'stream' : undeclared identifier c:\program files\microsoft sdks\windows\v7.0\include\gdiplusmetafile.h 75 Error 91 error C2065: 'stream' : undeclared identifier c:\program files\microsoft sdks\windows\v7.0\include\gdiplusmetafile.h 194 Error 99 error C2065: 'stream' : undeclared identifier c:\program files\microsoft sdks\windows\v7.0\include\gdiplusmetafile.h 211 Error 107 error C2065: 'stream' : undeclared identifier c:\program files\microsoft sdks\windows\v7.0\include\gdiplusmetafile.h 230 Error 66 error C2065: 'metafile' : undeclared identifier c:\program files\microsoft sdks\windows\v7.0\include\gdiplusflat.h 2262 Error 28 error C2065: 'IStream' : undeclared identifier c:\program files\microsoft sdks\windows\v7.0\include\gdiplusflat.h 1133 Error 34 error C2065: 'IStream' : undeclared identifier c:\program files\microsoft sdks\windows\v7.0\include\gdiplusflat.h 1139 Error 45 error C2065: 'IStream' : undeclared identifier c:\program files\microsoft sdks\windows\v7.0\include\gdiplusflat.h 1286 Error 51 error C2065: 'IStream' : undeclared identifier c:\program files\microsoft sdks\windows\v7.0\include\gdiplusflat.h 1292 Error 57 error C2065: 'IStream' : undeclared identifier c:\program files\microsoft sdks\windows\v7.0\include\gdiplusflat.h 2222 Error 64 error C2065: 'IStream' : undeclared identifier c:\program files\microsoft sdks\windows\v7.0\include\gdiplusflat.h 2262 Error 70 error C2065: 'IStream' : undeclared identifier c:\program files\microsoft sdks\windows\v7.0\include\gdiplusflat.h 2309 Error 78 error C2065: 'IStream' : undeclared identifier 
c:\program files\microsoft sdks\windows\v7.0\include\gdiplusflat.h 2320 Error 87 error C2065: 'IStream' : undeclared identifier c:\program files\microsoft sdks\windows\v7.0\include\gdiplusmetafile.h 75 Error 90 error C2065: 'IStream' : undeclared identifier c:\program files\microsoft sdks\windows\v7.0\include\gdiplusmetafile.h 194 Error 98 error C2065: 'IStream' : undeclared identifier c:\program files\microsoft sdks\windows\v7.0\include\gdiplusmetafile.h 211 Error 106 error C2065: 'IStream' : undeclared identifier c:\program files\microsoft sdks\windows\v7.0\include\gdiplusmetafile.h 230 Error 30 error C2065: 'image' : undeclared identifier c:\program files\microsoft sdks\windows\v7.0\include\gdiplusflat.h 1133 Error 36 error C2065: 'image' : undeclared identifier c:\program files\microsoft sdks\windows\v7.0\include\gdiplusflat.h 1139 Error 59 error C2065: 'header' : undeclared identifier c:\program files\microsoft sdks\windows\v7.0\include\gdiplusflat.h 2224 Error 47 error C2065: 'bitmap' : undeclared identifier c:\program files\microsoft sdks\windows\v7.0\include\gdiplusflat.h 1286 Error 53 error C2065: 'bitmap' : undeclared identifier c:\program files\microsoft sdks\windows\v7.0\include\gdiplusflat.h 1292 Error 12 error C2061: syntax error : identifier 'PROPID' c:\program files\microsoft sdks\windows\v7.0\include\gdiplusheaders.h 443 Error 13 error C2061: syntax error : identifier 'PROPID' c:\program files\microsoft sdks\windows\v7.0\include\gdiplusheaders.h 444 Error 14 error C2061: syntax error : identifier 'PROPID' c:\program files\microsoft sdks\windows\v7.0\include\gdiplusheaders.h 445 Error 15 error C2061: syntax error : identifier 'PROPID' c:\program files\microsoft sdks\windows\v7.0\include\gdiplusheaders.h 453 Error 41 error C2061: syntax error : identifier 'PROPID' c:\program files\microsoft sdks\windows\v7.0\include\gdiplusflat.h 1244 Error 42 error C2061: syntax error : identifier 'PROPID' c:\program files\microsoft sdks\windows\v7.0\include\gdiplusflat.h 1247 Error 43 error C2061: syntax error : identifier 'PROPID' c:\program files\microsoft sdks\windows\v7.0\include\gdiplusflat.h 1250 Error 44 error C2061: syntax error : identifier 'PROPID' c:\program files\microsoft sdks\windows\v7.0\include\gdiplusflat.h 1262 Error 9 error C2061: syntax error : identifier 'IStream' c:\program files\microsoft sdks\windows\v7.0\include\gdiplusheaders.h 384 Error 10 error C2061: syntax error : identifier 'IStream' c:\program files\microsoft sdks\windows\v7.0\include\gdiplusheaders.h 395 Error 11 error C2061: syntax error : identifier 'IStream' c:\program files\microsoft sdks\windows\v7.0\include\gdiplusheaders.h 405 Error 17 error C2061: syntax error : identifier 'IStream' c:\program files\microsoft sdks\windows\v7.0\include\gdiplusheaders.h 505 Error 18 error C2061: syntax error : identifier 'IStream' c:\program files\microsoft sdks\windows\v7.0\include\gdiplusheaders.h 516 Error 19 error C2061: syntax error : identifier 'IStream' c:\program files\microsoft sdks\windows\v7.0\include\gdiplusheaders.h 758 Error 20 error C2061: syntax error : identifier 'IStream' c:\program files\microsoft sdks\windows\v7.0\include\gdiplusheaders.h 813 Error 22 error C2061: syntax error : identifier 'IStream' c:\program files\microsoft sdks\windows\v7.0\include\gdiplusheaders.h 820 Error 24 error C2061: syntax error : identifier 'IStream' c:\program files\microsoft sdks\windows\v7.0\include\gdiplusheaders.h 829 Error 26 error C2061: syntax error : identifier 'IStream' c:\program files\microsoft 
sdks\windows\v7.0\include\gdiplusheaders.h 855 Error 40 error C2061: syntax error : identifier 'IStream' c:\program files\microsoft sdks\windows\v7.0\include\gdiplusflat.h 1156 Error 63 error C2061: syntax error : identifier 'IStream' c:\program files\microsoft sdks\windows\v7.0\include\gdiplusflat.h 2242 Error 86 error C2061: syntax error : identifier 'byte' c:\program files\microsoft sdks\windows\v7.0\include\gdipluspath.h 133 Error 5 error C2059: syntax error : 'public' c:\program files\microsoft sdks\windows\v7.0\include\gdiplusimaging.h 74 Error 77 error C2059: syntax error : ')' c:\program files\microsoft sdks\windows\v7.0\include\gdiplusflat.h 2316 Error 85 error C2059: syntax error : ')' c:\program files\microsoft sdks\windows\v7.0\include\gdiplusflat.h 2327 Error 95 error C2059: syntax error : ')' c:\program files\microsoft sdks\windows\v7.0\include\gdiplusmetafile.h 198 Error 103 error C2059: syntax error : ')' c:\program files\microsoft sdks\windows\v7.0\include\gdiplusmetafile.h 217 Error 111 error C2059: syntax error : ')' c:\program files\microsoft sdks\windows\v7.0\include\gdiplusmetafile.h 236 I tried updating my sdk to 7.0 but it did not help. I'm not even making any calls to the API. Thanks
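
    The pattern in this list (IStream, PROPID, HDC and friends turning up as undeclared identifiers inside the GDI+ headers) usually means gdiplus.h was included without the Windows headers that define those types, for example from an empty project or with WIN32_LEAN_AND_MEAN defined. A minimal include order that is generally expected to compile cleanly looks like this (a sketch, not tied to this particular project):

    #include <windows.h>  // base Win32 types such as HDC; don't define WIN32_LEAN_AND_MEAN first
    #include <objidl.h>   // IStream and PROPID, which the GDI+ headers use
    #include <gdiplus.h>
    #pragma comment(lib, "gdiplus.lib")  // link against GDI+
    using namespace Gdiplus;

    int main()
    {
        // GDI+ must be initialized before any other call into the API
        GdiplusStartupInput input;
        ULONG_PTR token;
        GdiplusStartup(&token, &input, NULL);
        // ... GDI+ calls go here ...
        GdiplusShutdown(token);
        return 0;
    }

    If WIN32_LEAN_AND_MEAN is defined anywhere before windows.h, the OLE headers are skipped and the same cascade of IStream errors appears, so either drop that define or include objidl.h explicitly as above.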

    Read the article

  • Can't build pyxpcom on OS X 10.6

    - by Gj
    I've been following these instructions at https://developer.mozilla.org/en/Building_PyXPCOM but getting this:

    $ make
    make export
    make[2]: Nothing to be done for `export'.
    make[4]: Nothing to be done for `export'.
    make[4]: Nothing to be done for `export'.
    /opt/local/bin/python2.5 ../../../src/config/nsinstall.py -L /usr/local/pyxpcom/build/xpcom/src -m 644 ../../../src/xpcom/src/PyXPCOM.h ../../dist/include
    make[3]: Nothing to be done for `export'.
    /opt/local/bin/python2.5 ../../../../src/config/nsinstall.py -D ../../../dist/idl
    /opt/local/bin/python2.5 ../../../../src/config/nsinstall.py -D ../../../dist/idl
    make[4]: *** No rule to make target `_xpidlgen/py_test_component.h', needed by `export'. Stop.
    make[3]: *** [export] Error 2
    make[2]: *** [export] Error 2
    make[1]: *** [export] Error 2
    make: *** [default] Error 2

    Any ideas? An interesting anomaly is that despite me setting the PYTHON env variable to Python 2.6, the configure and make both seem to go after the 2.5... Thanks for any advice! PS here's the configure output: $ ../src/configure --with-libxul-sdk=/Users/me/xulrunner-sdk/ loading cache ./config.cache checking host system type... i386-apple-darwin10.3.0 checking target system type... i386-apple-darwin10.3.0 checking build system type... i386-apple-darwin10.3.0 checking for mawk... (cached) gawk checking for perl5... (cached) /opt/local/bin/perl5 checking for gcc... (cached) gcc checking whether the C compiler (gcc ) works... yes checking whether the C compiler (gcc ) is a cross-compiler... no checking whether we are using GNU C... (cached) yes checking whether gcc accepts -g... (cached) yes checking for c++... (cached) c++ checking whether the C++ compiler (c++ ) works... yes checking whether the C++ compiler (c++ ) is a cross-compiler... no checking whether we are using GNU C++... (cached) yes checking whether c++ accepts -g... (cached) yes checking for ranlib... (cached) ranlib checking for as... (cached) /usr/bin/as checking for ar... (cached) ar checking for ld... (cached) ld checking for strip... (cached) strip checking for windres... no checking whether gcc and cc understand -c and -o together... (cached) yes checking how to run the C preprocessor... (cached) gcc -E checking how to run the C++ preprocessor... (cached) c++ -E checking for a BSD compatible install... (cached) /usr/bin/install -c checking whether ln -s works... (cached) yes checking for minimum required perl version >= 5.006... 5.008009 checking for full perl installation... yes checking for /opt/local/bin/python... (cached) /opt/local/bin/python2.5 checking for doxygen... (cached) : checking for whoami... (cached) /usr/bin/whoami checking for autoconf... (cached) /opt/local/bin/autoconf checking for unzip... (cached) /usr/bin/unzip checking for zip... (cached) /usr/bin/zip checking for makedepend... (cached) /opt/local/bin/makedepend checking for xargs... (cached) /usr/bin/xargs checking for pbbuild... (cached) /usr/bin/xcodebuild checking for sdp... (cached) /usr/bin/sdp checking for gmake... (cached) /opt/local/bin/gmake checking for X... (cached) no checking whether the compiler supports -Wno-invalid-offsetof... yes checking whether ld has archive extraction flags... (cached) no checking that static assertion macros used in autoconf tests work... (cached) yes checking for 64-bit OS... yes checking for minimum required Python version >= 2.4... yes checking for -dead_strip option to ld... yes checking for ANSI C header files... (cached) yes checking for working const...
(cached) yes checking for mode_t... (cached) yes checking for off_t... (cached) yes checking for pid_t... (cached) yes checking for size_t... (cached) yes checking for st_blksize in struct stat... (cached) yes checking for siginfo_t... (cached) yes checking for int16_t... (cached) yes checking for int32_t... (cached) yes checking for int64_t... (cached) yes checking for int64... (cached) no checking for uint... (cached) yes checking for uint_t... (cached) no checking for uint16_t... (cached) no checking for uname.domainname... (cached) no checking for uname.__domainname... (cached) no checking for usable char16_t (2 bytes, unsigned)... (cached) no checking for usable wchar_t (2 bytes, unsigned)... (cached) no checking for compiler -fshort-wchar option... (cached) yes checking for visibility(hidden) attribute... (cached) yes checking for visibility(default) attribute... (cached) yes checking for visibility pragma support... (cached) yes checking For gcc visibility bug with class-level attributes (GCC bug 26905)... (cached) yes checking For x86_64 gcc visibility bug with builtins (GCC bug 20297)... (cached) no checking for dirent.h that defines DIR... (cached) yes checking for opendir in -ldir... (cached) no checking for sys/byteorder.h... (cached) no checking for compat.h... (cached) no checking for getopt.h... (cached) yes checking for sys/bitypes.h... (cached) no checking for memory.h... (cached) yes checking for unistd.h... (cached) yes checking for gnu/libc-version.h... (cached) no checking for nl_types.h... (cached) yes checking for malloc.h... (cached) no checking for X11/XKBlib.h... (cached) yes checking for io.h... (cached) no checking for sys/statvfs.h... (cached) yes checking for sys/statfs.h... (cached) no checking for sys/vfs.h... (cached) no checking for sys/mount.h... (cached) yes checking for sys/quota.h... (cached) yes checking for mmintrin.h... (cached) yes checking for new... (cached) yes checking for sys/cdefs.h... (cached) yes checking for gethostbyname_r in -lc_r... (cached) no checking for dladdr... (cached) yes checking for socket in -lsocket... (cached) no checking whether mmap() sees write()s... yes checking whether gcc needs -traditional... (cached) no checking for 8-bit clean memcmp... (cached) yes checking for random... (cached) yes checking for strerror... (cached) yes checking for lchown... (cached) yes checking for fchmod... (cached) yes checking for snprintf... (cached) yes checking for statvfs... (cached) yes checking for memmove... (cached) yes checking for rint... (cached) yes checking for stat64... (cached) yes checking for lstat64... (cached) yes checking for truncate64... (cached) no checking for statvfs64... (cached) no checking for setbuf... (cached) yes checking for isatty... (cached) yes checking for flockfile... (cached) yes checking for getpagesize... (cached) yes checking for localtime_r... (cached) yes checking for strtok_r... (cached) yes checking for wcrtomb... (cached) yes checking for mbrtowc... (cached) yes checking for res_ninit()... (cached) no checking for gnu_get_libc_version()... (cached) no ../src/configure: line 9881: AM_LANGINFO_CODESET: command not found checking for an implementation of va_copy()... (cached) yes checking for an implementation of __va_copy()... (cached) yes checking whether va_lists can be copied by value... (cached) no checking for C++ exceptions flag... (cached) -fno-exceptions checking for gcc 3.0 ABI... (cached) yes checking for C++ "explicit" keyword... (cached) yes checking for C++ "typename" keyword... 
(cached) yes checking for modern C++ template specialization syntax support... (cached) yes checking whether partial template specialization works... (cached) yes checking whether operators must be re-defined for templates derived from templates... (cached) no checking whether we need to cast a derived template to pass as its base class... (cached) no checking whether the compiler can resolve const ambiguities for templates... (cached) yes checking whether the C++ "using" keyword can change access... (cached) yes checking whether the C++ "using" keyword resolves ambiguity... (cached) yes checking for "std::" namespace... (cached) yes checking whether standard template operator!=() is ambiguous... (cached) unambiguous checking for C++ reinterpret_cast... (cached) yes checking for C++ dynamic_cast to void*... (cached) yes checking whether C++ requires implementation of unused virtual methods... (cached) yes checking for trouble comparing to zero near std::operator!=()... (cached) no checking for LC_MESSAGES... (cached) yes checking for tar archiver... checking for gnutar... (cached) gnutar gnutar checking for wget... checking for wget... (cached) wget wget checking for valid optimization flags... yes checking for gcc -pipe support... yes checking whether compiler supports -Wno-long-long... yes checking whether C compiler supports -fprofile-generate... yes checking for correct temporary object destruction order... yes checking for correct overload resolution with const and templates... no Building Python extensions using python-2.5 from /opt/local/Library/Frameworks/Python.framework/Versions/2.5 creating ./config.status creating config/autoconf.mk creating Makefile creating xpcom/Makefile creating xpcom/src/Makefile creating xpcom/src/loader/Makefile creating xpcom/src/module/Makefile creating xpcom/components/Makefile creating xpcom/test/Makefile creating xpcom/test/test_component/Makefile creating dom/Makefile creating dom/src/Makefile creating dom/test/Makefile creating dom/test/pyxultest/Makefile creating dom/nsdom/Makefile creating dom/nsdom/test/Makefile

    Read the article

  • Authoritative sources about Database vs. Flatfile decision

    - by FastAl
    <tldr>looking for a reference to a book or other undeniably authoritative source that gives reasons for when you should choose a database vs. when you should choose other storage methods. I have provided an un-authoritative list of reasons about 2/3 of the way down this post.</tldr>

    I have a situation at my company where a database is being used where it would be better to use another solution (in this case, an auto-generated piece of source code that contains a static lookup table, searched by binary search). Normally, a database would be an OK solution even though the problem does not require one: none of the elements of ACID are needed, as it is read-only data, updated about every 3-5 years (also requiring other sourcecode changes), and it fits in memory and can be keyed into via binary search (a tad faster than a db, but speed is not an issue). The problem is that this code runs on our enterprise server but is shared with several PC platforms (some disconnected, some using a central DB, etc.), and parts of it are managed by multiple programming units, parts by the DBAs, parts even by mathematicians in another department, etc. These hit their own platform's version of their databases (containing their own copy of the static data). What happens is that with every implementation, every little change, something different goes wrong. There are many other issues as well. I can't even use a flatfile, because one mode of running on our enterprise server does not have permission to read files (only databases, and of course its own literal storage, e.g., an in-source table). Of course, other parts of the system use databases in proper, less obscure manners; there is no problem with those parts.

    So why don't we just change it? I don't have the administrative ability to force a change. But I'm affected, because sometimes I have to help fix the problems, but mostly because it causes outages and tons of extra IT time by other programmers, and d*mmit that makes me mad! The reason neither management nor the designers of the system can see the problem is that they propose a solution that won't work: increase communication, implement more safeguards and standards, etc. But every time, in a different part of the already-pared-down but still multi-step processes, a few different diligent, hard-working, top-performing IT personnel make a unique subtle error that causes it to fail, sometimes after the last round of testing! And in general these are not single-person failures, but understandable miscommunications. And communication at our company is actually better than most. People just don't think that's the case because they haven't dug into the matter. However, I have it on very good word from somebody with extensive formal study of sociology and psychology that the relatively small amount of less-than-proper database usage in this gigantic cross-platform, multi-source, multi-language project is bureaucratically un-maintainable. Impossible. No chance. At least with human beings in the loop, and it can't be automated. In addition, the management and developers who could change this, though intelligent and capable, don't understand the rigidity of this 'how humans are' issue, and are not convincible on the matter.

    The reason putting the static data in sourcecode will solve the problem is that, although the solution is less sexy than a database, it would function with no technical drawbacks; and since the sharing of sourcecode already works very well, you basically erase any database-related effort from this section of the project, along with all the drawbacks of it that are causing problems. OK, that's the background, for the curious. I won't be able to convince management that this is an unfixable sociological problem, and that the real solution is coding around these limits of human nature, just as you would code around a bug in a 3rd-party component that you can't change. So what I have to do is exploit the unsuitableness of the database solution, and do it not with logic, but with authority. I am aware of many reasons, and of posts on this site giving reasons, for one over the other; I'm not looking for lists of reasons like these (although you can add a comment if I've missed a doozy):

    WHY USE A DATABASE (instead of a flatfile or other storage)? If you need:
    - Random read / transparent search optimization
    - Advanced / varied / customizable searching and sorting capabilities
    - Transactions / rollback
    - Locks, semaphores
    - Concurrency control / shared users
    - Security
    - 1-many / m-m is easier
    - Easy modification
    - Scalability
    - Load balancing
    - Random updates / inserts / deletes
    - Advanced queries
    - Administrative control of design, etc.
    - SQL / learning curve
    - Debugging / logging
    - Centralized / live backup capabilities
    - Cached queries / develop & cache execution plans
    - Interleaved update/read
    - Referential integrity; avoiding redundant/missing/corrupt/out-of-sync data
    - Reporting (from an OLAP or OLTP db) / turnkey generation tools
    Disadvantages:
    - Important to get right the first time (professional design), but only because it's meant to last
    - Software & hardware cost
    - Usually over a network, so a speed issue (even a local db runs as a separate process, requiring marshalling, network layers, and inter-process communication)
    - Indices and query processing can stand in the way of simple processing (vs. a flatfile)

    WHY USE A FLATFILE? If you only need:
    - Sequential row processing only
    - Limited usage: append only (no reading, no master key/update)
    - Only updating the record you're reading (fixed-length records only)
    - Data too big to fit into memory
    - Local disk / read-ahead network connection
    - Portability / small system
    - Email / cut & paste / storage as a document by a novice; simple format
    - Low design learning curve but high cost later

    WHY USE AN IN-MEMORY TABLE (tables, arrays, etc.)? If you need:
    - Processing a single db/flatfile record that was imported
    - Known size of data
    - Static data, if hardcoding the table
    - Narrow, unchanging use (e.g., one program or proc); this includes a class that will be shared but encapsulates its data manipulation
    - Extreme speed / high transaction frequency
    - Random access, though search is dependent on implementation

    Following are some other posts about the topic:
    http://stackoverflow.com/questions/1499239/database-vs-flat-text-file-what-are-some-technical-reasons-for-choosing-one-over
    http://stackoverflow.com/questions/332825/are-flat-file-databases-any-good
    http://stackoverflow.com/questions/2356851/database-vs-flat-files
    http://stackoverflow.com/questions/514455/databases-vs-plain-text/514530

    What I'd like to know is whether anybody can recommend a hard, authoritative source containing these reasons. I'm looking for a paper book I can buy, or a reputable website with whitepapers about the issue (e.g., Microsoft, IBM), not counting the user-generated content on those sites. This will have a greater chance of eliciting the change I'm looking for: less wasted programmer time, and more reliable programs. Thanks very much for your help. You win a prize for reading such a large post!
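    For context, a sketch of what the in-source alternative described above might look like: an auto-generated, read-only table emitted in sorted order so it can be searched with a standard binary search. The record layout and names here are hypothetical, not taken from the poster's system:

        // Hypothetical shape of the generated source file. The generator emits
        // the rows sorted by key, so lookups can use std::lower_bound.
        #include <algorithm>
        #include <cstring>

        struct Entry {
            const char* key;    // sorted ascending in strcmp order
            double      value;
        };

        static const Entry kTable[] = {
            { "AK", 0.00 },
            { "AL", 4.00 },
            { "AZ", 5.60 },
            // ... generated rows ...
        };

        static bool KeyLess(const Entry& e, const char* key) {
            return std::strcmp(e.key, key) < 0;
        }

        // Binary search over the static table; returns NULL if the key is absent.
        const Entry* Lookup(const char* key) {
            const Entry* end = kTable + sizeof(kTable) / sizeof(kTable[0]);
            const Entry* it = std::lower_bound(kTable, end, key, KeyLess);
            return (it != end && std::strcmp(it->key, key) == 0) ? it : NULL;
        }

    Regenerating the file when the data changes (about every 3-5 years, per the post) then ships through the same source-sharing process that already works, which is the point of the approach.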

    Read the article

  • Problem with richfaces ajax datatable + buttons

    - by Schyzotrop
    Hello, I have another problem with RichFaces. This is my application, showing how I want it to work: http://www.screencast.com/users/Schyzotrop/folders/Jing/media/a299dc1e-7a10-440e-8c39-96b1ec6e85a4 and this is a video of a glitch that I can't solve: http://screencast.com/t/MDFiMGMzY The problem is that pressing any of the buttons in a category other than the 1st does nothing if the 1st category has fewer rows than the one I am calling from; from the 1st category it always works. I am using the following code in the JSP for the columns:

        <h:form id="categoryAttributeList">
          <rich:panel>
            <f:facet name="header">
              <h:outputText value="Category Attribute List" />
            </f:facet>
            <rich:dataTable id="table" value="#{categoryAttributeBean.allCategoryAttribute}" var="cat" width="100%" rows="10" columnClasses="col1,col2,col2,col3">
              <f:facet name="header">
                <rich:columnGroup>
                  <h:column>Name</h:column>
                  <h:column>Description</h:column>
                  <h:column>Category</h:column>
                  <h:column>Actions</h:column>
                </rich:columnGroup>
              </f:facet>
              <rich:column filterMethod="#{categoryAttributeFilteringBean.filterNames}">
                <f:facet name="header">
                  <h:inputText value="#{categoryAttributeFilteringBean.filterNameValue}" id="input">
                    <a4j:support event="onkeyup" reRender="table , ds" ignoreDupResponses="true" requestDelay="700" oncomplete="setCaretToEnd(event);" />
                  </h:inputText>
                </f:facet>
                <h:outputText value="#{cat.name}" />
              </rich:column>
              <rich:column filterMethod="#{categoryAttributeFilteringBean.filterDescriptions}">
                <f:facet name="header">
                  <h:inputText value="#{categoryAttributeFilteringBean.filterDescriptionValue}" id="input2">
                    <a4j:support event="onkeyup" reRender="table , ds" ignoreDupResponses="true" requestDelay="700" oncomplete="setCaretToEnd(event);" />
                  </h:inputText>
                </f:facet>
                <h:outputText value="#{cat.description}" />
              </rich:column>
              <rich:column filterMethod="#{categoryAttributeFilteringBean.filterCategories}">
                <f:facet name="header">
                  <h:selectOneMenu value="#{categoryAttributeFilteringBean.filterCategoryValue}">
                    <f:selectItems value="#{categoryAttributeFilteringBean.categories}" />
                    <a4j:support event="onchange" reRender="table, ds" />
                  </h:selectOneMenu>
                </f:facet>
                <h:outputText value="#{cat.categoryID.name}" />
              </rich:column>
              <h:column>
                <a4j:commandButton value="Edit" reRender="pnl" action="#{categoryAttributeBean.editCategoryAttributeSetup}">
                  <a4j:actionparam name="categoryAttributeID" value="#{cat.categoryAttributeID}" assignTo="#{categoryAttributeBean.id}" />
                  <a4j:actionparam name="state" value="edit" />
                  <a4j:actionparam name="editId" value="#{cat.categoryAttributeID}" />
                </a4j:commandButton>
                <a4j:commandButton reRender="categoryAttributeList" value="Delete" action="#{categoryAttributeBean.deleteCategoryAttribute}">
                  <a4j:actionparam name="categoryAttributeID" value="#{cat.categoryAttributeID}" assignTo="#{categoryAttributeBean.id}" />
                </a4j:commandButton>
              </h:column>
              <f:facet name="footer">
                <rich:datascroller id="ds" renderIfSinglePage="false"></rich:datascroller>
              </f:facet>
            </rich:dataTable>
            <rich:panel id="msg">
              <h:messages errorStyle="color:red" infoStyle="color:green"></h:messages>
            </rich:panel>
          </rich:panel>
        </h:form>

    and here is the code of my backing bean:

        @EJB
        private CategoryBeanLocal categoryBean;
        private CategoryAttribute categoryAttribute = new CategoryAttribute();
        private ArrayList<SelectItem> categories = new ArrayList<SelectItem>();
        private int id;
        private int categoryid;

        // Actions
        public void newCategoryAttribute() {
            categoryAttribute.setCategoryID(categoryBean.findCategoryByID(categoryid));
            categoryBean.addCategoryAttribute(categoryAttribute);
            FacesContext.getCurrentInstance().addMessage("newCategoryAttribute",
                new FacesMessage("CategoryAttribute " + categoryAttribute.getName() + " created."));
            this.categoryAttribute = new CategoryAttribute();
        }

        public void editCategoryAttributeSetup() {
            categoryAttribute = categoryBean.findCategoryAttributeByID(id);
        }

        public void editCategoryAttribute() {
            categoryAttribute.setCategoryID(categoryBean.findCategoryByID(categoryid));
            categoryBean.updateCategoryAttribute(categoryAttribute);
            FacesContext.getCurrentInstance().addMessage("newCategoryAttribute",
                new FacesMessage("CategoryAttribute " + categoryAttribute.getName() + " edited."));
            this.categoryAttribute = new CategoryAttribute();
        }

        public void deleteCategoryAttribute() {
            categoryAttribute = categoryBean.findCategoryAttributeByID(id);
            categoryBean.removeCategoryAttribute(categoryAttribute);
            FacesContext.getCurrentInstance().addMessage("categoryAttributeList",
                new FacesMessage("CategoryAttribute " + categoryAttribute.getName() + " deleted."));
            this.categoryAttribute = new CategoryAttribute();
        }

        // Getters
        public CategoryAttribute getCategoryAttribute() { return categoryAttribute; }

        public List<CategoryAttribute> getAllCategoryAttribute() {
            return categoryBean.findAllCategoryAttributes();
        }

        public ArrayList<SelectItem> getCategories() {
            categories.clear();
            List<Category> allCategory = categoryBean.findAllCategory();
            Iterator it = allCategory.iterator();
            while (it.hasNext()) {
                Category cat = (Category) it.next();
                SelectItem select = new SelectItem();
                select.setLabel(cat.getName());
                select.setValue(cat.getCategoryID());
                categories.add(select);
            }
            return categories;
        }

        public int getId() { return id; }
        public int getCategoryid() { return categoryid; }

        // Setters
        public void setCategoryAttribute(CategoryAttribute categoryAttribute) { this.categoryAttribute = categoryAttribute; }
        public void setId(int id) { this.id = id; }
        public void setCategoryid(int categoryid) { this.categoryid = categoryid; }

    and here is the filtering bean:

        @EJB
        private CategoryBeanLocal categoryBean;
        private String filterNameValue = "";
        private String filterDescriptionValue = "";
        private int filterCategoryValue = 0;
        private ArrayList<SelectItem> categories = new ArrayList<SelectItem>();

        public boolean filterNames(Object current) {
            CategoryAttribute currentName = (CategoryAttribute) current;
            if (filterNameValue.length() == 0) {
                return true;
            }
            if (currentName.getName().toLowerCase().contains(filterNameValue.toLowerCase())) {
                return true;
            } else {
                System.out.println("name");
                return false;
            }
        }

        public boolean filterDescriptions(Object current) {
            CategoryAttribute currentDescription = (CategoryAttribute) current;
            if (filterDescriptionValue.length() == 0) {
                return true;
            }
            if (currentDescription.getDescription().toLowerCase().contains(filterDescriptionValue.toLowerCase())) {
                return true;
            } else {
                System.out.println("desc");
                return false;
            }
        }

        public boolean filterCategories(Object current) {
            if (filterCategoryValue == 0) {
                getCategories();
                filterCategoryValue = new Integer(categories.get(0).getValue().toString());
            }
            CategoryAttribute currentCategory = (CategoryAttribute) current;
            if (currentCategory.getCategoryID().getCategoryID() == filterCategoryValue) {
                return true;
            } else {
                System.out.println(currentCategory.getCategoryID().getCategoryID() + "cate" + filterCategoryValue);
                return false;
            }
        }

        public ArrayList<SelectItem> getCategories() {
            categories.clear();
            List<Category> allCategory = categoryBean.findAllCategory();
            Iterator it = allCategory.iterator();
            while (it.hasNext()) {
                Category cat = (Category) it.next();
                SelectItem select = new SelectItem();
                select.setLabel(cat.getName());
                select.setValue(cat.getCategoryID());
                categories.add(select);
            }
            return categories;
        }

        public String getFilterDescriptionValue() { return filterDescriptionValue; }
        public String getFilterNameValue() { return filterNameValue; }
        public int getFilterCategoryValue() { return filterCategoryValue; }
        public void setFilterDescriptionValue(String filterDescriptionValue) { this.filterDescriptionValue = filterDescriptionValue; }
        public void setFilterNameValue(String filterNameValue) { this.filterNameValue = filterNameValue; }
        public void setFilterCategoryValue(int filterCategoryValue) { this.filterCategoryValue = filterCategoryValue; }

    Unfortunately I can't even imagine what could be causing this problem; that's why I made the videos, to help you understand it. Thanks for the help!

    Read the article

  • Dell PowerEdge R720xd stuck in BIOS

    - by G_P
    I have a Dell PowerEdge R720xd that gets stuck in the BIOS when booting. It successfully gets past the "configuring memory" and "configuring iDRAC" screens, but once it shows "CPLD version : 103" with the various management engine versions/patches, it just hangs. No error messages are displayed. This started happening when we tried adding additional RAM to the machine. Since then, we have tried re-seating the new memory, which resulted in the same issue. Then we took out all of the new memory, and the problem persists. We have also tried pressing F2 to get into System Setup, but it just indicates "Entering System Setup" and hangs at the same point. Has anybody seen this issue before, or does anyone have ideas on what to try next?
    UPDATE: After troubleshooting and trying to isolate the issue (stripping things down to a single CPU and a single DIMM, same problem; swapping to the other CPU and a different DIMM, same problem), Dell support will be coming out to swap the system board.

    Read the article

  • What does the BIOS setting XHCI Pre-Boot Mode do?

    - by Jamie Kitson
    I have a BIOS setting called XHCI Pre-Boot Mode. If I have it enabled, USB devices that aren't plugged in at boot are never recognised; if I set it to Disabled, USB devices work normally. The brief BIOS description says "Enable this option if you need USB3.0 support in DOS." Which I don't, but it also says "Please note that XHCI controller will be disabled if you set this item as Disabled." So does that mean that USB3 is disabled with this option? Here's a picture of the screen: http://www.flickr.com/photos/jamiekitson/8017563004/

    Read the article

  • The volume "filesystem root" has only 0 bytes disk space remaining?

    - by radek
    I installed 11.10 about two weeks ago and have run into some strange trouble recently. The installation was on a brand new laptop with a clear 128GB SSD. I opted for encrypting the home directory. Apart from that, I accepted the defaults during the installation. There is no other OS on my laptop. I had circa 40GB in use when (for the third time) I got to see this very unpleasant window: Twice the situation was pretty bad and the whole system slowed down considerably. After reboot I could not log in to the graphical interface (an error message informed me about insufficient space) and had to remove some files from the command line first. The third time I still managed to quickly delete some files, and that helped. My laptop is mainly a work environment: no torrents, no games, just two movies. The only media filling space are ~20GB of pictures and a bunch of pdfs. I have been working mostly with PostgreSQL & PostGIS, GeoServer, and QGIS recently. Although I have had plenty of opportunities to test and practice my backups, I would be extremely grateful if somebody could point me to any potential solutions to this problem. My laptop was bought just before I installed Ubuntu, and it came without an OS. Could this be a hardware issue? Or is the encrypted home causing my headaches? Thanks for help!

    Update: As suggested by @maniat1k, here is the current output of fdisk -l:

    WARNING: GPT (GUID Partition Table) detected on '/dev/sda'! The util fdisk doesn't support GPT. Use GNU Parted.

    Disk /dev/sda: 160.0 GB, 160041885696 bytes
    255 heads, 63 sectors/track, 19457 cylinders, total 312581808 sectors
    Units = sectors of 1 * 512 = 512 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x00000000

    Device Boot      Start        End     Blocks  Id  System
    /dev/sda1            1  312581807  156290903+  ee  GPT

    Read the article

  • DIY a simple .inf on an existing .sys?

    - by Stijn Sanders
    In continuation of attempting to get an old Digi ST-1032 working on a new server, we've downgraded a server to Windows 2000 in an attempt to use the NT4 drivers. And it works: the old software setup works, finds the device on the SCSI bus, and connects the 32 ports to COM3..COM34, or any other set, using the port remapper tool. The minor issue that remains is that Plug-and-Play still detects this device over SCSI and tries to wizard you into selecting a driver for it. Which doesn't exist (Digi Intl support confirms this device is so old that a 2000 or XP driver was never made). The exact name displayed is "DigiIntl ST-1032 SCSI Net Device". (Oddly enough, two devices get detected with this name, on two neighbouring LUNs; could one be the built-in terminator?) Is there a way to concoct a simple .inf that would (re-)register the existing sts.sys that appears to get registered by the (old) installer of the driver software?

    Read the article

  • What’s New from the Oracle Marketing Cloud at Oracle OpenWorld 2014?

    - by Richard Lefebvre
    Marketing—CX Central is your hub for all things Marketing related at OpenWorld in San Francisco, September 28-October 2, 2014. Learn how to personalize the modern marketing journey to improve customer loyalty. We’re hosting more than 60 breakout sessions, half of which will highlight customer success stories from marquee brands including Bizo, Comcast, Dell, Epson, John Deere, Lane Bryant, ReadyTalk and Shutterfly.

    Moscone West, Levels 2 and 3
    To learn more about how modern marketing works, visit Moscone West, levels 2 and 3, for exciting demos of each of the Oracle Marketing Cloud solutions (BlueKai, Compendium, Eloqua, Push I/O, and Responsys). You can also check out our stations for Vertical Marketing Best Practices, the Markie Awards, and more!

    CX Spotlight Sessions
    - “Accelerating Big Profits in Big Data,” Jeff Tanner, Baylor University
    - “Using Content Marketing to Impact Every Stage of the Buyer’s Journey,” Jennifer Agustin, Bizo
    - “Expanding Your Marketing with Proven Testing and Optimization,” Brian Border, Shutterfly, and Matthew Balthazor, Epson
    - “Modern Marketing: The New Digital Dialogue,” Cory Treffiletti, Oracle

    A Special Marquee Session
    Dell’s Hayden Mugford will speak on “The Digital Ecosystem: Driving Experience Through Contact Engagement.” She will highlight how the organization built a digital ecosystem that supports a behaviorally driven, multivehicle nurturing campaign. The Dell 1:1 Global Marketing team worked with multiple partners to innovate integrations with Oracle Eloqua, Oracle Real-Time Decisions for real-time decision logic, and a content management system (CMS) that enables 100 percent customized e-mails. The program doubled average order values for nurtured contacts versus non-nurtured, and tripled open and click-through rates versus push e-mail.

    Other Oracle Marketing Cloud Session Highlights
    - Thought leadership by role
    - Exploring the benefits of moving to the Cloud
    - Product line roadmaps and innovations in Marketing
    - Technical deep dives for product lines within Marketing
    - Best practices and impactful business measurements
    - Solutions that are integrated across CX

    Target Audience
    Session content is geared toward professionals in Marketing, Marketing Operations, Marketing Demand Generation, and Social: Chief Marketing Officers, Vice Presidents, Directors, and Managers.

    Outcomes
    Customers attending Marketing—CX Central @ OpenWorld will be able to:
    - Gain insight into delivering consistent cross-channel marketing
    - Discover how to provide the right information to the right customer at the right time and with the right channel
    - Get answers to burning questions and advice on business challenges
    - Hear from other Oracle customers about recommended best practices to help their organization move forward
    - Network and share ideas to help create a strategy for connecting with customers in better ways

    It Wouldn’t Be an Oracle Marketing Cloud Event Without a Party!
    We’re hosting CX Central Fest: a unique customer experience specifically designed for attendees of CX Central. It will include a chance to rock out at a private concert featuring Los Angeles indie electronic pop group Capital Cities! Join us Tuesday, September 30, from 7-9 p.m.

    OpenWorld is a fabulous way for your customers to see all that Oracle Marketing Cloud has to offer. Pass on an invitation today.
    By Laura Vogel (Oracle)

    Read the article

  • Remotely Schedule and Stream Recorded TV in Windows 7 Media Center

    - by DigitalGeekery
    Have you ever been away from home and suddenly realized you forgot to record your favorite program? Now Windows 7 Media Center users can schedule recordings remotely from their phones or mobile devices with Remote Potato.

    How it Works
    Remote Potato installs server software on the host computer running Windows 7 Media Center. Once the software is installed, we’ll need to do some port forwarding on the router and set up an optional dynamic DNS address. When setup is completed, we will access the application through a web-based interface. Silverlight is required for streaming recorded TV, but scheduling recordings can be done through an HTML interface.

    Installing Remote Potato
    Download and install Remote Potato on the Media Center PC. (See download link below.) If you plan to stream any recorded TV, you’ll also want to install the streaming pack located on the same page. It isn’t required to stream all shows, only shows that require the AC3 audio codec. Click Yes to allow Remote Potato to add rules to the Windows Firewall for remote access. You’ll likely need to accept a few UAC prompts. When notified that the rules were added, click OK. Remote Potato will then prompt you to allow administrator privileges to reserve a URL for its web server. Click Yes. The Remote Potato server will start. Click on the configuration button at the right to reveal the settings tabs. On the General tab, you’ll have the option to run Remote Potato on startup and minimized in the System Tray. If you’re running Media Center on a dedicated HTPC, you’ll probably want to enable both startup options.

    Forwarding Ports on Your Router
    You’ll need to forward a couple of ports on your router. By default, these will be ports 9080 and 9081. In this example we’re using a Linksys WRT54GL router; however, the steps for port forwarding will vary from router to router. On the Linksys configuration page, click on the Applications & Gaming tab, and then the Port Range Forward tab. Under Application, type in a name of your choosing. In both the Start and End boxes, type the port number 9080. Enter the local IP address of your Media Center computer in the IP address column. Click the check box under Enable. Repeat the process on the next line, but this time use port 9081. When finished, click the Save Settings button. Note: It’s highly recommended that you configure the home computer running Media Center & Remote Potato with a static IP address.

    Find your IP Address
    You’ll need to find the IP address assigned to your router by your ISP. There are many ways to do this, but a quick and easy way is to visit a site like checkip.dyndns.org (link available below). The current external IP address of your router will be displayed in the browser.

    Dynamic DNS
    This is an optional step, but it’s highly recommended. Many routers, such as the Linksys WRT54GL we are using, support Dynamic DNS (DDNS). What Dynamic DNS allows you to do is associate your home router’s external IP address with a domain name. Every time your home router is assigned a new IP address by your ISP, the domain name is updated to point to your new IP address. Remote Potato’s user interface is accessed over the Internet by connecting to your router’s IP address followed by a colon and the port number. (Ex: XXX.XXX.XXX.XXX:9080) Instead of constantly having to look up and remember an IP address, you can use DDNS along with a 3rd-party provider like DynDNS.com to sign up for a free domain name and configure it to be updated each time your router is assigned a new IP address. Go to the DynDNS.com website (see link at the end of the article) and sign up for a free domain name. You’ll need to register and confirm by email. Once you’ve signed in and selected your domain name, click Activate Services. You’ll get a confirmation message that your domain name has been activated. On the Linksys WRT54GL, click on the Setup tab and then DDNS. Select DynDNS.org, or TZO.com if you prefer to use their service, from the drop-down list. With DynDNS, you’ll need to fill in the username and password you signed up with at the DynDNS website, and the hostname you chose. Note: You can connect over your local network with the IP address of the computer running Remote Potato followed by a colon and the port number. Ex: 192.168.1.2:9080

    Logging in to Remote Potato and Recording a Show
    Once you connect, you’ll see the start page. To view the TV listings, click on TV Guide. You’ll then see your guide listings. There are a few ways to navigate the listings. At the top left, you can click on any of the preset time buttons to jump to the listings at that time of day. Click on the arrows to the right and left of the day and date at the top center to proceed to the previous or next day. Or, jump to a specific day with the day and date buttons at the top right. To set up a recording, click on a program. You can choose to record the individual show or the entire series by clicking on Record Show or Record Series.

    Remote Potato on Mobile Devices
    Perhaps the coolest feature of Remote Potato is the ability to schedule recordings from your phone or mobile device. Note: For any devices or computers without Silverlight, you will be prompted to view the HTML page. Select Browse Listings. Select your program to record. In the Program Details, select Record Show to record the single episode or Record Series to record all instances of the series. You will then see a red dot on the program listing to indicate that the show is scheduled for recording.

    Streaming Recorded TV
    Click on Recorded TV from the home screen to access your previously recorded TV programs. Click on the selection you wish to stream. Click on Play. If you receive an error message here, you’ll need to install the streaming pack for Remote Potato. This is found on the same download page as the installation files. (See link below.) The "Begin from" slider allows you to start playback from the start (by default) or from a different point in the program by moving the slider. The "Quality (bitrate)" setting allows you to choose the quality of the playback. We found the video quality on the Normal setting to be pretty lousy, and Low was just pointless. High was the best overall viewing experience, as it provided smooth, quality video playback. We experienced significant stuttering during playback using the Ultra High setting. Click Start when you are ready to begin. When playback begins you’ll see a slider at the top right. Move the slider left or right to increase or decrease the size of the video. There’s also a button to switch to full screen.

    Media Center users who travel frequently or are always on the go will likely find Remote Potato to be a blessing. Since being released earlier this year, updates for Remote Potato have come fast and furious. The latest beta release includes support for streaming music and photos. If you like those nice network TV logos, check out our article on adding TV channel logos to Windows Media Center.

    Downloads and Links
    Download Remote Potato and Streaming Pack
    Find your IP address
    Sign Up for a Domain Name at DynDNS.com

    Read the article

< Previous Page | 483 484 485 486 487 488 489 490 491 492 493 494  | Next Page >