Search Results

Search found 15376 results on 616 pages for 'once'.


  • Windows Azure Recipe: Mobile Computing

    - by Clint Edmonson
    A while back, mashups were all the rage. The idea was to compose solutions that provided aggregation and integration across applications and services to make information more available, useful, and personal. Mashups ushered in the era of Web 2.0 in all its socially connected goodness. They taught us that to be successful, we needed to add web service APIs to our web applications. Web and client based mashups met with great success and have evolved even further with the introduction of the internet connected smartphone. Nothing is more available, useful, or personal than our smartphones. The current generation of cloud connected mobile computing mashups allows our mobilized workforces to receive, process, and react to information from disparate sources faster than ever before.
    Drivers: Integration, Reach, Time to market.
    Solution: Here’s a sketch of a prototypical mobile computing solution using Windows Azure.
    Ingredients:
    Web Role – with the phone running a dedicated client application, the web role is responsible for serving up backend web services that implement the solution’s core connected functionality.
    Database – used to store core operational and workflow data for the solution’s web services.
    Access Control – this service is used to authenticate and manage users’ identity, roles, and groups, possibly in conjunction with 3rd party identity providers such as Windows LiveID, Google, Yahoo!, and Facebook.
    Worker Role – this role is used to handle the orchestration of long-running, complex, asynchronous operations. While much of the integration and interaction with other services can be handled directly by the mobile client application, it’s possible that the backend may need to integrate with 3rd party services as well. Offloading this work to a worker role better distributes computing resources and keeps the web roles focused on direct client interaction.
    Queues – these provide reliable, persistent messaging between applications and processes. They are an absolute necessity once asynchronous processing is involved. Queues facilitate the flow of distributed events and allow a solution to send push notifications back to mobile devices at appropriate times.
    Training & Resources: These links point to online Windows Azure training labs and resources where you can learn more about the individual ingredients described above. (Note: the entire Windows Azure Training Kit can also be downloaded for offline use.)
    Windows Azure (16 labs) – Windows Azure is an internet-scale cloud computing and services platform hosted in Microsoft data centers. It provides an operating system and a set of developer services that can be used individually or together. It gives developers the choice to build web applications; applications running on connected devices, PCs, or servers; or hybrid solutions offering the best of both worlds. New or enhanced applications can be built using existing skills with the Visual Studio development environment and the .NET Framework. With its standards-based and interoperable approach, the services platform supports multiple internet protocols, including HTTP, REST, SOAP, and plain XML.
    SQL Azure (7 labs) – Microsoft SQL Azure delivers on the Microsoft Data Platform vision of extending SQL Server capabilities to the cloud as web-based services, enabling you to store structured, semi-structured, and unstructured data.
    Windows Azure Services (9 labs) – As applications collaborate across organizational boundaries, ensuring secure transactions across disparate security domains is crucial but difficult to implement. Windows Azure Services provides hosted authentication and access control using powerful, secure, standards-based infrastructure.
    Windows Azure Toolkit for Windows Phone – The Windows Azure Toolkit for Windows Phone is designed to make it easier for you to build mobile applications that leverage cloud services running in Windows Azure. The toolkit includes Visual Studio project templates for Windows Phone and Windows Azure, class libraries optimized for use on the phone, sample applications, and documentation.
    Windows Azure Toolkit for iOS – The Windows Azure Toolkit for iOS makes it easy for developers to access Windows Azure storage services from native iOS applications. The toolkit can be used for both iPhone and iPad applications developed using Objective-C and Xcode.
    Windows Azure Toolkit for Android – The Windows Azure Toolkit for Android makes it easy for developers to work with Windows Azure from native Android applications. The toolkit can be used for native Android applications developed using Eclipse and the Android SDK.
    See my Windows Azure Resource Guide for more guidance on how to get started, including links to web portals, training kits, samples, and blogs related to Windows Azure.
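    To make the queue ingredient above a little more concrete, here is a minimal, hypothetical C# sketch of a web role handing work off to a worker role through a Windows Azure storage queue. It assumes the Windows Azure storage client library; the queue name, connection string, and message payload are illustrative only and are not part of the recipe itself.

    ```csharp
    using System;
    using Microsoft.WindowsAzure.Storage;        // assumed: Windows Azure storage client library
    using Microsoft.WindowsAzure.Storage.Queue;

    class QueueSketch
    {
        // Illustrative connection string; a real role would read this from configuration.
        const string ConnectionString = "UseDevelopmentStorage=true";

        // Web role side: enqueue a work item for asynchronous processing.
        static void EnqueueWorkItem(string payload)
        {
            var account = CloudStorageAccount.Parse(ConnectionString);
            var queue = account.CreateCloudQueueClient().GetQueueReference("workitems"); // hypothetical queue name
            queue.CreateIfNotExists();
            queue.AddMessage(new CloudQueueMessage(payload));
        }

        // Worker role side: poll the queue and process messages.
        static void ProcessNextWorkItem()
        {
            var account = CloudStorageAccount.Parse(ConnectionString);
            var queue = account.CreateCloudQueueClient().GetQueueReference("workitems");
            CloudQueueMessage message = queue.GetMessage();
            if (message != null)
            {
                Console.WriteLine("Processing: " + message.AsString);
                // ... long-running integration work, push notification, etc. ...
                queue.DeleteMessage(message);                // delete only after the work succeeds
            }
        }
    }
    ```

    Deleting the message only after the work succeeds is what gives the queue its reliability: if the worker crashes mid-task, the message becomes visible again and is processed on the next pass.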

    Read the article

  • Database Mirroring on SQL Server Express Edition

    - by Most Valuable Yak (Rob Volk)
    Like most SQL Server users I'm rather frustrated by Microsoft's insistence on making the really cool features only available in Enterprise Edition.  And it really doesn't help that they changed the licensing for SQL 2012 to be core-based, so now it's like 4 times as expensive!  It almost makes you want to go with Oracle.  That, and a desire to have Larry Ellison do things to your orifices.
    And since they've introduced Availability Groups, and marked database mirroring as deprecated, you'd think they'd make mirroring available in all editions.  Alas…they don't…officially anyway.  Thanks to my constant poking around in places I'm not "supposed" to, I've discovered the low-level code that implements database mirroring, and found that it's available in all editions!
    It turns out that the query processor in all SQL Server editions prepends a simple check before every edition-specific DDL statement:
    IF CAST(SERVERPROPERTY('Edition') as nvarchar(max)) NOT LIKE '%e%e%e% Edition%' print 'Lame' else print 'Cool'
    If that condition is true, the statement fails. (The print statements are just placeholders.)  Go ahead and test it on Standard, Workgroup, and Express editions compared to an Enterprise or Developer edition instance (which support everything).
    Once again thanks to Argenis Fernandez (b | t) and his awesome sessions on using Sysinternals, I was able to watch the exact process SQL Server performs when setting up a mirror.  Surprisingly, it's not actually implemented in SQL Server!  Some of it is, but that's something of a smokescreen; the real meat of it is simple filesystem primitives.
    The NTFS filesystem supports links, both hard links and symbolic, so that you can create two entries for the same file in different directories and/or under different names.  You can create them using the MKLINK command in a command prompt:
    mklink /D D:\SkyDrive\Data D:\Data
    mklink /D D:\SkyDrive\Log D:\Log
    This creates a symbolic link from my data and log folders to my SkyDrive folder.  Any file saved in either location will instantly appear in the other.  And since my SkyDrive will be automatically synchronized with the cloud, any changes I make will be copied instantly (depending on my internet bandwidth of course).
    So what does this have to do with database mirroring?  Well, it seems that the mirroring endpoint that you have to create between mirror and principal servers is really nothing more than a SkyDrive link.  Although it doesn't actually use SkyDrive, it performs the same function.  So in effect, the following statement:
    ALTER DATABASE Mir SET PARTNER='TCP://MyOtherServer.domain.com:5022'
    is turned into:
    mklink /D "D:\Data" "\\MyOtherServer.domain.com\5022$"
    The 5022$ "port" is actually a hidden system directory on the principal and mirror servers. I haven't quite figured out how the log files are included in this, or why you have to SET PARTNER on both principal and mirror servers, except maybe that mklink has to do something special when linking across servers.  I couldn't get the above statement to work correctly, but found that doing mklink to a local SkyDrive folder gave me similar functionality.
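    Incidentally, if you want to run that same edition check from client code rather than T-SQL, a minimal C# sketch looks something like this (the connection string is a placeholder; the query is just the SERVERPROPERTY call shown above):

    ```csharp
    using System;
    using System.Data.SqlClient;

    class EditionCheck
    {
        static void Main()
        {
            // Placeholder connection string - point it at the instance you want to inspect.
            using (var connection = new SqlConnection("Server=.;Integrated Security=true"))
            using (var command = new SqlCommand(
                "SELECT CAST(SERVERPROPERTY('Edition') AS nvarchar(128))", connection))
            {
                connection.Open();
                Console.WriteLine("Edition: " + command.ExecuteScalar());
            }
        }
    }
    ```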
    To wrap this up, all you have to do is the following:
    1. Install SkyDrive on both SQL Servers (principal and mirror) and set the local SkyDrive folder (D:\SkyDrive in these examples).
    2. On the principal server, run mklink /D on the data and log folders to point to SkyDrive: mklink /D D:\SkyDrive\Data D:\Data
    3. On the mirror server, run the complementary linking: mklink /D D:\Data D:\SkyDrive\Data
    4. Create your database and make sure the files map to the principal data and log folders (D:\Data and D:\Log).
    Voila! Your databases are kept in sync on multiple servers!
    One wrinkle you will encounter is that the mirror server will show the data and log files, but you won't be able to attach them to the mirror SQL instance while they are attached to the principal. I think this is a bug in SkyDrive, but as it turns out that's fine: you can't access a mirror while it's hosted on the principal either.  So you don't quite get automatic failover, but you can attach the files to the mirror if the principal goes offline.  It's also not exactly synchronous, but it's better than nothing, and easier than either replication or log shipping, with a lot less latency.
    I will end this with the obvious "not supported by Microsoft" and "Don't do this in production without an updated resume" spiel that you should by now assume with every one of my blog posts, especially considering the date.

    Read the article

  • Tech Ed/BI Conference 2010: A Recovering Industry in a Recovering City

    - by andrewbrust
    I tried writing a post for this blog last night, while at the this year’s Microsoft Tech Ed and Business Intelligence conferences, in New Orleans. But I literally fell asleep while writing it.  That’s probably a sign that my readers might have done the same while reading it. Why the writer’s block? This was a very good show for me, but I think I was having trouble figuring out exactly why.  Now that I’m on the flight home, I’m starting to piece it together. One reason, for sure, was that I’ve spent years in both the developer and the BI worlds, and a show that combined the two was really enjoyable for me.  Typically, the subject matter, the attendees, the Microsoft execs and managers, and even the social circles have been separate.  This year’s Tech Ed facilitated a fusion of each of these previously segregated groups.  That was good for me as a speaker; for example, I facilitated a Birds of a Feather session on PowerPivot (Microsoft’s new self-service BI offering) which was well-attended, and by a large number of non-BI pros.  The fusion was good for me as an attendee too, as Microsoft BI, in the form of a new Pivot Viewer control, made it into the Day 1 keynote, demoed by Microsoft’s key BI champion, Amir Netz.  And it was good for me socially, as I was able to meet with peers in both camps, and at one location. Speaking of meeting with industry colleagues, I did a lot of that at this show.  Probably for the first time ever, I carefully scheduled and conducted a series of meetings with friends and business acquaintances in the developer tools, data visualization, utilities, publishing and training areas of the Microsoft ecosystem.  Beside the time efficiencies in conducting so many meetings, I discovered another benefit. I got a real handle on the tech industry’s economic health. The news here is good.  First of all, 2010 has been a great year for just about everyone I spoke to.  The mood is positive, energy is high, and people are working really hard.  This is, of course, refreshing to see, and it’s a huge relief.  Add to that the fact that this year’s Tech Ed was about 2.5 times larger in headcount than last year’s (based on numbers from unofficial, but reliable, sources), and the economic prognosis seems excellent.  But there’s more to it than that. Here’s the thing: everyone I talked to seems to be working, and succeeding, at changing their business models to adapt to changes in the industry.  Whether it’s the Internet’s impact on publishing and training, the increased importance of the developer audience in South Asia, the shift of affordable developer and business talent to unfamiliar locales abroad, or even lapses in Microsoft’s performance in the market, partner companies aren’t just rolling with the punches; they’re welcoming the changes and working them to their advantage.  No one seemed downtrodden, or even fatigued.  Even for businesses who have seen core revenue streams become commoditized, everyone seems to be changing their market strategy and winning.  Even Microsoft, of whom I have been critical recently, showed signs of successful hard work and playbook change, in the maturing of their cloud strategy, their commitment to it and their excitement around it.  And the embedded, managed, self-service BI strategy that Microsoft has been touting looks like it’s already being embraced by customers, even though PowerPivot, and other new Microsoft BI products, were released only recently. 
The collective optimism I have witnessed, and that I have felt, tells me good things about this industry and the economy.  The stock market had huge mood swings during my stay, and that may yet subdue the industry recovery I have seen this week.  Nonetheless, I am convinced that a strong foundation of hard work, innovative thinking and, if I may,  true renaissance is underlying this industry’s success. That kind of strength will generate a strong recovery, I am certain, whether now or once we’re past another round of choppy weather in the broader economy.  The fundamentals are good.

    Read the article

  • Why I’m Getting an iPad

    - by andrewbrust
    I have never purchased an Apple product in my life.  That’s a “true fact.”  And, for that matter, the last Apple product I really wanted was an Apple IIe, back in the 1980s.  I couldn’t afford it though (I was in high school), so I got a Commodore 64 instead…it had the same microprocessor, after all.  If the iPhone were on Verizon, I probably would have picked one up in December, when I got my Droid.  And if the iPod Touch worked with my Napster subscription (which of course it does not, but my Sonos does) I might have picked one of those instead.
    That’s three strikes, but Apple’s not out.  I’ve decided I want the iPad.  Why?  Well, to start with, my birthday is March 31st…the iPad comes out on April 3rd, and my wife wanted to know what to get me.  Also, my house is a 7-minute walk from the Apple Store on West 14th Street in Manhattan.  This makes it easy to get my pre-ordered device on launch day, and get home quickly with it.  Oh, and I agreed to write an article for Redmond Magazine, the fee for which will pay for the device…that way the birthday present doesn’t have to be an extravagant expense.  Plus, I’m a contrarian, so I want to buy the one device from Apple that the fanboys have actually panned.
    Think those are bad reasons? How about this: I want to experience iPhone and iPad development and, although my app will probably never hit the App Store and run on the actual device, I still think owning one will help me develop something better.  I want to see if the slate form factor has good business usage scenarios.  I want to see if Business Intelligence technology on a device like this can work.  Imagine a dashboard on this thing.
    And, for the consumer experience, I really want a touch device on which I can surf the Web while I’m in the kitchen, or on the couch.  I don’t want the small form factor of my phone, I don’t want to use my TV, and I don’t want a keyboard that will get dirty or in my way. I don’t want to watch movies on it (my TV is good for that), so I don’t care that the iPad has a 4:3 screen.  I don’t want to read books on it, so I don’t care that the display is backlit LCD, rather than eInk.
    But really what I want is to understand, first hand, why people have such brand loyalty to Apple.  I know the big reasons; I’m not detached from society.  But I want to know the subtle points of what Apple does really well, and also what they do poorly.  And I’d like to know, once and for all, if Microsoft can beat Apple, if Microsoft can think the right way to beat Apple and if Microsoft should even try to beat Apple. I expect to share my thoughts on these questions, as they develop.

    Read the article

  • Why is my RAID /dev/md1 showing up as /dev/md126? Is mdadm.conf being ignored?

    - by mmorris
    I created a RAID with:
    sudo mdadm --create --verbose /dev/md1 --level=mirror --raid-devices=2 /dev/sdb1 /dev/sdc1
    sudo mdadm --create --verbose /dev/md2 --level=mirror --raid-devices=2 /dev/sdb2 /dev/sdc2
    sudo mdadm --detail --scan returns:
    ARRAY /dev/md1 metadata=1.2 name=ion:1 UUID=aa1f85b0:a2391657:cfd38029:772c560e
    ARRAY /dev/md2 metadata=1.2 name=ion:2 UUID=528e5385:e61eaa4c:1db2dba7:44b556fb
    which I appended to /etc/mdadm/mdadm.conf, see below:
    # mdadm.conf
    #
    # Please refer to mdadm.conf(5) for information about this file.
    #
    # by default (built-in), scan all partitions (/proc/partitions) and all
    # containers for MD superblocks. alternatively, specify devices to scan, using
    # wildcards if desired.
    #DEVICE partitions containers
    # auto-create devices with Debian standard permissions
    CREATE owner=root group=disk mode=0660 auto=yes
    # automatically tag new arrays as belonging to the local system
    HOMEHOST <system>
    # instruct the monitoring daemon where to send mail alerts
    MAILADDR root
    # definitions of existing MD arrays
    # This file was auto-generated on Mon, 29 Oct 2012 16:06:12 -0500
    # by mkconf $Id$
    ARRAY /dev/md1 metadata=1.2 name=ion:1 UUID=aa1f85b0:a2391657:cfd38029:772c560e
    ARRAY /dev/md2 metadata=1.2 name=ion:2 UUID=528e5385:e61eaa4c:1db2dba7:44b556fb
    cat /proc/mdstat returns:
    Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
    md2 : active raid1 sdb2[0] sdc2[1]
          208629632 blocks super 1.2 [2/2] [UU]
    md1 : active raid1 sdb1[0] sdc1[1]
          767868736 blocks super 1.2 [2/2] [UU]
    unused devices: <none>
    ls -la /dev | grep md returns:
    brw-rw---- 1 root disk 9, 1 Oct 30 11:06 md1
    brw-rw---- 1 root disk 9, 2 Oct 30 11:06 md2
    So I think all is good and I reboot. After the reboot, /dev/md1 is now /dev/md126 and /dev/md2 is now /dev/md127?????
    sudo mdadm --detail --scan returns:
    ARRAY /dev/md/ion:1 metadata=1.2 name=ion:1 UUID=aa1f85b0:a2391657:cfd38029:772c560e
    ARRAY /dev/md/ion:2 metadata=1.2 name=ion:2 UUID=528e5385:e61eaa4c:1db2dba7:44b556fb
    cat /proc/mdstat returns:
    Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
    md126 : active raid1 sdc2[1] sdb2[0]
          208629632 blocks super 1.2 [2/2] [UU]
    md127 : active (auto-read-only) raid1 sdb1[0] sdc1[1]
          767868736 blocks super 1.2 [2/2] [UU]
    unused devices: <none>
    ls -la /dev | grep md returns:
    drwxr-xr-x 2 root root 80 Oct 30 11:18 md
    brw-rw---- 1 root disk 9, 126 Oct 30 11:18 md126
    brw-rw---- 1 root disk 9, 127 Oct 30 11:18 md127
    All is not lost, I:
    sudo mdadm --stop /dev/md126
    sudo mdadm --stop /dev/md127
    sudo mdadm --assemble --verbose /dev/md1 /dev/sdb1 /dev/sdc1
    sudo mdadm --assemble --verbose /dev/md2 /dev/sdb2 /dev/sdc2
    and verify everything:
    sudo mdadm --detail --scan returns:
    ARRAY /dev/md1 metadata=1.2 name=ion:1 UUID=aa1f85b0:a2391657:cfd38029:772c560e
    ARRAY /dev/md2 metadata=1.2 name=ion:2 UUID=528e5385:e61eaa4c:1db2dba7:44b556fb
    cat /proc/mdstat returns:
    Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
    md2 : active raid1 sdb2[0] sdc2[1]
          208629632 blocks super 1.2 [2/2] [UU]
    md1 : active raid1 sdb1[0] sdc1[1]
          767868736 blocks super 1.2 [2/2] [UU]
    unused devices: <none>
    ls -la /dev | grep md returns:
    brw-rw---- 1 root disk 9, 1 Oct 30 11:26 md1
    brw-rw---- 1 root disk 9, 2 Oct 30 11:26 md2
    So once again, I think all is good and I reboot. Again, after the reboot, /dev/md1 is /dev/md126 and /dev/md2 is /dev/md127?????
    sudo mdadm --detail --scan returns:
    ARRAY /dev/md/ion:1 metadata=1.2 name=ion:1 UUID=aa1f85b0:a2391657:cfd38029:772c560e
    ARRAY /dev/md/ion:2 metadata=1.2 name=ion:2 UUID=528e5385:e61eaa4c:1db2dba7:44b556fb
    cat /proc/mdstat returns:
    Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
    md126 : active raid1 sdc2[1] sdb2[0]
          208629632 blocks super 1.2 [2/2] [UU]
    md127 : active (auto-read-only) raid1 sdb1[0] sdc1[1]
          767868736 blocks super 1.2 [2/2] [UU]
    unused devices: <none>
    ls -la /dev | grep md returns:
    drwxr-xr-x 2 root root 80 Oct 30 11:42 md
    brw-rw---- 1 root disk 9, 126 Oct 30 11:42 md126
    brw-rw---- 1 root disk 9, 127 Oct 30 11:42 md127
    What am I missing here?

    Read the article

  • Ubuntu 11.04 and 10.04 hang with black screen while installing from USB disk

    - by Bill
    I've been trying to install Ubuntu 11.04 from a USB flash stick and each time I try to boot from the USB key one of two things happen: A) The screen that asks you what you would like to do (e.g. run Ubuntu from the USB key or install it) shows up and the countdown to the default option starts to count down but as soon as I either touch the keyboard (sometimes I press enter or the arrow keys to select an option) or the countdown gets to zero the screen just locks up and nothing happens no matter how long I wait. B) When I boot from the USB key the screen will flicker for a second and then go black with a flashing white underscore at the top left corner of the screen. Again it doesn't matter how long I wait, nothing happens and pressing keys doesn't do a thing. The very first time I tried to install it I got a terminal-like screen that said something about a directory called 'casper' having an error of some sort. I have tried installing from USB using both 11.04 and 10.10. I'm about to try 10.04. I have read tons of forum posts about this but so far I haven't seen anything in the solutions that apply to me. My intention is to dual boot Windows 7 and Ubuntu. I must keep Windows as I am required to use Visual Studio for one of my college courses. Right now I'm using Wubi but I really want a full install. I can't use LVPM because it doesn't work with the version of Wubi I used. So now I'm thinking my best bet is to try to get a clean install working. I'd also convert Wubi to a full install too but there's no solution as far as I've read. So could someone tell me a reason why this is happening or if there's something I can do to get around the problem? I'm using a Gateway LT2802u netbook with and Intel Atom N455 processor, 1GB RAM, Intel Graphics Media Accelerator 3150 graphics card, and a 250GB HDD. I don't have anything on my current Wubi install that I can't replace so keep in mind when answering that I don't care if I lose my current settings and files from Wubi. Thanks everyone! UPDATE I just answered my own question so in case anyone else is having this same problem using similar hardware, do the following: When I first tried installing 11.04 I used the recommended universal installer tool to create the USB live/installation disk. That caused the original problem. Note that I had already downloaded the 11.04 ISO and did not use the included downloader from the USB creator. After that failed I used the same USB creator but had it download 10.10 for me. It also failed with the same issue. I repeated this process with unetbootin as well for both versions. Finally, I downloaded the Ubuntu 10.04 ISO and used the recommended USB creator once again. There was an error while creating the USB live install so I reformatted the USB key as FAT32 and tried again. It created the USB key. I then booted from the USB flash drive and selected "Install Ubuntu" (exact wording was different). It worked! It took me through the process that you see shown in pictures on the Ubuntu website. I let it create the appropriate partitions for me and it simply worked. I did get a few errors while the system tried to restart after it installed. It hung on a terminal-like screen but I pressed ENTER and it restarted. I booted into Windows 7, it checked the disks as it sensed that I messed with a partition, then it booted into Windows normally. Now I'm going to uninstall Wubi and update my new full install of Ubuntu! I'm excited to get the benefits of a full install now. 
So in the end, hopefully someone can learn from what I did.

    Read the article

  • Oracle and Partners release CAMP specification for PaaS Management

    - by macoracle
    Cloud Application Management for Platforms The public release of the Cloud Application Management for Platforms (CAMP) specification, an initial draft of what is expected to become an industry standard self service interface specification for Platform as a Service (PaaS) management, represents a significant milestone in cloud standards development. Created by several players in the emerging cloud industry, including Oracle, the specification is being submitted to the OASIS standards organization (draft charter) where it will be finalized in an open development process. CAMP is targeted at application developers and deployers for self service management of their application on a Platform-as-a-Service cloud. It is closely aligned with the application development process where applications are typically developed in an Application Development Environment (ADE) and then deployed into a private or public platform cloud. CAMP standardizes the model behind an application’s dependencies on platform components and provides a standardized format for moving applications between the ADE and the cloud, and if and when desirable, between clouds. Once an application is deployed, CAMP provides users with a standardized self service interface to the PaaS offering, allowing the cloud consumer to manage the lifecycle of the application on that platform and the use of the underlying platform services. The CAMP interface includes a RESTful binding of the CAMP model onto the standard HTTP protocol, using JSON as the encoding for the model resources. The model for CAMP includes resources that represent the Application, its Components and any Platform Components that they depend on. It's important PaaS Cloud consumers understand that for a PaaS cloud, these are the abstractions that the user would prefer to work with, not Virtual Machines and the various resources such as compute power, storage and networking. PaaS cloud consumers would also not like to become system administrators for the infrastructure that is hosting their applications and component services. CAMP works on this more abstract level, and yet still accommodates platforms that are built using an underlying infrastructure cloud. With CAMP, it is up to the cloud provider whether or not this underlying infrastructure is exposed to the consumer. One major challenge addressed by the CAMP specification is that of ensuring that application deployment on a new platform is as seamless and error free as possible. This becomes even more difficult when the application may have been developed for a different platform and is now moving to a new one. In CAMP this is accomplished by matching the requirements of the application and its components to the specific capabilities of the underlying platform. This needs to be done regardless of whether there are existing pools of virtualized platform resources (such as a database pool) which are provisioned(on the basis of a schema for example), or whether the platform component is really just a set of virtual machines drawn from an infrastructure pool. The interoperability between platform clouds that CAMP offers means that a CAMP client such as an ADE can target multiple clouds with a single common interface. Applications can even be spread across multiple platform clouds and then managed without needing to create a specialized adapter to manage the components running in each cloud. The development of CAMP has been an effort by a small set of companies, but there are significant advantages to this approach. 
    For example, the way that each of these companies creates their platforms is different enough to ensure that CAMP can cover a wide range of actual deployments. CAMP is now entering the next phase of development under the guidance of an open standards organization, OASIS, which will likely broaden its capabilities. Our hope, however, is to keep it concise and minimal to ease implementation and adoption. Over time there will be many different types of platform components that applications can use and which need management. CAMP at this point only includes one example of this (in an appendix) – Database as a Service. I am looking forward to the start of the CAMP Technical Committee in OASIS and will do my best to ensure a successful development process. Hope to see you there.
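    For readers who want a feel for what the RESTful JSON binding means in practice, here is a small, hypothetical C# sketch of a client reading an application resource from a CAMP-style endpoint. The URL and resource shape are invented for illustration; the real endpoints and media types are whatever the finalized specification and your PaaS provider define.

    ```csharp
    using System;
    using System.Net.Http;
    using System.Net.Http.Headers;
    using System.Threading.Tasks;

    class CampClientSketch
    {
        // Hypothetical endpoint; a real PaaS provider would publish its own CAMP base URI.
        const string ApplicationUri = "https://paas.example.com/camp/assemblies/my-app";

        static async Task Main()
        {
            using (var http = new HttpClient())
            {
                // Ask for the JSON encoding that CAMP uses for its model resources.
                http.DefaultRequestHeaders.Accept.Add(
                    new MediaTypeWithQualityHeaderValue("application/json"));

                // GET the application resource over plain HTTP.
                HttpResponseMessage response = await http.GetAsync(ApplicationUri);
                response.EnsureSuccessStatusCode();

                string json = await response.Content.ReadAsStringAsync();
                Console.WriteLine(json);   // components and platform component links live in here
            }
        }
    }
    ```

    The point is simply that a CAMP client is ordinary HTTP plus JSON, which is what allows an ADE to target multiple platform clouds through one common interface.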

    Read the article

  • HTML5 Boilerplate template for ASP.NET with Visual Studio 2010

    - by Harish Ranganathan
    This is the 5th post in the series of HTML5 for ASP.NET Developers. Support for HTML5 in Visual Studio 2010 has been quite good with Visual Studio Service Pack 1. However, the HTML5 Boilerplate template has been one of the most popular HTML5 templates on the internet.  Now, there is one for your favorite ASP.NET Webforms as well as ASP.NET MVC 3 Projects (even for ASP.NET MVC 2).  And it's available in the most optimal place, i.e. NuGet. Let's see it in action.
    Let us fire up Visual Studio 2010 and create a "File – New Project – ASP.NET Web Application" and leave the default name to create the project.  The default project template creates Site.Master, Default.aspx and the Account (membership) files. When you run the project without any changes, it shows the default Master Page with the Home and About placeholder pages. Also, just to check the rendering on devices, let's try running the same page in the Windows Phone 7 Emulator.  You can download the SDK from here. Clearly, it looks bad on the emulator and if we were to publish the application as is, it's going to be the same experience when users browse this app.
    Close the browser and then switch to Visual Studio.   Right click on the project and select "Manage NuGet Packages". The NuGet Package Manager dialog opens up.  Search for HTML5 Boilerplate.  The options for MVC & Web Forms show up.  Click on Install corresponding to the "Add HTML5 Boilerplate to Web Forms" option. It installs the template in a few seconds.
    Once installed, you will be able to see a lot of additional Script files and also the all-important HTML5Boilerplate.Master file.  This would be the replacement for the default Site.Master.  We need to change the Content Pages (Default.aspx & other pages) to point to this Master Page.  For example, <%@ Master Language="C#" AutoEventWireup="true" CodeBehind="Html5Boilerplate.Master.cs" Inherits="WebApplication14.SiteMaster" %> would be the setting in the Default.aspx Page. You can do a Find & Replace for Site.Master to HTML5Boilerplate.Master for the whole solution so that it is changed in all the locations.
    With this, we have our Webforms application ready with HTML5 capabilities.  Needless to say, we need to wire up HTML5 markup-level code, canvas, etc., further to use the actual HTML5 features, but even without that, the page is now HTML5'ed.   One of the advantages of HTML5 (here HTML5 collectively refers to CSS3, JavaScript enhancements, etc.) is the ability to render the pages better on mobile and hand held devices. So, now when we run the page from Visual Studio, the following is what we get.  Notice the site icon automatically added.  The page otherwise looks similar to what it was earlier. Now, when we also check this page on the Windows Phone Emulator, here below is what we get.
    As you can see, we definitely get a better experience now.  Of course, this is not the only HTML5 feature that we can use.  We need to wire up additional code for using Canvas, SVG and other HTML5 features.  But, definitely, this is a good starting point. You can also install the HTML5 Boilerplate template for your ASP.NET MVC 3 and ASP.NET MVC 2 projects from the NuGet packages and get them ready for HTML5. Cheers !!!

    Read the article

  • Issues printing through ssh tunnel and port forwarding

    - by simogasp
    I'm having some problems trying to print through a ssh tunnel. I'd like to print from my laptop to a network printer (Toshiba es453, for what matters) which is in a local network. I can reach the local network using a gateway. So far I did the following: ssh -N -L19100:<Printer_IP>:9100 <username>@<ssh_gateway> Basically i just mapped the port 19100 of my laptop directly to the input port of the printer, passing through the gateway. So far, so good. Then, i tried to install on my laptop a new printer with the GUI config tool of ubuntu, so that the new printer is on localhost at port 19100 (as APP Socket/HP Jet Direct) , then I provided the proper driver of the printer. In theory, once the tunnel is open I should be able to print from any program just selecting this printer. Of course, it does not work. :-) The document hangs in the queue with status Processing while in the shell where I set up the tunnel I get these errors on failing opening channels debug1: Local forwarding listening on ::1 port 19100. debug1: channel 0: new [port listener] debug1: Local forwarding listening on 127.0.0.1 port 19100. debug1: channel 1: new [port listener] debug1: Requesting [email protected] debug1: Entering interactive session. debug1: Connection to port 19100 forwarding to 195.220.21.227 port 9100 requested. debug1: channel 2: new [direct-tcpip] debug1: Connection to port 19100 forwarding to 195.220.21.227 port 9100 requested. debug1: channel 3: new [direct-tcpip] channel 2: open failed: connect failed: Connection timed out debug1: channel 2: free: direct-tcpip: listening port 19100 for 195.220.21.227 port 9100, connect from ::1 port 44434, nchannels 4 debug1: Connection to port 19100 forwarding to 195.220.21.227 port 9100 requested. debug1: channel 2: new [direct-tcpip] channel 3: open failed: connect failed: Connection timed out debug1: channel 3: free: direct-tcpip: listening port 19100 for 195.220.21.227 port 9100, connect from ::1 port 44443, nchannels 4 channel 2: open failed: connect failed: Connection timed out debug1: channel 2: free: direct-tcpip: listening port 19100 for 195.220.21.227 port 9100, connect from ::1 port 44493, nchannels 3 debug1: Connection to port 19100 forwarding to 195.220.21.227 port 9100 requested. debug1: channel 2: new [direct-tcpip] As a further debugging test I tried the following. From a machine inside the local network I did a telnet <IP_printer> 9100, got access, wrote some random thing, closed the connection and correctly I got a print of what I had written. So the port and the ip of the printer should be correct. I tried the same from my laptop with the tunnel opened, the telnet succeeded but, again, the printer didn't print anything, getting the usual channel x: open failed: errors. I'm not a great expert on the matter, I just thought that in theory it was possible to do something like that, but maybe there is something that I didn't consider or I did wrong. Any clue? Thanks! Simone [update] As further debugging test, I tried to replicate the procedure from a machine in the local network. From that machine, I did ssh -N -L19100:<IP_printer>:9100 <username>@<ssh_gateway> (note that now the machine, the gateway and the printer are in the same local network) then I tried again the telnet test with telnet localhost 19100, I got access and everything, but I didn't get the print but the usual error channel 2: open failed: connect failed: Connection timed out Maybe I am missing some other connection to be forwarded or maybe this is not allowed by the administrators. 
Of course, if I connect via ssh tunneling to the local machine from my laptop through the gateway, I can successfully print using the lpr command (from the local machine). But this is what I would like to avoid (yes, I'm lazy...:-), I would like to have a more 'elegant' and transparent way to do that.

    Read the article

  • Recover Lost Form Data in Firefox

    - by Asian Angel
    Have you ever filled in a text area or form in a webpage and something happens before you can finish it? If you like the idea of recovering that lost data then you will want to have a look at the Lazarus: Form Recovery extension for Firefox. Lazarus: Form Recovery in Action For our first example we chose the comment text box area for one of the articles here at the website. As you can see we were not finished typing in the whole comment yet… Notice the “Lazarus Icon” in the lower right corner. Note: We simulated accidental tab closures for our two examples. After getting our webpage opened up again all of our text was gone. Right clicking within the text area showed two options available…”Recover Text & Recover Form”. Notice that our lost text was listed as a “sub menu”…this could be extremely useful in matching up the appropriate text to the correct webpage if you had multiple tabs open before something happened. Click on the correct text listing to insert it. So easy to finish writing our comment without having to start from zero again. In our second example we chose the sign-up form page for the website. As before we were not finished filling in the form… Getting the webpage opened back up showed the same problem as before…all the entered text was lost. This time we right clicked in the browser window area and there was that wonderful “Recover Form Command” waiting to be used. One click and… All of our lost form data was back and we were able to finish filling in the form. For those who may be interested you can disable Lazarus: Form Recovery on individual websites using the “Context Menu” for the “Status Bar Icon” Options There are three sections in the options and you should take a quick look through them to make any desired modifications in how Lazarus: Form Recovery functions. The first “Options Area” focuses on display/access for the extension. The second “Options Area” allows you to expand the type of data retained, enable removal of data within a given time frame, set up a password, disable search indexing, and enable form data retention while in “Private Browsing Mode”. The third “Options Area” focuses on the Lazarus database itself. Conclusion If you have ever lost text area or form data before then you know how much time could be lost in starting over. Lazarus: Form Recovery helps provide a nice backup solution to get you up and running once again with a minimum of effort. Links Download the Lazarus: Form Recovery extension (Mozilla Add-ons) Download the Lazarus: Form Recovery extension (Extension Homepage)

    Read the article

  • Combine the Address & Search Bars in Firefox

    - by Asian Angel
    The Search Bar in Firefox is very useful for finding additional information or images while browsing but the UI space it takes up can be frustrating at times. Now you can reclaim that UI space and still have access to all that searching goodness with the Foobar extension. Note: This is about the Foobar Firefox extension and not to be confused with Foobar2000 the open source music player. Before If you have the “Search Bar” displayed there is no doubt that it is taking up valuable space in your browser’s UI. What you need is the ability to reclaim that UI space and still have the same access to your search capability as before…no more sacrificing one for a gain with the other. After As soon as you have installed the extension you can see that the top part of your browser will look much sleeker without the “Search Bar” to clutter it up. The “Search Engine Icon” will now be visible inside of your “Address Bar” as seen here. You will be able to access the same “Search Engine Menu” as before by clicking on the “Search Engine Icon”. There are two display modes for search results (setting available in the “Options”). The first one shown here is “Simple Mode” where all results are in a condensed format. Notice that not only are there search suggestions but also “Bookmarks & History” listings as well. You can literally get the best of both when conducting a search. Note: The number of entries for search suggestions and bookmark/history listings can be adjusted higher or lower in the “Options”. The second one is “Rich Mode” where the results are shown with more details. Choose the “mode” that best suits your personal style. For our first example you can see the results when we conducted a quick search on “Windows 7” (using the first of the three offerings shown from Bing). Our second example was a search for “Flowers” using our Photobucket search engine. Once again nice results opened in a new tab for us. Options The options are easy to go through. It is really nice to be able to choose the number of results that you want displayed and the format that you want them shown in. Note: Changing the “Suggestion popup style” will require a browser restart to take effect. Conclusion If you love using the “Search Bar” in Firefox but want to reclaim the UI space then you will definitely want to add this extension to your browser. The ability to customize the number of results and choose the formatting make this extension even better. Links Download the Foobar extension (Mozilla Add-ons)

    Read the article

  • IIS not starting: The process cannot access the file because it is being used by another process

    - by Rick Strahl
    Ok, apparently a few people knew about this issue, but it is new to me and has caused me nearly an hour to track down today. What happened is that I’ve been working all day doing some final pre-deployment testing of several tools on my local dev machine. In the process I’ve been starting and stopping several IIS 7 Web sites. At some point I was done and just wanted to start my Default Web Site again and found this little gem of an error message popping up: The process cannot access the file because it is being used by another process. (Exception from HRESULT: 0x80070020) A lot of headless running around ensued after this, trying to figure out why IIS wouldn’t start. Oddly some sites started right up, others didn’t. I killed INetInfo, all worker processes, tried IISReset a million times and even rebooted – all to no avail. What gives?
    Skype, you evil Bastard! As it turns out the culprit is – drum roll please - Skype!  What, you may ask, does Skype have to do with IIS and Web Requests? It looks like recent versions of Skype have an option to run over Port 80 and 443 to allow running over corporate firewalls. Which is actually a nice feature that lets Skype work just about anywhere. What’s not so cool is that IIS fails to start up when another application is already using the same port that a Web site is mapped to. In the case of my dev site that’d be port 80 and Skype was hogging it.
    To fix this issue you can stop Skype from using port 80 and 443 which quickly fixes the problem. Or stop Skype. Duh! To permanently fix the problem in Skype find the option on the Options | Connection tab and uncheck the Use port 80/443 option: Oddly I haven’t run into this problem even though my setup hasn’t changed in quite some time. It appears that it’s bad startup timing that causes this problem to occur. Whatever the circumstance was, Skype somehow ended up starting before IIS.  If Skype is started after IIS has started it will automatically opt for other ports and not use port 80 and so there’s no problem.
    It’s easy to demonstrate this behavior if you’re looking for it:
    Stop IIS
    Stop Skype
    Start Skype and make a test call
    Start IIS
    And voila, your error is ready for you!
    This really shouldn’t be a problem except that it would be really nice if IIS could give a more helpful error message when it can’t fire up a site because a port is blocked. “The process cannot access a file” is really not a very helpful error message in this scenario… I/O port / file ah what the heck it’s all the same to Windows. Right! I’ve run into this situation quite a bit with other, albeit more obvious applications like running Apache on the local machine for testing and then trying to run an IIS application. Same situation, although it’s been a while – pre IIS 7 and I think previous versions of IIS actually gave more useful error messages for port blockages and that would be helpful.
    On the way to figuring this out I ran into some pretty humorous forum posts though with people ragging on why the hell you would be running IIS. Or Skype. The misinformed paranoia police out in full force so to say :-). It’ll be nice to start running IIS Express once Visual Studio 2010 SP1 gets released. Anyway, no surprise that Skype didn’t jump out at me as the culprit right away and I was left fumbling for a while until the Internet came to the rescue. I’m not the first to have found this for sure – I posted a message on Twitter and dozens of people replied they’d run into this before as well.
Seems worth mentioning again though – since I’m sure to forget that this happened in a year from now when I hit that same error. Maybe I’ll even find this blog post to remind me…
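    Since the error message gives no hint that a port conflict is to blame, one quick self-check is to ask Windows whether anything is already listening on port 80 before digging into IIS configuration. Here is a small, hypothetical C# diagnostic sketch; it only tells you that the port is taken – netstat -ano or Sysinternals TCPView will name the process holding it.

    ```csharp
    using System;
    using System.Linq;
    using System.Net.NetworkInformation;

    class PortCheck
    {
        static void Main()
        {
            // Enumerate every active TCP listener on the machine.
            var listeners = IPGlobalProperties.GetIPGlobalProperties().GetActiveTcpListeners();

            bool port80Taken = listeners.Any(endpoint => endpoint.Port == 80);

            Console.WriteLine(port80Taken
                ? "Something is already listening on port 80 - IIS will not be able to bind to it."
                : "Port 80 is free.");
        }
    }
    ```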

    Read the article

  • Exposed: Fake Social Marketing

    - by Mike Stiles
    Brands and marketers who want to build their social popularity on a foundation of lies are starting to face more of an uphill climb. Fake social is starting to get exposed, and there are a lot of emperors getting caught without any clothes. Facebook is getting ready to do a purge of “Likes” on Pages that were a result of bots, fake accounts, and even real users who were duped or accidentally Liked a Page. Most of those accidental Likes occur on mobile, where it’s easy for large fingers to hit the wrong space. Depending on the degree to which your Page has been the subject of such activity, you may see your number of Likes go down. But don’t sweat it, that’s a good thing.
    The social world has turned the corner and assessed the value of a Like. And the verdict is that a Like is valuable as an opportunity to build a real relationship with a real customer. Its value pales immensely compared to a user who’s actually engaged with the brand. Those fake Likes aren’t doing you any good. Huge numbers may once have impressed, but they’re not fooling anybody anymore. Facebook’s selling point to marketers is the ability to use a brand’s fans to reach friends of those fans. Consequently, there has to be validity and legitimacy to a fan count.
    Speaking of mobile, Trademob recently reported 40% of clicks are essentially worthless, because 22% of them are accidental (again with the fat fingers), while 18% are trickery. Publishers will put huge banner ads next to tiny app buttons to increase the odds of an accident. Others even hide a banner behind another to score 2 clicks instead of 1. Pontiflex and Harris Interactive last year found 47% of users were more likely to click a mobile ad accidentally than deliberately. Beyond that, hijacked devices are out there manipulating click data. But to what end for a marketer? What’s the value of a click on something a user never even saw? What’s the value of a seen but accidentally clicked ad if there’s no resulting transaction?
    Back to fake Likes, followers and views; they’re definitely for sale on numerous sites, none of which I’ll promote. $5 can get you 1,000 Twitter followers. You can even get followers targeted by interests. One site was set up by an unemployed accountant out of his house in England. He gets them from a wholesaler in Brooklyn, who gets them from a 19-year-old supplier in India. The unemployed accountant is making $10,000 a day. That means a lot of brands, celebrities and organizations are playing the fake social game, apparently not coming to grips with the slim value of the numbers they’re buying.
    But now, in addition to having paid good money for non-ROI numbers, there’s the embarrassment factor. At least a couple of sites have popped up allowing anyone to see just how many fake and inactive followers you have. Britain’s Fake Follower Check and StatusPeople are the two getting the most attention. Enter any Twitter handle and the results are there for all to see. Fake isn’t good, period. “Inactive” could be real followers, but if they’re real, they’re just watching, not engaging. If someone runs a check on your Twitter handle and turns up fake followers, does that mean you’re suspect or have purchased followers? No. Anyone can follow anyone, so most accounts will have some fakes. Even account results like Barack Obama’s (70% fake according to StatusPeople) and Lady Gaga’s (71% fake) don’t mean these people knew about all those fakes or initiated them.
Regardless, brands should realize they’re now being watched, and users are judging the legitimacy of their social channels. Use one of any number of tools available to assess and clean out fake Likes and followers so that your numbers are as genuine as possible. And obviously, skip the “buying popularity” route of social marketing strategy. It doesn’t work and it gets you busted…a losing combination.

    Read the article

  • Application Performance: The Best of the Web

    - by Michaela Murray
    Wisdom A deep understanding and realization […] resulting in the ability to apply perceptions, judgements and actions. It is also the comprehension of what is true coupled with optimum judgment as to action. - Wikipedia We’re writing a book for ASP.NET developers, and we want you to be a part of it. We know that there’s a huge amount of web developer wisdom that never gets shared, and we want to find those golden nuggets of knowledge and experience, and make sure everyone can learn from them. Right now, we want to find out about your top tips, hard-won lessons, and sage advice for avoiding, finding, and fixing application performance problems. If you work with .NET and SQL, even better – a lot of application performance relies on the interaction with the database, so we want to hear from you! “How Do You Want Me To Be Involved?” Right! Details! We want you, our most excellent readers, to email us with the Best Advice you would give to other developers for getting the best performance out of their applications. It doesn’t matter if your advice is for newbies or veterans, .NET or SQL – so long as it’s about application performance, we want to hear from you. (And if you think that there’s developer wisdom out there that “everyone knows”, a) I’m willing to bet you could find someone who doesn’t know about it, and b) it probably bears repeating anyway!) “I’m Interested. What Can You Do For Me?” Excellent question. For starters, there’s a chance to win a Microsoft Surface (the tablet, not the table-top). Once all the ASP.NET Wisdom has been collected, tallied, and labelled, it will then be weighed and measured by a team of expert judges (whose identities are still a closely-guarded secret).  The top tip in both SQL & .NET categories will each win their author their very own MS Surface. But that’s not all! We can also give you… immortality! More details? Ok. We’ll be collecting all of the tips sent in by our readers (and we can’t wait to learn from you all,) and with the help of our Simple-Talk editors, we will publish and distribute your combined and documented knowledge as a free, community-created, professionally typeset eBook. You will naturally be credited by name / pseudonym / twitter handle / GitHub username / StackOverflow profile / Whatever, as the clearly ingenious author of hot performance tips. The Not-Very-Fine Print Here’s the breakdown: We want to bring together the best application performance knowledge from ASP.NET developers. Closing date for submissions will be 9am GMT, December 4th. Submissions should be made by email – [email protected] Submissions will be judged by a panel of expert judges (who will be revealed soon). The top submission in both the SQL & .NET categories will each win a Microsoft Surface. ALL the tips which make it through the judging process will be polished by Simple-Talk editors, and turned into a professionally typeset eBook, which will be freely available, and promoted alongside the ANTS Performance Profiler tool. Anyone whose entry makes it into the book will be clearly and profusely credited in the method of their choice (or can remain anonymous.) The really REALLY short version Share what you know about ASP.NET application performance for a chance to win a Microsoft Surface, and then get your name credited in a slick eBook with top-notch production values. For more details, see above. We can’t wait to learn from you!
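    To give a flavor of the kind of nugget we mean – purely as a hypothetical illustration, not a submission – here is the classic "open late, close early, and let connection pooling do the work" database-access tip expressed in C# (the connection string, table, and column names are made up):

    ```csharp
    using System.Data.SqlClient;

    static class OrderQueries
    {
        public static int GetOrderCount(string connectionString, int customerId)
        {
            // Dispose the connection as soon as possible; the pool keeps the physical
            // connection alive, so open-late/close-early costs almost nothing.
            using (var connection = new SqlConnection(connectionString))
            using (var command = new SqlCommand(
                "SELECT COUNT(*) FROM dbo.Orders WHERE CustomerId = @customerId", connection))
            {
                // Parameterize instead of concatenating strings - better plan reuse, no injection.
                command.Parameters.AddWithValue("@customerId", customerId);
                connection.Open();
                return (int)command.ExecuteScalar();
            }
        }
    }
    ```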

    Read the article

  • Visual Studio Little Wonders: Quick Launch / Quick Access

    - by James Michael Hare
    Once again, in this series of posts I look at features of Visual Studio that may seem trivial, but can help improve your efficiency as a developer. The index of all my past little wonders posts can be found here. Well, my friends, this post will be a bit short because I’m in the middle of a bit of a move at the moment.  But, that said, I didn’t want to let the blog go completely silent this week, so I decided to add another Little Wonder to the list for the Visual Studio IDE. How often have you wanted to change an option or execute a command in Visual Studio, but can’t remember where the darn thing is in the menu, settings, etc.?  If so, Quick Launch in VS2012 (or Quick Access in VS2010 with the Productivity Power Tools extension) is just for you! Quick Launch / Quick Access – find a command or option quickly For those of you using Visual Studio 2012, Quick Launch is built right into the IDE at the top of the title bar, near the minimize, maximize, and close buttons: But do not despair if you are using Visual Studio 2010, you can get Quick Access from the Productivity Power Tools extension.  To do this, you can go to the extension manager: And then go to the gallery and search for Productivity Power Tools and install it.  If you don’t have VS2012 yet, then the Productivity Power Tools is the next best thing.  This extension updates VS2010 with features such as Quick Access, the Solution Navigator, searchable Add Reference Dialog, better tab wells, etc.  I highly recommend it! But back to the topic at hand!  In VS2012 Quick Launch is built into the IDE and can be accessed by clicking in the Quick Launch area of the title bar, or by pressing CTRL+Q.  If you have VS2010 with the PPT installed, though, it is called Quick Access and is accessible through View –> Quick Access: Regardless of which IDE you are using, the feature behaves mostly the same.  It allows you to search all of Visual Studio’s commands and options for a particular topic.  For example, let’s say you want to change from tabs to tabs expanded to spaces, but don’t remember where that option is buried.  You can bring up Quick Launch / Quick Access and type in “tabs”: And it brings up a list of all options on tabs, you can then choose the one appropriate to you and click on it and it will take you right there! A lot easier than diving through the options tree to find what you are looking for!  It also works on menu commands, for example if you can’t remember how to open the Output window: It shows you the menu items that will get you to the Output window, and (if applicable) the keyboard shortcuts.  Again, clicking on one of these will perform the action for you as well. There are also some tasks you can perform directly from Quick Launch / Quick Access.  For example, perhaps you are one of those people who like to have the line numbers in your editor (I do), so let’s bring up Quick Launch / Quick Access and type “line numbers”: And let’s select Turn Line Numbers On, and now our editor looks like: And Voila!  We have line numbers in VS2010.  You can do this in VS2012 too, but it takes you to the option settings instead of directly turning them off and on.  There are bound to be differences between the way the two editors organize settings and commands, but you get the point. So, as you can see, the Quick Launch / Quick Access feature in Visual Studio makes it easy to jump right to the options, commands, or tasks you are interested in without all the digging. 
    Summary: An IDE as powerful as Visual Studio has so many options and commands that it can be confusing to remember how to find and invoke them.  Quick Launch (Quick Access in VS2010 with the Productivity Power Tools extension) is a handy way to jump to any of these options, commands, or tasks quickly without having to remember in what menu or screen they are buried!  Technorati Tags: C#,CSharp,.NET,Little Wonders,Visual Studio,Quick Access,Quick Launch

    Read the article

  • JD Edwards Apps in a Box - Update

    - by Hartmut Wiese
    Summary and clarification: JD Edwards Apps in a Box is a Partner offering to the customer. We at Oracle have a huge interest in getting a successful offering to the market, and we help the Partner build their offering. We provide components like JD Edwards EnterpriseOne and the hardware; the Business Partner adds the installation services and positions this as a solution to the market for a single price. As you know, JD Edwards EnterpriseOne can run on multiple hardware platforms.
    Linux/X-86 version: As you all know, we have JD Edwards VM Templates available from Oracle for the X-86 architecture. Each Partner should be, or already is, able to install JD Edwards EnterpriseOne using these images from our software delivery cloud. We have now built a master bill of material for an X3-2 hardware configuration, and it has been uploaded to the Community Workspace. This is a SUGGESTION and is limited to 50 users MAX. However, I strongly recommend that you do a sizing as usual and verify the configuration for each opportunity individually.
    T4-1/X3-2 version: Oracle is not providing similar images for the T4-1 SPARC/Solaris architecture. There is an Optimized Solution Team inside Oracle that created an Optimized Solution for JD Edwards some time ago. They created a whitepaper which is still available to download. This whitepaper was used as a starting point; however, we decided to build a new version of it using the latest software and hardware available. This has now been finalized and we are happy to provide it to our partners. This image is more of a service we provide for each partner, which they can reuse and extend based on their individual offerings. It is not an officially supported Oracle product and cannot be used to deploy to customers immediately. You cannot resell “JDE in a box”. You can use these images to save time while building your own go-to-market offering. You might want to add functionality like Mobility. It is also not complete, as the Deployment Server needs to be configured individually at the customer site. We will create some documentation about what this image contains (and what it does not), and about what final installation activities need to be provided by each VAD/Partner in this process. I will send an email to the community once we are ready to share it. You will then find these assets in the Community Workspace.
    The Business Model with Oracle Hardware: For those who have not done any hardware business with Oracle yet: usually a HW reseller orders the hardware through a Value Add Distributor (VAD) and not from Oracle directly. Each Partner needs to have hardware resell rights to do so. The VAD assembles the boxes according to the needs of each customer, and it is easily possible for them to prepare the boxes with the images we/you provide. However, the final configuration is something a reseller/implementer needs to do at the customer site. This process is not the same everywhere in the EMEA region. Sometimes a VAD takes the order but does not see the hardware at all; in those cases the VAD cannot provide any help with the pre-loading of any images and the reseller/implementer needs to do that. In some countries we do not have VADs at all.

    Read the article

  • Tips & Tricks: How to crawl a SSL enabled Oracle E-Business Suite

    - by Rajesh Ghosh
    Oracle E-Business Suite can be integrated with Oracle Secure Enterprise Search for a superior end user experience and enhanced data retrieval capabilities. Before end users can perform search operations, data has to be crawled and indexed into the Oracle SES server. However, if the Oracle E-Business Suite instance is on SSL, some additional configuration is needed in the Oracle SES server as well as in Oracle Search Modeler before a search object can be deployed and crawled. The process involves the following steps:
    Step 1: Export the SSL certificate of Oracle E-Business Suite. Access the Oracle E-Business Suite instance from a web browser. You should be able to locate a security or certificate icon somewhere in the browser toolbar or status bar, depending on which browser you are using. Click on it and you should be able to view the certificate as well as export it to a local file. While exporting, make sure that you use the “DER encoded” format.
    Step 2: Import the SSL certificate into the Oracle Secure Enterprise Search server’s Java key store. Oracle SES (10.1.8.4) by default ships a JDK under $ORACLE_HOME, and the Oracle SES mid-tier uses this JDK to start the OC4J container services. In this step the Oracle E-Business Suite SSL certificate exported in step #1 has to be imported into the Oracle SES server’s Java key store. Perform the following: Copy the certificate file onto the server where the Oracle SES server is running, under $ORACLE_HOME/jdk/jre/lib/security/cacerts (“ORACLE_HOME” points to the Oracle SES Oracle home). Set the JAVA_HOME environment variable to $ORACLE_HOME/jdk. Append $JAVA_HOME/bin to the PATH environment variable. Issue the command: “keytool -import -keystore keystore.jks -trustcacerts -alias myOHS -file ebs.crt”. Please substitute “ebs.crt” with the name of the certificate file you copied in step #2.1. The default key-store password is “changeit”; enter it when prompted. If successful, this process will end with a message saying “certificate successfully imported”.
    Step 3: Import the SSL certificate into the Search Modeler Java key store. Unlike Oracle SES, Search Modeler is not shipped with a bundled JDK. If you are using standalone OC4J, then you actually use an external JDK to start the OC4J container services. If you are using an IAS instance, then the JDK comes bundled with the IAS installation. Perform the following: Copy the certificate file onto the server where the Search Modeler application is running, under $JDK_HOME/jre/lib/security/cacerts (“JDK_HOME” points to the JDK directory, depending on whether you are using an external JDK or a bundled one). Set the JAVA_HOME environment variable to the JDK directory. Append $JAVA_HOME/bin to the PATH environment variable. Issue the command: “keytool -import -keystore keystore.jks -trustcacerts -alias myOHS -file ebs.crt”. Please substitute “ebs.crt” with the name of the certificate file you copied in step #3.1. The default key-store password is “changeit”; enter it when prompted. If successful, this process will end with a message saying “certificate successfully imported”.
    Once you have completed the above steps successfully, you can deploy the search objects using Search Modeler and then start crawling them as well.
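    If a desktop browser is not handy, the certificate export in Step 1 can also be scripted. The snippet below is a minimal sketch, not part of the original article, using Python's standard ssl module; the host name and port are placeholders you would replace with your own E-Business Suite instance, and the output file name simply reuses the "ebs.crt" example from the steps above.
        import ssl

        # Hypothetical E-Business Suite host and port; replace with your own instance.
        EBS_HOST = "ebs.example.com"
        EBS_PORT = 443

        # Fetch the server certificate in PEM form, then convert it to the
        # DER encoding that the keytool imports in steps 2 and 3 expect.
        pem_cert = ssl.get_server_certificate((EBS_HOST, EBS_PORT))
        der_cert = ssl.PEM_cert_to_DER_cert(pem_cert)

        with open("ebs.crt", "wb") as f:
            f.write(der_cert)

        print("Wrote DER-encoded certificate to ebs.crt")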

    Read the article

  • Creating a Synchronous BPEL composite using File Adapter

    - by [email protected]
    By default, the JDeveloper wizard generates asynchronous WSDLs when you use technology adapters. Typically, a user follows these steps when creating an adapter scenario in 11g: 1) Create a SOA Application with either "Composite with BPEL" or an "Empty Composite". Furthermore, if the user chooses "Empty Composite", then he or she is required to drop the "BPEL Process" from the "Service Components" pane onto the SOA Composite Editor. Either way, the user comes to the screen below where he/she fills in the process details. Please note that the user is required to choose "Define Service Later" as the template. 2) Create the inbound service and outbound references and wire them with the BPEL component. 3) Finally, create the BPEL process with the initiating <receive> activity to retrieve the payload and an <invoke> activity to write the payload. This is how most BPEL processes that use adapters are modeled. And if we scrutinize the generated WSDL, we can clearly see that it is one-way, which makes the BPEL process asynchronous (see below). In other words, the inbound FileAdapter would poll for files in the directory and, for every file that it finds there, translate the content into XML and publish it to BPEL. But since the BPEL process is asynchronous, the adapter would return immediately after the publish and perform the required post-processing, e.g. deletion/archival and so on. The disadvantage with such asynchronous BPEL processes is that it becomes difficult to throttle the inbound adapter. In other words, the inbound adapter would keep sending messages to BPEL without waiting for the downstream business processes to complete. This might lead to several issues, including higher memory usage, CPU usage and so on. In order to alleviate these problems, we will manually tweak the WSDL and BPEL artifacts into synchronous processes. Once we have synchronous BPEL processes, the inbound adapter would automatically throttle itself, since the adapter would be forced to wait for the downstream process to complete with a <reply> before processing the next file or message. Please see the tweaked WSDL below and note that we have converted the one-way WSDL into a two-way WSDL, thereby making it synchronous. Then add a <reply> activity to the inbound adapter partnerlink at the end of your BPEL process. Finally, your process will look like the screenshot below. You are done. Please remember that such an exercise is NOT required for Mediator, since the Mediator routing rules are sequential by default. In other words, the Mediator uses the caller thread (the inbound file adapter thread) for processing the routing rules. This is the case even if the WSDL for the Mediator is one-way.

    Read the article

  • LSI SAS 9240-8i on Ubuntu 12.04 Hangs on Modprobe

    - by Francois Stark
    I used the LSI 9240-8i card on a smaller Intel motherboard with no problems in Ubuntu, with ZFS. However, we rebuilt the server to allow for more disks, using the ASROCK X79 Extreme 11 motherboard. It has 7 PCIe slots, and a LSI 2008 on-board. At first I thought the LSI 9240, when plugged in to PCIe, clashed with the on-board LSI 2008. Every time I plugged in the LSI 9240, modprobe would hang. Then I completely disabled the on-board LSI 2008, and the problem persisted. Last night it booted perfectly ONCE - all LSI cards and connected disks visible... However, all subsequent reboots failed. Both LSI cards' bios scans appear and they both see the disks connected to them, but Ubuntu modprobe hangs. Some selected dmesg lines, with both LSI cards enabled: [ 190.752100] megasas: [ 0]waiting for 1 commands to complete [ 195.772071] megasas: [ 5]waiting for 1 commands to complete [ 200.792079] megasas: [10]waiting for 1 commands to complete [ 205.812078] megasas: [15]waiting for 1 commands to complete [ 210.832037] megasas: [20]waiting for 1 commands to complete [ 215.852077] megasas: [25]waiting for 1 commands to complete [ 220.872072] megasas: [30]waiting for 1 commands to complete [ 225.892078] megasas: [35]waiting for 1 commands to complete [ 230.912086] megasas: [40]waiting for 1 commands to complete [ 235.932075] megasas: [45]waiting for 1 commands to complete [ 240.306157] usb 2-1.5: USB disconnect, device number 7 [ 240.952076] megasas: [50]waiting for 1 commands to complete [ 240.960034] INFO: task modprobe:233 blocked for more than 120 seconds. [ 240.960055] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message. [ 240.960067] modprobe D ffffffff81806200 0 233 146 0x00000004 [ 240.960075] ffff880806ae3b48 0000000000000086 ffff880806ae3ae8 ffffffff8101adf3 [ 240.960083] ffff880806ae3fd8 ffff880806ae3fd8 ffff880806ae3fd8 0000000000013780 [ 240.960090] ffffffff81c0d020 ffff880806acae00 ffff880806ae3b58 ffff880808961720 [ 240.960096] Call Trace: [ 240.960107] [<ffffffff8101adf3>] ? native_sched_clock+0x13/0x80 [ 240.960116] [<ffffffff816579cf>] schedule+0x3f/0x60 [ 240.960137] [<ffffffffa00093f5>] megasas_issue_blocked_cmd+0x75/0xb0 [megaraid_sas] [ 240.960144] [<ffffffff8108aa50>] ? add_wait_queue+0x60/0x60 [ 240.960154] [<ffffffffa000a6c9>] megasas_get_seq_num+0xd9/0x260 [megaraid_sas] [ 240.960164] [<ffffffffa000ab31>] megasas_start_aen+0x31/0x60 [megaraid_sas] [ 240.960174] [<ffffffffa00136f1>] megasas_probe_one+0x69a/0x81c [megaraid_sas] [ 240.960182] [<ffffffff813345bc>] local_pci_probe+0x5c/0xd0 [ 240.960189] [<ffffffff81335e89>] __pci_device_probe+0xf9/0x100 [ 240.960197] [<ffffffff8130ce6a>] ? kobject_get+0x1a/0x30 [ 240.960205] [<ffffffff81335eca>] pci_device_probe+0x3a/0x60 [ 240.960212] [<ffffffff813f5278>] really_probe+0x68/0x190 [ 240.960217] [<ffffffff813f5505>] driver_probe_device+0x45/0x70 [ 240.960223] [<ffffffff813f55db>] __driver_attach+0xab/0xb0 [ 240.960227] [<ffffffff813f5530>] ? driver_probe_device+0x70/0x70 [ 240.960233] [<ffffffff813f5530>] ? driver_probe_device+0x70/0x70 [ 240.960237] [<ffffffff813f436c>] bus_for_each_dev+0x5c/0x90 [ 240.960243] [<ffffffff813f503e>] driver_attach+0x1e/0x20 [ 240.960248] [<ffffffff813f4c90>] bus_add_driver+0x1a0/0x270 [ 240.960255] [<ffffffffa001e000>] ? 0xffffffffa001dfff [ 240.960260] [<ffffffff813f5b46>] driver_register+0x76/0x140 [ 240.960266] [<ffffffffa001e000>] ? 0xffffffffa001dfff [ 240.960271] [<ffffffff81335b66>] __pci_register_driver+0x56/0xd0 [ 240.960277] [<ffffffffa001e000>] ? 
0xffffffffa001dfff [ 240.960286] [<ffffffffa001e09e>] megasas_init+0x9e/0x1000 [megaraid_sas] [ 240.960294] [<ffffffff81002040>] do_one_initcall+0x40/0x180 [ 240.960301] [<ffffffff810a82fe>] sys_init_module+0xbe/0x230 [ 240.960307] [<ffffffff81661ec2>] system_call_fastpath+0x16/0x1b [ 240.960314] INFO: task scsi_scan_7:349 blocked for more than 120 seconds.

    Read the article

  • Binding a select in a client template

    - by Bertrand Le Roy
    I recently got a question on one of my client template posts asking me how to bind a select tag’s value to data in client templates. I was surprised not to find anything on the web addressing the problem, so I thought I’d write a short post about it. It really is very simple once you know where to look. You just need to bind the value property of the select tag, like this: <select sys:value="{binding color}"> If you do it from markup like here, you just need to use the sys: prefix. It just works. Here’s the full source code for my sample page:
        <!DOCTYPE html>
        <html>
        <head>
          <title>Binding a select tag</title>
          <script src=http://ajax.microsoft.com/ajax/beta/0911/Start.js type="text/javascript"></script>
          <script type="text/javascript">
            Sys.require(Sys.scripts.Templates, function() {
              var colors = [ "red", "green", "blue", "cyan", "purple", "yellow" ];
              var things = [
                { what: "object", color: "blue" },
                { what: "entity", color: "purple" },
                { what: "thing", color: "green" }
              ];
              Sys.create.dataView("#thingList", {
                data: things,
                itemRendered: function(view, ctx) {
                  Sys.create.dataView( Sys.get("#colorSelect", ctx), { data: colors });
                }
              });
            });
          </script>
          <style type="text/css">
            .sys-template {display: none;}
          </style>
        </head>
        <body xmlns:sys="javascript:Sys">
          <div>
            <ul id="thingList" class="sys-template">
              <li>
                <span sys:id="thingName" sys:style-color="{binding color}">{{what}}</span>
                <select sys:id="colorSelect" sys:value="{binding color}" class="sys-template">
                  <option sys:value="{{$dataItem}}" sys:style-background-color="{{$dataItem}}">{{$dataItem}}</option>
                </select>
              </li>
            </ul>
          </div>
        </body>
        </html>
    This produces the following page: Each of the items sees its color change as you select a different color in the drop-down. Other details worth noting in this page are the use of the script loader to get the framework from the CDN, and the sys:style-background-color syntax to bind the background color style property from markup. Of course, I’ve used a fair amount of custom ASP.NET Ajax markup in here, but everything could be done imperatively and with completely clean markup from the itemRendered event using Sys.bind.

    Read the article

  • Welcome to the Oracle Retail International Blog

    - by sarah.taylor(at)oracle.com
    Welcome to the first post of the new Oracle Retail International Blog. Retail is an international business and today's successful retailers view themselves in the context of a global market. A niche fashion business in Tokyo will learn marketing strategies from the luxury brands of Milan, an independent grocer in Oslo will source the same global brands as a supermarket in Oklahoma, and every retailer in the world will measure their multi-channel operation against the international e-commerce giant Amazon.  Why? Because today's customer is a global customer with unparalleled expectations on choice, price and service. Today's consumers have access to more information on retail than ever before. Technology allows people to shop from their home, their office or from the phone in their pocket, wherever they are and at whatever time suits them. Customers are using the web to search for products and promotions. They are also using the web to develop their voice in commenting on products and services that have delighted or disappointed. In an information rich industry, this customer element creates a new world of data. The best retailers are developing eagle eyes for reading customer activity and turning it into profitable decisions. Ultimately, whether you choose to compete or shop on price, service, product innovation, excellent operations or all of the above - the international world of retail has become an inspiration for all - retailer and consumer alike.  Retail as an industry is growing and diversifying at a faster rate than ever before. Yet it is still the customer who picks the winners and the losers on the retail field. Economic circumstances transform the rules, but it is still the customer who dictates the game, the pace, the price, and the perception of the brand. Wise retailers never rest on their laurels. They are always shopping for ideas on how to improve and differentiate the offer at every touch point to meet the customer's needs better than anyone else and to gain each customer's loyalty at a time when loyalty can be cheap. With this blog, I hope that we might provide a hub for discussion around what unifies retail and how technology supports both the retailer and customer experience. Despite the competitive nature of this market, we hope that this will provide an opportunity to share experiences and lessons learnt with a view that knowledge can only help this industry to grow and develop. At Oracle we've been supporting retailers for many years. Many of us have worked within retail organisations all over the world, myself included. With this in mind, I don't feel it is too bold a statement to say that Oracle understands retail. We wouldn't be so heavily integrated in some of the biggest and most well-known names in retail if we didn't. With this blog, we intend to create a community of international retailers that can exchange ideas and experiences, debate collective challenges and drive a better understanding of this continually evolving industry. Events such as the World Retail Congress and NRF's Big Show bring enormous value to the retail industry providing platforms for discussion and learning but they happen once a year. We wanted to create a platform for discussion on a different level and that like retail, is always on. 
    We hope to be not only the infrastructure that brings all of a retailer's systems together within the business, but also an infrastructure that supports the industry internationally, helping it grow and flourish by creating a platform for networking, discussion, creativity, vision and strategy. Please feel free to ask questions or comment using the comments functionality. You might also want to visit our other Oracle Retail social media sites: Facebook - http://www.facebook.com/oracleretail  YouTube - http://www.youtube.com/user/oracleretail  Twitter - http://twitter.com/#!/oracleretail  Insight-Driven Retailing Blog - http://blogs.oracle.com/retail/

    Read the article

  • The All New Hotmail Looks Very Impressive [Video Tour]

    - by Gopinath
    With loads of new features being introduced into GMail every now and then, Microsoft can’t sit and relax any more. Microsoft realized this and worked hard to introduce really impressive features in the upcoming version of Windows Live Hotmail that was previewed a couple of days ago. Most of the new features announced in the upcoming version focus on the most important need of email users – de-cluttering the mail box and managing email overload easily. Here is a highlight of the new features. New Features: Sweep away clutter – This is the most impressive of the new features. It allows you to manage email overload. If you’ve subscribed to a newsletter but decided not to allow it into your inbox, you can activate the sweep feature to move all the messages of the newsletter into a folder other than your inbox. This may sound similar to the filters option in GMail, but the workflow is very easy in Hotmail. Quickly find messages – Easy to use options are provided to see mails in separate views, such as mails from contacts, social networking mail, mails from e-mail subscription services, etc. Now it’s easy to prioritize email checking the way you wish to. I prefer to check mails from my contacts first, then social networking messages and then the newsletter subscriptions. Improved spam detection – The spam detection rules are tightened for better spam protection, and Hotmail also learns from user actions to effectively catch spam. No more mail box storage restrictions – With a smart decision by Microsoft, users no longer need to worry about the storage restrictions of their mail box – large attachments in Hotmail can be stored in Windows Live SkyDrive. With Hotmail, the simplicity of sending photos through email is combined with the power of Windows Live SkyDrive so that you can send up to 200 photos, each up to 50 MB in size, all in a single email; you can send all your vacation photos at once without worrying about attachment limits. Excellent integration with Office Web Apps – Viewing and editing of office documents attached to emails is made very easy by integrating Office Web Apps with Hotmail. When you receive a document/presentation/spreadsheet in Hotmail, you can view it, edit it, save it or even send the modified document back to the original sender – all without leaving Hotmail. Inline viewing options for photos, videos, social network messages – You can view photos embedded in the mail as slideshows (with the help of Silverlight), play YouTube and Hulu videos inline, and track shipping notifications. Threaded conversations – Emails in Hotmail are grouped just like they are in GMail. Others – Enhanced account protection, full-session SSL, multiple email accounts, subfolders, contact management. Video Tour of New Features: Here is an impressive video tour of the new Hotmail features. When are these new features coming to Hotmail? The majority of the new features announced today will be rolled out gradually to all users over the coming weeks, but advanced features like Office integration with Hotmail are expected to take a couple of months to reach general availability. Will you switch back to Hotmail? Will these features lure GMail/Yahoo users to switch back to Hotmail? Maybe not immediately, but these features may keep existing users from leaving Hotmail. I used Hotmail in the pre-GMail era, and now I use my Hotmail id only to sign in to Microsoft websites that require Hotmail authentication. It’s been years since I composed a new email in Hotmail.
    Even though the new features announced by Hotmail are very impressive, I like the way GMail rapidly brings in new features at regular intervals. If Hotmail also keeps innovating with new features at regular intervals, then there are good chances for its old users to return home. Join us on Facebook to read all our stories right inside your Facebook news feed.

    Read the article

  • The Latest News About SAP

    - by jmorourke
    Like many professionals, I get a lot of my news from Google e-mail alerts that I’ve set up to keep track of key industry trends and competitive news.  In the past few weeks, I’ve been getting a number of news alerts about SAP.  Below are a few recent examples: Warm weather cuts short US maple sugaring season – by Toby Talbot, AP MILWAUKEE – Temperatures in Wisconsin had already hit the high 60s when Gretchen Grape and her family began tapping their 850 maple trees. They had waited for the state's ceremonial tapping to kick off the maple sugaring season. It was moved up five days, but that didn't make much difference. For Grape, the typically month-long season ended nine days later. The SAP had stopped flowing in a record-setting heat wave, and the 5-quart collection bags that in a good year fill in a day were still half-empty. Instead of their usual 300 gallons of syrup, her family had about 40. Maple syrup producers across the North have had their season cut short by unusually warm weather. While those with expensive, modern vacuum systems say they've been able to suck a decent amount of sap from their trees, producers like Grape, who still rely on traditional taps and buckets, have seen their year ruined. "It's frustrating," said the 69-year-old retiree from Holcombe, Wis. "You put in the same amount of work, equipment, investment, and then all of a sudden, boom, you have no SAP." Home & Garden: Too-Early Spring Means Sugaring Woes  - by Georgeanne Davis for The Free Press Over this past weekend, forsythia and daffodils were blooming in the southern parts of the state as temperatures climbed to 85 degrees, and trees began budding out, putting an end to this year's maple syrup production even as the state celebrated Maine Maple Sunday. Maple sugaring needs cold nights and warm days to induce SAP flows. Once the trees begin budding, SAP can still flow, but the SAP is bitter and has an off taste. Many farmers and dairymen count on sugaring for extra income, so the abbreviated season is a real financial loss for them, akin to the shortened shrimping season's effect on Maine lobstermen. SAP season comes to a sugary Sunday finale – Kennebec Journal, March 26th, 2012 Rebecca Manthey stood out in the rain at the entrance of Old Fort Western keeping watch over a cast iron kettle of boiling SAP hooked to a tripod over a wood fire.  Manthey and the rest of the Old Fort Western staff -- decked out in 18th-century attire -- joined sugar houses across the state in observance of Maine Maple Sunday. The annual event is sponsored by the Department of Agriculture and the Maine Maple Producers Association.  She said the rain hadn't kept people from coming to enjoy all the events at the fort surrounding the production of Maple syrup.  "In the 18th century, you would be boiling SAP in the woods, so I would be in the woods," Manthey explained to the families who circled around her. "People spent weeks and weeks in the woods. You don't want to cook it to fast or it would burn. When it looks like the right consistency then you send it (into the kitchen) to be made into sugar." Manthey said she enjoyed portraying an 18th-century woman, even in the rain, which didn't seem to bother visitors either. There was a steady stream of families touring the fort and enjoying the maple syrup demonstrations. I hope you enjoy these updates on SAP – Happy April Fool’s Day!

    Read the article

  • Bitmask data insertions in SSDT Post-Deployment scripts

    - by jamiet
    On my current project we are using SQL Server Data Tools (SSDT) to manage our database schema, and one of the tasks we need to do often is insert data into that schema once deployed; the typical method employed to do this is to leverage Post-Deployment scripts and that is exactly what we are doing. Our requirement is a little different though: our data is split up into various buckets that we need to selectively deploy on a case-by-case basis. I was going to use a SQLCMD variable for each bucket (defaulted to some value other than “Yes”) to define whether it should be deployed or not, so we could use something like this in our Post-Deployment script:
        IF ($(DeployBucket1Flag) = 'Yes')
        BEGIN
            :r .\Bucket1.data.sql
        END
        IF ($(DeployBucket2Flag) = 'Yes')
        BEGIN
            :r .\Bucket2.data.sql
        END
        IF ($(DeployBucket3Flag) = 'Yes')
        BEGIN
            :r .\Bucket3.data.sql
        END
    That works fine and is, I’m sure, a very common technique for doing this. It is however slightly ugly because we have to litter our deployment with various SQLCMD variables. My colleague James Rowland-Jones (whom I’m sure many of you know) suggested another technique – bitmasks. I won’t go into detail about how this works (James has already done that at Using a Bitmask - a practical example) but I’ll summarise by saying that you can deploy different combinations of the buckets simply by supplying a different numerical value for a single SQLCMD variable. Each bit of that value’s binary representation signifies whether a particular bucket should be deployed or not. This is better demonstrated using the following simple script (which can be easily leveraged inside your Post-Deployment scripts):
        /*
        $(DeployData) is a SQLCMD variable that would, if you were using this in SSDT, be declared
        in the SQLCMD variables section of your project file. It should contain a numerical value,
        defaulted to 0. In this example I have declared it using a :setvar statement. Test the effect
        of different values by changing the :setvar statement accordingly.
        Examples:
        :setvar DeployData 1   will deploy bucket 1
        :setvar DeployData 2   will deploy bucket 2
        :setvar DeployData 3   will deploy buckets 1 & 2
        :setvar DeployData 6   will deploy buckets 2 & 3
        :setvar DeployData 31  will deploy buckets 1, 2, 3, 4 & 5
        */
        :setvar DeployData 0
        DECLARE @bitmask VARBINARY(MAX) = CONVERT(VARBINARY, $(DeployData));
        IF (@bitmask & 1 = 1)
        BEGIN
            PRINT 'Bucket 1 insertions';
        END
        IF (@bitmask & 2 = 2)
        BEGIN
            PRINT 'Bucket 2 insertions';
        END
        IF (@bitmask & 4 = 4)
        BEGIN
            PRINT 'Bucket 3 insertions';
        END
        IF (@bitmask & 8 = 8)
        BEGIN
            PRINT 'Bucket 4 insertions';
        END
        IF (@bitmask & 16 = 16)
        BEGIN
            PRINT 'Bucket 5 insertions';
        END
    An example of running this using DeployData=6: the binary representation of 6 is 110. The second and third significant bits of that binary number are set to 1 and hence buckets 2 and 3 are “activated”. Hope that makes sense and is useful to some of you! @Jamiet P.S. I used the awesome HTML Copy feature of Visual Studio’s Productivity Power Tools in order to format the T-SQL code above for this blog post.
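    To make the bit arithmetic concrete, here is a small sketch that mirrors the same test outside of T-SQL (written in Python purely as an illustration, not from the original post; the BUCKET_BITS and active_buckets names are made up for this example). It shows which buckets a given DeployData value activates; for instance, deploying buckets 1, 3 and 5 means passing 1 + 4 + 16 = 21.
        # Each bucket owns one bit: bucket 1 -> 1, bucket 2 -> 2, bucket 3 -> 4, and so on.
        BUCKET_BITS = {1: 1, 2: 2, 3: 4, 4: 8, 5: 16}

        def active_buckets(deploy_data: int) -> list[int]:
            """Return the bucket numbers whose bit is set in deploy_data."""
            return [bucket for bucket, bit in BUCKET_BITS.items() if deploy_data & bit == bit]

        print(active_buckets(6))   # [2, 3]      (binary 110)
        print(active_buckets(21))  # [1, 3, 5]   (1 + 4 + 16)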

    Read the article

  • How to detect UTF-8-based encoded strings [closed]

    - by Diego Sendra
    A customer of asked us to build him a multi-language based support VB6 scraper, for which we had the need to detect UTF-8 based encoded strings to decode it later for proper displaying in application UI. It's necessary to point out that this need arises based on VB6 limitations to natively support UTF-8 in its controls, contrary to what it happens in .NET where you can tell a control that it should expect UTF-8 encoding. VB6 natively supports ISO 8859-1 and/or Windows-1252 encodings only, for which textboxes, dropdowns, listview controls, others can't be defined to natively support/expect UTF-8 as you can do in .NET considering what we just explained; so we would see weird symbols such as é, è among others, making it a whole mess at the time of displaying. So, next function contains whole UTF-8 encoded punctuation marks and symbols from languages like Spanish, Italian, German, Portuguese, French and others, based on an excellent UTF-8 based list we got from this link - Ref. http://home.telfort.nl/~t876506/utf8tbl.html Basically, the function compares if each and one of the listed UTF-8 encoded sentences, separated by | (pipe) are found in our passed string making a substring search first. Whether it's not found, it makes an alternative ASCII value based search to get a match. Say, a string like "Societé" (Society in english) would return FALSE through calling isUTF8("Societé") while it would return TRUE when calling isUTF8("SocietÈ") since È is the UTF-8 encoded representation of é. Once you got it TRUE or FALSE, you can decode the string through DecodeUTF8() function for properly displaying it, a function we found somewhere else time ago and also included in this post. Function isUTF8(ByVal ptstr As String) Dim tUTFencoded As String Dim tUTFencodedaux Dim tUTFencodedASCII As String Dim ptstrASCII As String Dim iaux, iaux2 As Integer Dim ffound As Boolean ffound = False ptstrASCII = "" For iaux = 1 To Len(ptstr) ptstrASCII = ptstrASCII & Asc(Mid(ptstr, iaux, 1)) & "|" Next tUTFencoded = "Ä|Ã…|Ç|É|Ñ|Ö|ÃŒ|á|Ã|â|ä|ã|Ã¥|ç|é|è|ê|ë|í|ì|î|ï|ñ|ó|ò|ô|ö|õ|ú|ù|û|ü|â€|°|¢|£|§|•|¶|ß|®|©|â„¢|´|¨|â‰|Æ|Ø|∞|±|≤|≥|Â¥|µ|∂|∑|âˆ|Ï€|∫|ª|º|Ω|æ|ø|¿|¡|¬|√|Æ’|≈|∆|«|»|…|Â|À|Ã|Õ|Å’|Å“|–|—|“|â€|‘|’|÷|â—Š|ÿ|Ÿ|â„|€|‹|›|ï¬|fl|‡|·|‚|„|‰|Â|Ú|Ã|Ë|È|Ã|ÃŽ|Ã|ÃŒ|Ó|Ô||Ã’|Ú|Û|Ù|ı|ˆ|Ëœ|¯|˘|Ë™|Ëš|¸|Ë|Ë›|ˇ" & _ "Å|Å¡|¦|²|³|¹|¼|½|¾|Ã|×|Ã|Þ|ð|ý|þ" & _ "â‰|∞|≤|≥|∂|∑|âˆ|Ï€|∫|Ω|√|≈|∆|â—Š|â„|ï¬|fl||ı|˘|Ë™|Ëš|Ë|Ë›|ˇ" tUTFencodedaux = Split(tUTFencoded, "|") If UBound(tUTFencodedaux) > 0 Then iaux = 0 Do While Not ffound And Not iaux > UBound(tUTFencodedaux) If InStr(1, ptstr, tUTFencodedaux(iaux), vbTextCompare) > 0 Then ffound = True End If If Not ffound Then 'ASCII numeric search tUTFencodedASCII = "" For iaux2 = 1 To Len(tUTFencodedaux(iaux)) 'gets ASCII numeric sequence tUTFencodedASCII = tUTFencodedASCII & Asc(Mid(tUTFencodedaux(iaux), iaux2, 1)) & "|" Next 'tUTFencodedASCII = Left(tUTFencodedASCII, Len(tUTFencodedASCII) - 1) 'compares numeric sequences If InStr(1, ptstrASCII, tUTFencodedASCII) > 0 Then ffound = True End If End If iaux = iaux + 1 Loop End If isUTF8 = ffound End Function Function DecodeUTF8(s) Dim i Dim c Dim n s = s & " " i = 1 Do While i <= Len(s) c = Asc(Mid(s, i, 1)) If c And &H80 Then n = 1 Do While i + n < Len(s) If (Asc(Mid(s, i + n, 1)) And &HC0) <> &H80 Then Exit Do End If n = n + 1 Loop If n = 2 And ((c And &HE0) = &HC0) Then c = Asc(Mid(s, i + 1, 1)) + &H40 * (c And &H1) Else c = 191 End If s = Left(s, i - 1) + Chr(c) + Mid(s, i + n) End If i = i + 1 Loop DecodeUTF8 = s End Function
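    As an aside, not part of the original VB6 code: the byte-level behaviour the function relies on is easy to demonstrate in Python. A UTF-8 encoded string read as Latin-1/Windows-1252 produces exactly the kind of two-character sequences the lookup list above searches for, and re-encoding reverses the damage, much like DecodeUTF8 does.
        # "é" is one code point, but two bytes (0xC3 0xA9) in UTF-8.
        original = "Societé"
        utf8_bytes = original.encode("utf-8")

        # Reading those bytes as Latin-1 yields the mojibake the VB6 function looks for.
        mojibake = utf8_bytes.decode("latin-1")
        print(mojibake)  # SocietÃ©

        # The round trip recovers the intended text.
        repaired = mojibake.encode("latin-1").decode("utf-8")
        print(repaired)  # Societé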

    Read the article

< Previous Page | 514 515 516 517 518 519 520 521 522 523 524 525  | Next Page >