Search Results

Search found 676 results on 28 pages for 'creative'.


  • Why We Do What We Do. (Part 3 of 5 Part Series on JDE 5G Postponed)

    - by Kem Butller-Oracle
    By Lyle Ekdahl - Oracle JD Edwards Sr. VP General Manager. In the closing of part two of this 5 part blog series, I stated that in the next installment I would explore the expected results of the digital overdrive era and the impact it will have on our economy. While I have full intentions of writing on that topic, I am inspired today to write about something that is top of mind. It’s top of mind because it has come up several times recently in conversations with my Oracle JD Edwards team members, with customers and with our partners, plus I feel passionately about why I do what I do.
    It is not what we do but why we do that thing that we do. Do you know what you do? For the most part, I bet you could tell me what you do even if your work has changed over the years. My real question is, “Do you get excited about what you do, and are you fulfilled? Does your work deliver a sense of purpose, a cause to work for, and something to believe in?” Alright, I guess that was not a single question. So let me just ask, “Why?” Why are you here, right now? Why do you get up in the morning? Why do you go to work? Of course, I can’t answer those questions for you, but I can share with you my POV.
    For starters, there are several things that drive me. As many of you know by now, I have a somewhat competitive nature, but it is not solely the thrill of winning that actually fuels me. Now don’t get me wrong, I do like winning occasionally. However, winning is only a potential result of competing and is clearly not guaranteed. So why compete? Why compete in business, and particularly why in this Enterprise Software business? Here’s why! I am fascinated by creative and building processes. It is about making or producing things, causing something to come into existence. With the right skill, imagination and determination, whether it’s art or invention, the result can deliver value and inspire. In both avocation and vocation I always gravitate towards the create/build processes.
    I believe one of the skills necessary for the create/build process is not just the aptitude but also, and especially, the desire and attitude that drives one to gain a deeper understanding. The more I learn about our customers, the more I seek to understand what makes them successful and what difficult issues cause them to struggle. I like to look for the complex, non-commodity process problems where streamlined design and modern technology can provide an easy and simple solution. It is especially gratifying to see our customers use our software to increase their own ability to deliver value to the market. What an incredible network effect! I know many of you share this customer obsession as well as the create/build addiction focused on simple and elegant design. This is what I believe is at the root of our common culture.
    Are JD Edwards customers, on the whole, different from other ERP solutions’ customers? I would argue that for the most part, yes, they are. They selected our software, and our software is different. Why? Because I believe that the create/build process will generally result in solutions that reflect who built them and their culture. And a culture of people focused on why they create/build will attract different customers than one that is based on what is built or how the solution is delivered. In the past I have referred to this idea as the character of the customer, and it transcends industry, size and run rate. Now some would argue that JD Edwards has some customers who are characters. But that is for a different post.
As I have told you before, the JD Edwards culture is unique, and its resulting economy is valuable and deserving of our best efforts. 

    Read the article

  • Do Great Work

    - by user12601034
    Have you ever attended an online conference and actually had a desire to attend all of it? Yesterday I attended the first day of the Great Work MBA program, sponsored by Box of Crayons and hosted by Michael Bungay Stanier. The topic of the day was “Grounding Yourself,” and the day featured five speakers on five different topics. I have to admit that I started the first session with kind of a “blech” feeling that I didn’t really want to participate, but for some reason I did. So I listened to the first session, and I was hooked. I ended up listening to all of the sessions for the day, and I had some great take-aways from the sessions. My highlights included:
    The opposite of bravery isn’t fear, it’s settling. In essence, you need to be brave in order to accomplish anything. If you’re settling, you’re not being brave, and your accomplishments will likely be lackluster. Bravery requires confidence and permission. You need to work at being brave by taking small wins, building them up, and then taking slightly larger risks. Additionally, you need to “claim your own crown.” Nobody in the business world is going to give you permission to be a guru in X – you need to give yourself permission to become a guru in X and then do it.
    Fall in love with obstacles. Everyone is going to face some form of failure. One way to deal with this is to fall in love with solving the puzzle of obstacles. You don’t have to hit it if you can go around it.
    Understanding purpose brings out the best in people and the best people. As a leader, drawing in people who are passionate and highly motivated about their work creates velocity for your organization. Being clear about purpose is the first step in doing this.
    You must own your own story. Everything about you creates a “unique you” that is distinct from everyone else. As you take ownership of this, it becomes part of your strength. It’s not a strength if you’re running away from it.
    Focus on what’s right. Be aware of your tendency to interpret a situation a certain way and differentiate between helpful and unhelpful interpretations. Three questions for how to think differently: 1) Why? 2) Who says so? 3) What would happen if? These three questions can help you build alternative perspectives and options that can increase resiliency.
    Even though this first day was focused on “Grounding Yourself,” I see plenty of application in the corporate environment for both individuals and leaders of teams. To apply these highlights to my work environment, I would do the following:
    Understand the purpose – of my company, of my team and of my role on the team. If I know the purpose, I know what I need to bring to the table to make me, my team and my company successful.
    Declare your goals… your BHAGs (big, hairy, audacious goals). Have the confidence to declare what you and/or your team is going to accomplish. Sure, you might have to re-state those goals down the line, but you can learn from that as well.
    Get creative about achieving your goals. Break down your obstacles by asking yourself what is going to stop you from achieving your goals and then, for each obstacle, ask those three questions: Why? Who says so? What would happen if?
    Focus on what’s right. I had a manager who asked us to write status reports every week. “Status” consisted of 1) What did I accomplish; 2) What will I accomplish next week; 3) How can my manager help me. The focus of our status reports was always “what’s right” (“what’s wrong” was always a conversation at the point in time it was needed).
    I’m normally a skeptic of online webcasts/conferences, and I normally expect to take away maybe one or two ideas. I’m really glad, however, that I took the time to listen to all of the sessions yesterday, and I hope that my take-aways inspire you to think about how you might do great work also.

    Read the article

  • New PeopleSoft HCM 9.1 On Demand Standard Edition provides a complete set of IT services at a low, predictable monthly cost

    - by Robbin Velayedam
    At Oracle OpenWorld last month, Oracle announced that we are extending our On Demand offerings with the general availability of PeopleSoft On Demand Standard Edition. Standard Edition represents Oracle’s commitment to providing customers a choice of solutions, technology, and deployment options commensurate with their business needs and future growth. The Standard Edition offering complements the traditional On Demand offerings (Enterprise and Professional Editions) by focusing on a low, predictable monthly cost model that scales with the size of your business. As part of Oracle's open cloud strategy, customers can freely move PeopleSoft licensed applications between on premise and the various on demand options as business needs arise.
    In today’s business climate, aggressive and creative business objectives demand more of IT organizations. They are expected to provide technology-based solutions to streamline business processes, enable online collaboration and multi-tasking, facilitate data mining and storage, and enhance worker productivity. As IT budgets remain tight in a recovering economy, the challenge becomes how to meet these demands with limited time and resources. One way is to eliminate the variable costs of projects so that your team can focus on the high-priority functions and better predict funding and resource needs two to three years out. Variable costs and changing priorities can derail the best-laid project and capacity plans. The prime culprits of variable costs in any IT organization include disaster recovery, security breaches, technical support, and changes in business growth and priorities. Customers have an immediate need for solutions that are cheaper, predictable in cost, and flexible enough for long-term growth or capacity changes.
    The Standard Edition deployment option fulfills that need by allowing customers to take full advantage of the rich business functionality that is inherent to PeopleSoft HCM, while delegating all application management responsibility – such as future upgrades and product updates – to Oracle technology experts, at an affordable and predictable price. Standard Edition provides the advantages of the secure Oracle On Demand hosted environment, the complete set of PeopleSoft HCM configurable business processes, and timely management of regular updates and enhancements to the application functionality and underlying technology. Standard Edition has a convenient monthly fee that is scalable by number of employees, which helps align the customer’s overall cost of ownership with its size and anticipated growth and business needs.
    In addition to providing PeopleSoft HCM applications' world-class business functionality and Oracle On Demand's embassy-grade security, Oracle’s hosted solution distinguishes itself from competitors by offering customers the ability to transition between different deployment and service models at any point in the application ownership lifecycle. As our customers’ business and economic climates change, they are free to transition their applications back to on premise at any time. HCM On Demand Standard Edition is based on configurability options rather than customizations, requiring no additional code to develop or maintain. This keeps the cost of ownership low and time to production less than a month on average. Oracle On Demand offers the highest standard of security and performance by leveraging a state-of-the-art data center with dedicated databases, servers, and secured URLs, all within a private cloud. Customers will not share databases, environments, platforms, or access portals with other customers, because we value how mission-critical your data are to your business. Oracle On Demand also provides a full breadth of disaster recovery services to provide customers the peace of mind that their data are secure and that backup operations are in place to keep their businesses up and running in the case of an emergency.
    Currently, over 50 PeopleSoft customers entrust us with the management of their applications through Oracle On Demand. If you are a customer interested in learning more about the PeopleSoft HCM 9.1 Standard Edition and how it can help your organization minimize your variable IT costs and free up your resources to work on other business initiatives, contact Oracle or your Account Services Representative today.

    Read the article

  • BEHIND THE SCENES AT A FLASH-MOB...

    - by OliviaOC
    Today, we interviewed Aarti, who recently organised a flash-mob for Oracle Campus, which you can see on our Facebook page.
    Hi Aarti, perhaps you could give us a quick introduction of yourself, and what you do at Oracle? I’ve been with Campus Recruitment for just over a year. I’ve been with Oracle for three years. I was keen to get into the campus role after having watched other colleagues working in campus, and when the opportunity arrived I jumped at it. The journey has been fantastic thus far. I’m responsible for the GBU hiring at Oracle.
    Why did you record the flash-mob video - what were your goals? Flash-mobs were one thing that took off really big in India after the first one in Mumbai. It’s the hot thing in the student community at the moment, and a better way to reach out and connect with students. I think that it is also a good way to demonstrate our openness and culture at Oracle – demonstrate that we are very flexible and that we have a cool culture. I knew the video could be shared on our social media pages and reach a wider student community.
    What was the preparation and rehearsal for the video like? When I decided to do the video, I had to decide who I would like to do the flash-mob. The new campus hires to Oracle would be ideal for this. We were two teams at two different locations, and each team took 2-3 songs and choreographed them themselves. Every day at 5pm, each team would meet up, and every other weekend the whole group met. Practice went on like this for about a month.
    How was the video received by participants and by students on the University campus? The event was well received. We did it during the lunch break at the University so that there was a large presence of students around while the flash mob took place. We set up about an hour beforehand to get everything ready. The break-bell sounded and the students came out; that’s when the flash-mob started. The students were pleasantly surprised that a company was doing this. They also recognised some of the participants involved as former graduates.
    Since the flash-mob and the video of it that you recorded, have you had much response due to it? We have, especially in the past two weeks. We went back to the college to make some hires. The flash-mob was still fresh in their minds and, as a result, they knew well who Oracle was.
    Would you like to repeat this kind of creative initiative again with the recruitment team? Yes, absolutely! I’m over the moon with the flash-mob. My mind is working overtime now with ideas about the next things to do!

    Read the article

  • Extreme Makeover, Phone Edition: Comcast's xfinity

    Mobile Makeover: For many companies the first foray into Windows Phone 7 (WP7) may be in porting their existing mobile apps. It is tempting to simply transfer existing functionality, avoiding the additional design costs. Readdressing business needs and taking advantage of the WP7 platform can reduce cost and is essential to a successful re-launch. To better understand the advantage of new development, let's examine a conceptual upgrade of Comcast's existing mobile app.
    Before: Comcast has a great mobile app that provides several key features: the ability to browse the lineup using a guide, a client for Comcast email accounts, an On Demand gallery, and much more. We will leverage these and build on them using some of the incredible WP7 features.
    After: With the proliferation of DVRs (Digital Video Recorders) and a variety of media devices (TV, PC, Mobile), content providers are challenged to find creative ways to build their brands. Every client touch point must provide both value-added services and opportunities for marketing and up-sell; WP7 makes it easy to focus on those opportunities. The new app is an excellent vehicle for presenting Comcast's newly rebranded TV, Voice, and Internet services. These services now fly under the banner of xfinity and have been expanded to provide the best experience for Comcast customers. The Windows Phone 7 app will increase the surface area of this service revolution.
    The home menu is simplified and highlights Comcast's Triple Play: Voice, TV, and Internet. The inbox has been replaced with a messages view, and message management is handled by a WP7 hub. The hub presents emails, tweets, and IMs from Comcast and other viewers the user follows on Twitter. The popular view orders shows based on the user's viewing history and current cable package. The first show, Glee, is both popular and participating in a conceptual co-marketing effort, so it receives prime positioning. The second spot goes to a hit show on a premium channel, in this example HBO's The Pacific, encouraging viewers to upgrade for this premium content. The remaining spots are ordered based on viewing history and popularity. Tapping the play button moves the user to the theatre where they can watch previews or full episodes streaming from Fancast. Tapping an extra presents the user with show details as well as interactive content that may be included as part of co-marketing efforts.
    Co-Marketing with Dynamic Content: The success of Comcast's services is tied to the success of the networks and shows it purveys, making co-marketing efforts essential. In this concept FOX is co-marketing its popular show Glee. A customized panorama is updated with the latest gleeks' tweets, streaming HD episodes, and extras featuring photos and video of the cast. If WP7 apps can be dynamically extended with web-hosted .xap files, sandboxed partner experiences would enable interactive features such as the Gleek Peek, in which a viewer can select a character from a panorama to view the actor's profile. This dynamic inline experience has a tailored appeal to aspiring creatives and is technically possible with Windows Phone 7.
    Summary: The conceptual Comcast mobile app for Windows Phone 7 highlights just a few of the incredible experiences and business opportunities that can be unlocked with this latest mobile solution. It is critical that organizations recognize and take full advantage of these new capabilities. Simply porting existing mobile applications does not leverage these powerful tools; re-examining existing applications and upgrading them to Windows Phone 7 will prove essential to the continued growth and success of your brand.
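    Purely as an illustration (not part of the original article), here is a minimal sketch of how a co-marketing panorama section like the one described above might be assembled in code on WP7. It assumes the Panorama and PanoramaItem controls from Microsoft.Phone.Controls; the method name and the ContentPanel grid are hypothetical.

    // Hypothetical sketch: building a "glee" co-marketing panorama at runtime.
    // Assumes a Windows Phone 7 page whose root grid is named ContentPanel.
    using System.Windows.Controls;
    using Microsoft.Phone.Controls;

    private void BuildGleePanorama()
    {
        var glee = new Panorama { Title = "glee" };

        // "Gleek Peek": tap a character to view the actor's profile
        var peek = new PanoramaItem { Header = "gleek peek" };
        peek.Content = new TextBlock { Text = "Select a character to view the actor's profile." };
        glee.Items.Add(peek);

        // Extras: photos and video of the cast
        var extras = new PanoramaItem { Header = "extras" };
        extras.Content = new TextBlock { Text = "Photos and video of the cast." };
        glee.Items.Add(extras);

        ContentPanel.Children.Add(glee);
    }

    In a real app the item contents would be bound to a feed supplied by the partner rather than hard-coded text, which is what would make the panorama dynamically updatable.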

    Read the article

  • Import/rip/convert DVD to Adobe Premiere Pro for Mac

    - by alexyu2010
    For those who want to edit their videos, Adobe Premiere Pro is inevitably a good choice: it is a professional, real-time, timeline-based video editing application that supports many video editing cards and plug-ins for accelerated processing, additional file format support and video/audio effects. Although Adobe Premiere Pro is said to be for professionals, it is not so complicated that a hobbyist can't get comfortable with it in an hour or so.
    General file formats supported by Adobe Premiere Pro: Up to now, Adobe has released several versions of Premiere Pro as part of its Creative Suite, including Adobe Premiere 1.0, Adobe Premiere 2.0, Adobe Premiere Pro CS3, Adobe Premiere Pro CS4 and the newly published Adobe Premiere Pro CS5. Although the file formats they support vary, I did find some common file formats supported by all of them, such as AVI, MOV and MPG.
    Importing DVDs? Adobe Premiere Pro says "NO". It is obvious to all of us that Adobe Premiere Pro will never give DVD a hug, and it isn't rare to see that many people are really confused when they want to import their DVDs into Adobe Premiere Pro for editing. What to do? You may have noticed that there is only one way out: ripping your DVDs to formats that Adobe Premiere Pro can work with natively, and this is what a DVD to Adobe Premiere Pro converter can do.
    Importing DVD to Adobe Premiere Pro on Mac: DVD to Adobe Premiere Pro Converter for Mac is an application designed specifically for ripping/converting DVD movies, DVD VOB files or DVD clips to Adobe Premiere Pro-compatible AVI, MOV and MPG files; it combines a DVD ripping tool and a video converting tool in one versatile, powerful program for dealing with DVDs and videos. The Mac DVD to Adobe Premiere Pro converter can work with a wide variety of files including DVD, VOB, AVI, WMV, MPG, MOV, MP4, DV, FLV, MKV, ASF, SWF and HD video, for use with other editing tools like iMovie and FCP, playback in QuickTime or iTunes, transfer to portable devices like the iPod, iPhone, iPad, iRiver, BlackBerry, Gphone or other mobile phones, or upload to websites such as YouTube and MySpace. The DVD to Adobe Premiere Pro converter for Mac can also help you do some basic editing. You can trim or crop your DVD movie or DVD clip, apply special effects to make it more artistic, merge several DVD clips into a single one, or tweak the output parameters for video and audio separately to get a better quality rendering. Besides, a preview window is available so you can keep a good command of the process.

    Read the article

  • Get 5.1 surround sound from computer through a VCR config?

    - by Wedding Nails
    I'm posting to see if my idea of this setup is right and can be done. I currently have the following "equipment": a JVC VCR -quite old-, which has built in surround sound (aka it has several speaker outputs, which I believe is 5.1 and are connected to several speakers that are in every corner of the room), a computer with SPDIF optical output and a new flat screen TV (with built in HDMI). I want the computer to take advantage of the VCR's surround system (all the speakers in the room) in order to play mainly music and video always with all the speakers (5.1) and with the maximum sound quality. Currently, the computer plays sound only through the front speaker (I connect one output to the on board pc audio input) and the quality is really bad. As a side note, the computer video runs with S-video (old school), and the picture quality as you would imagine, is really bad with the new big LCD screen. My main goals are: to upgrade the picture with a new video card which would support HDMI (my tv has HDMI). to buy a SPDIF optical cable, connect one end to the VCR SPDIF input and the other end to the PC output This is theoretically what I've researched so far, and I came out with several questions: in this case, with the SPDIF cable connected, and all the configurations done in windows allowing the 5.1, will I get every content I play "converted" or played through all of my speakers? (I read this forum post). I already know that in order for this setup to play from all the speakers, the content/audio source has to be 5.1. but my question is, if there is a way to play from all of the speakers no matter what type of content I'm playing (that's why I said conversion there) I already know that HDMI cables carry digital sound. Is there a way I can only use said HDMI cord to the tv, and get sound through the VCR? (I'm not too sure about this, I would have to disable the TVs speakers and use the VCR surround as default, but I have no clue wether this can be done or not). Update: The ultimate question is, do I really have to rely on "sound virtualization" technology to get sound from all the speakers, no matter what content I play? (do I require a newer sound card, like a creative soundblaster with said technology?) Thanks!

    Read the article

  • How to Create Auto Playlists in Windows Media Player 12

    - by DigitalGeekery
    Are you getting tired of the same old playlists in Windows Media Player? Today we’ll show you how to create dynamic auto playlists based on criteria you choose in WMP 12 in Windows 7.
    Auto Playlists: In Library view, click on the Create playlist dropdown arrow and select Create auto playlist. On the New Auto Playlist window, type in a name for the playlist in the text box. Now we need to choose the criteria by which to filter your playlist. Select Click here to add criteria. For our example, we will create a playlist of songs that were added to the library in the last week from the Alternative genre. So, we will first select Date Added from the dropdown list. Many criteria will have additional options to configure. In the example below you will see that we have a few options to fine-tune. We will filter all the songs added to the library in the last 7 days. We will select Is After from the first dropdown list, then select Last 7 Days from the second dropdown list. You can add multiple criteria to further filter your playlist. If you can’t find the criteria you are looking for, select “More” at the bottom of the dropdown list. This will pull up a filter window with all the criteria. Select a filter and then click OK when finished. From the Genre dropdown, we will select Alternative. If you’d like to add Pictures, Videos, or TV Shows to your auto playlists you can do so by selecting them from the dropdown list under And also include. You will then be able to select criteria for your pictures, videos, or TV shows from the dropdown list. Finally, you can also add restrictions to your music such as the number of items, duration, or total size. We will limit the duration of our playlist to one hour by selecting Limit Total Duration To, typing in 1 hour, and clicking OK. Our library is automatically filtered and a playlist is created based on the criteria we selected. When additional songs are added to the Windows Media Player library, any new songs that fit the criteria will automatically be added to the New Songs playlist.
    You can also save a copy of an auto playlist as a regular playlist. Switch to Playlists view by clicking Playlists from either the top menu or the navigation bar. Select the Play tab and then click Clear list to remove any tracks from the list pane. Right-click on the playlist you want to save, select Add to, and then Play list. The songs from your auto playlist will appear as an Unsaved list on the list pane. Click Save list and type in a name for your playlist. Your auto playlist will continue to change as you add or remove items from your Media Player library that meet the criteria you established; the saved playlist we just created will stay as it currently is. Editing an auto playlist is easy: right-click on the playlist and select Edit. Now you are ready to enjoy your playlist.
    Conclusion: Auto playlists are a great way to keep your playlists fresh in Windows Media Player 12. Users can get creative and experiment with the wide variety of criteria to customize their listening experience. If you are new to playlists in Windows Media Player, you may want to check out our previous post on how to create custom playlists in Windows Media Player 12. Are you looking to get better sound from WMP 12? Take a look at how to improve playback using enhancements in Windows Media Player 12.

    Read the article

  • Azure – Part 6 – Blob Storage Service

    - by Shaun
    When migrating your application onto Azure, one of the biggest concerns is external files. In the traditional model we knew which machine and folder our application (website or web service) was located in, so we could use MapPath or other methods to read and write external files such as images, text files or XML files. But things change when we deploy to Azure. Azure is not a single server or machine; it’s a set of virtual machines running under the Azure OS, and, even worse, your application might be moved between these machines. So it’s not possible to rely on reading or writing external files on local disk in Azure. To resolve this issue, Windows Azure provides another storage service for us: Blob. Unlike the table service, the blob service is used to store text and binary data rather than structured data. It provides two types of blobs: Block Blobs and Page Blobs. Block Blobs are optimized for streaming. They are comprised of blocks, each of which is identified by a block ID, and each block can be a maximum of 4 MB in size. Page Blobs are optimized for random read/write operations and provide the ability to write to a range of bytes in a blob. They are a collection of pages, and the maximum size for a page blob is 1 TB.
    In the managed library, the Azure SDK allows us to communicate with blobs through the classes CloudBlobClient, CloudBlobContainer, CloudBlockBlob and CloudPageBlob. Similar to the table service managed library, CloudBlobClient lets us reach the blob service by passing our storage account information and is also responsible for creating the blob container if it does not exist. Then from the CloudBlobContainer we can save or load block blobs and page blobs through the CloudBlockBlob and CloudPageBlob classes.
    Let’s improve our example from the previous posts by adding a service method that allows the user to upload a logo image. On the server side I created a method named UploadLogo with two parameters: email and image. Then I created the storage account from the config file. I also added validation to ensure that the email passed in is valid.

    var storageAccount = CloudStorageAccount.FromConfigurationSetting("DataConnectionString");
    var accountContext = new DynamicDataContext<Account>(storageAccount);

    // validation
    var accountNumber = accountContext.Load()
        .Where(a => a.Email == email)
        .ToList()
        .Count;
    if (accountNumber <= 0)
    {
        throw new ApplicationException(string.Format("Cannot find the account with the email {0}.", email));
    }

    Then there are three steps for saving the image into the blob service. First, much like with the table service, I created the container with a unique name and create it if it does not exist.

    // create the blob container for account logos if it does not exist
    CloudBlobClient blobStorage = storageAccount.CreateCloudBlobClient();
    CloudBlobContainer container = blobStorage.GetContainerReference("account-logo");
    container.CreateIfNotExist();

    Then, since in this example I will just send the blob access URL back to the client, I need to open read permission on that container.

    // configure blob container for public access
    BlobContainerPermissions permissions = container.GetPermissions();
    permissions.PublicAccess = BlobContainerPublicAccessType.Container;
    container.SetPermissions(permissions);

    And at the end I combine the blob resource name from the input file name and a Guid, and then save it to the block blob by using the UploadByteArray method. Finally I return the URL of this blob back to the client side.

    // save the blob into the blob service
    string uniqueBlobName = string.Format("{0}_{1}.jpg", email, Guid.NewGuid().ToString());
    CloudBlockBlob blob = container.GetBlockBlobReference(uniqueBlobName);
    blob.UploadByteArray(image);

    return blob.Uri.ToString();
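    As a small aside that is not part of the original post, here is a minimal sketch of reading a logo back out of the blob service, assuming the same StorageClient 1.x managed library used above; the method name GetLogo is hypothetical.

    // Hypothetical helper: download a previously uploaded logo as a byte array.
    // blobUri is the absolute URL returned by UploadLogo above.
    public byte[] GetLogo(string blobUri)
    {
        var storageAccount = CloudStorageAccount.FromConfigurationSetting("DataConnectionString");
        CloudBlobClient blobStorage = storageAccount.CreateCloudBlobClient();

        // resolve the blob from its URI and download its content
        CloudBlob blob = blobStorage.GetBlobReference(blobUri);
        return blob.DownloadByteArray();
    }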
    Let’s update the client side application a bit and see the result. Here I just use my simple console application to let the user input the email and the file name of the image. If everything is OK it will show the URL of the blob on the server side so that we can view it through a web browser. Then we can see the logo I’ve just uploaded through that URL. You may notice that the blob URL is based on the container name and the blob’s unique name. The Azure SDK documentation has a page on the rules for naming them, but I think the simple rule is that they must be valid as a URL address. So you cannot name the container with a dot or slash, as that will break the ADO.NET Data Services routing rule. For example, if you name the blob container Account.Logo it will throw an exception that says 400 Bad Request.
    Summary: In this short entry I covered the simple usage of the blob service to save images onto Azure. Since an Azure application cannot rely on the local file system, we have to migrate our code for reading/writing files to the blob service before deploying to Azure. To reduce this effort, Microsoft provides a new approach named Drive, which allows us to read and write NTFS files just like we did before. It is built on top of the blob service but is more appropriate for file access. I will discuss it more in the next post.
    Hope this helps, Shaun. All documents and related graphics and code are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • 2010 Collaboration Summit Impressions

    - by Elena Zannoni
    It's a bit late, but there you have it anyway. April 14 to 16 I attended the Linux Foundation Collaboration Summit in SFO. I was running two tracks, one on tracing and one on tools. You can see the tracks and the slides here: http://events.linuxfoundation.org/events/collaboration-summit/slides. I was pretty busy both days: Thursday with a whole-day tracing track, Friday with a half-day toolchain track. The sessions were well attended, the rooms were full, with people spilling into the hallways. Some new things were presented, like Kernelshark by Steve Rostedt, a GUI (yes, believe it or not, a GUI) written in GTK. It is very nice, showing a timeline for traced kernel events, and you can zoom in and filter at will. It works on the latest kernels, and it requires some new things/fixes in GTK; I don't recall exactly what version of GTK though. Dominique Toupin from Ericsson presented something about user requirements for tracing – mostly about who's who in the embedded world, and Eclipse. Masami and Mathieu presented an update on their work; see their slides. The interesting thing to me was of course the new version of uprobes without the underlying utrace, presented by Jim Keniston. At the end of the session we had a discussion about the future of utrace. Roland wasn't there, but Tom Tromey (also from Red Hat) collected the feedback. Basically we are at a standstill now that utrace has been rejected yet again. There wasn't much advice that anybody could give, except, jokingly, we decided that the only way in is to make it a part of perf events. There needs to be another refactoring, but most of all, this "killer app" that would be enabled because of utrace hasn't materialized yet. We think that having a good debugging story on Linux is enough of a killer app, for instance allowing multiple tracers, and not relying on SIGCHLD etc. I think this wasn't completely clear to the kernel community. Trying to achieve debugging via a gdb stub inside the kernel, interfacing to utrace and controlled via the gdb remote protocol, also lost its appeal (thankfully, since the gdb remote protocol is archaic). Somebody would have to be creative in how to submit utrace. It doesn't have to be called utrace (it was really a random choice, for lack of a letter that was not already used in front of the word "trace"). So basically, I think the ideas behind utrace are sound, and the necessity of a new interface is acknowledged. But I believe the integration/submission process with the kernel folks has to restart from scratch, with a clean slate. We'll see. There are many conferences and meetings coming up in the near future where things can be discussed further. On the second day, Friday, we had the tools talks. It was interesting to observe the more "kernel"-oriented people's behavior towards the gcc etc. community. The first talk was by Mark Mitchell, about GCC and its new plugin architecture. After that, Paolo talked about the new C++1x standard, which will be finalized in 2011. Many features are already implemented in the libstdc++ library and gcc and usable today. We had a few minutes (really, the half-day track was quite short) where Bradley Kuhn from the Software Freedom Law Center explained the GPLv3 exception for gcc (due to the new gcc plugin architecture and the availability of the intermediate results from the compilation, which is a new thing). I will not try to explain, but basically you cannot take the result of the preprocessing and then use that in your own proprietary compiler.
    After, we had a talk by Ian Taylor about the new gold linker. One good thing in that area is that they are trying to make gold the new default linker (for instance Fedora will use gold as the distro linker). However, gold is very different from binutils' old linker. It doesn't use a linker script, for instance. The kernel has been linked with gold many times as an exercise (the groundwork was done by Kris Van Hees), but this needs to be constantly tested/monitored because the kernel linker script is very complex and uses esoteric features (Wenji is now monitoring that each kernel RC can be built with gold). It was positive that people are now aware of gold and the need for it to be ported to more architectures. It seems that the porting is very easy, with little arch-dependent code. Finally Tom Tromey presented about gdb and the Archer project. Archer is a development branch of gdb mostly done by Red Hat, where they are focusing on better C++ printing, C++ expression parsing, and plugins. The Archer work is merged regularly into the gdb mainline. In general it was a good conference. I did miss most of the first day, because that's when I flew in. But I caught a couple of talks. Nothing earth-shattering, except for Google giving each registered person a free Android phone. Yey.

    Read the article

  • SQL SERVER – Example of Performance Tuning for Advanced Users with DB Optimizer

    - by Pinal Dave
    Performance tuning is a subject that everyone wants to master. In the beginning everybody is at a novice level and spends lots of time learning how to master the art of performance tuning. However, as we progress further, tuning the system keeps getting more difficult. I understood early in my career that there should be no need for ego in the technology field. There are always better solutions and better ideas out there, and we should not resist them. Instead of resisting change and the new wave, I personally adopt it. Here is a similar example: as I progress toward the master level of performance tuning, I find that it is getting harder to come up with optimal solutions. In such scenarios I rely on various tools to teach me how I can do things better. Once I learn about the tools, I am often able to come up with better solutions when I face a similar situation the next time.
    A few days ago I received a query that the user wanted to tune further to get the maximum performance out of it. I have re-written a similar query with the help of the AdventureWorks sample database.
    SELECT *
    FROM HumanResources.Employee e
    INNER JOIN HumanResources.EmployeeDepartmentHistory edh
        ON e.BusinessEntityID = edh.BusinessEntityID
    INNER JOIN HumanResources.Shift s
        ON edh.ShiftID = s.ShiftID;
    The user's query, similar to the one above, was used in a very critical report, and they wanted to get the best out of it. When I looked at the query, here were my initial thoughts: use only the columns in the SELECT statement that the application actually needs, and look at the query pattern and data workload to find the optimal index for it. Before I could give further solutions, I was told by the user that they need all the columns from all the tables and that creating an index was not allowed in their system. They can only re-write queries or use hints to further tune this query.
    Now I was boxed in by constraints: I believe * is not a great idea, but if they want all the columns, we can't do much besides using *. Additionally, if I cannot create a further index, I must come up with some creative way to write this query. I personally do not like to use hints in my applications, but there are cases when hints work out magically and give optimal solutions. Finally, I decided to use Embarcadero's DB Optimizer. It is a fantastic tool and very helpful when it comes to performance tuning. I have previously explained how it works over here.
    First open DB Optimizer and open a Tuning Job from File >> New >> Tuning Job. Once you open the DB Optimizer Tuning Job, follow the various steps indicated in the following diagram. Essentially we will take our original script and paste it into Step 1: New SQL Text, and right after that we will enable Step 2 for generating various cases, Step 3 for detailed analysis and Step 4 for executing each generated case. Finally we will click on Analysis in Step 5, which will generate the detailed analysis report in the result pane.
    The detail pane generates various cases of T-SQL based on the original query. It applies the various available hints to the query, generates the corresponding execution plans, and displays them in the result. You can clearly see that the original query had a cost of 0.0841 and about 607 logical page reads, whereas the various options that follow it have different execution costs and logical reads. There are a few cases where we have higher logical reads, and a few cases where we have very low logical reads.
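    As an aside that is not from the original post, one quick way to double-check the logical-read figures DB Optimizer reports is to turn on SQL Server's I/O and time statistics before running the query under test; a minimal sketch against AdventureWorks follows (the numbers appear on the Messages tab).

    -- Show per-table logical reads and CPU/elapsed time for the query under test.
    SET STATISTICS IO ON;
    SET STATISTICS TIME ON;

    SELECT *
    FROM HumanResources.Employee e
    INNER JOIN HumanResources.EmployeeDepartmentHistory edh
        ON e.BusinessEntityID = edh.BusinessEntityID
    INNER JOIN HumanResources.Shift s
        ON edh.ShiftID = s.ShiftID;

    SET STATISTICS IO OFF;
    SET STATISTICS TIME OFF;

    Running each candidate rewrite under the same settings makes it easy to compare the logical reads reported here with the numbers the tool shows.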
    If we pay attention, the very next row after the original query has Merge_Join_Query in its description, with the lowest execution cost value of 0.044 and the lowest logical reads at 29. This row contains the query which is the most optimal re-write of the original query. Let us double-click it. Here is the query:
    SELECT *
    FROM HumanResources.Employee e
    INNER JOIN HumanResources.EmployeeDepartmentHistory edh
        ON e.BusinessEntityID = edh.BusinessEntityID
    INNER JOIN HumanResources.Shift s
        ON edh.ShiftID = s.ShiftID
    OPTION (MERGE JOIN)
    Notice that the above query has the additional hint of MERGE JOIN. With the help of this merge join query hint, the query is now performing much better than before. The entire process takes less than 60 seconds.
    Please note that the MERGE JOIN hint was optimal for this query, but it is not necessarily true that the same hint will be helpful for all queries. Additionally, if the workload or data pattern changes, the merge join query hint may no longer be the optimal join. In that case, we will have to redo the entire exercise once again. This is the reason I do not like to use hints in my queries, and I discourage all of my users from using them. However, if you look at this example, this is a great case where hints are optimizing the performance of the query. It is not humanly possible to test all the various query hints and index options with the query to figure out which is the most optimal solution. Sometimes we need to depend on efficiency tools like DB Optimizer to guide the way and select the best option from the suggestions provided. Let me know what you think of this article as well as your experience with DB Optimizer. Please leave a comment.
    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Joins, SQL Optimization, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Unique Business Value vs. Unique IT

    - by barry.perkins
    When the age of computing started, technology was new, exciting, full of potential and had a long way to grow. Vendor architectures were proprietary, limited in function at first, and growing in capability and complexity over time. There were few if any "standards", let alone "open standards", and the concepts of "open systems" and "open architectures" were far in the future. Companies employed intelligent, talented and creative people to implement the best possible solutions for their company. At first, those solutions were "unique" to each company. As time progressed, standards emerged, companies shared knowledge, business capability supplied by technology grew, and companies continued to expand their use of technology. Taking advantage of change required companies to struggle through periodic "revolutionary" change cycles: costly changes that were fraught with risk, resulted in solutions with an increasingly shorter half-life, and frequently required altering existing business processes and retraining employees and partner businesses. The pace of technological invention and implementation grew at an ever increasing rate, making the "revolutionary" approach based upon "proprietary" or "closed" architectures or technologies no longer viable. Concurrent with the advancement of technology, the rate of change in business increased, leading us to the incredibly fast-paced, highly charged, and competitive global economy that we have today, where the most successful companies are the ones that are good at implementing, leveraging and exploiting change.
    Fast forward to today, a world where dramatic changes in business and technology happen continually, a world where "evolutionary" change is crucial. Companies can no longer afford to build "unique IT", nor can they afford regular intervals of "revolutionary" change, with the associated costs and risks. Human ingenuity was once again up to the task, turning technology into a platform supporting business through evolutionary change, by employing "open": open standards, open systems, open architectures, and open solutions. Employing "open" enables companies to implement systems based upon technology, capability and standards that will evolve over time, providing a solid platform upon which a company can drive business needs, requirements, functions, and processes down into the technology, rather than exposing technology to the business. This allows companies to focus on providing "unique business value" rather than "unique IT".
    The big question: does moving from "older" technology that no longer meets the needs of today's business to new "open" technology require yet another "revolutionary" change? A "revolutionary" change with a short half-life, camouflaging reality with great marketing? The answer is "perhaps". With the endless options available to choose from, it is entirely possible to implement a solution that may work well today, but in 5 years' time will become yet another albatross for the company to bear. Some solutions may look good today, solving a budget challenge by reducing cost or solving a specific tactical challenge, but result in highly complex environments that may be difficult to manage and maintain and that limit the future potential of your business. Put differently, some solutions might push today's challenge into the future, resulting in a more complex and expensive solution. There is no such thing as a "one size fits all" IT solution for business. If all companies implemented business solutions based upon technology that required, or forced, the same business processes across all businesses in an industry, it would be extremely difficult to show competitive advantage through "unique business value". It would be equally difficult to "evolve" to meet or exceed business needs and keep up with today's rapid pace of change. How does one ensure that they do not jump from one trap directly into another? Or, to put it positively, there are solutions available today that can address these challenges and issues. How does one ensure that the buying decision of today will serve the business well for years into the future?
    Intelligent & informed decisions - "buying right": In a previous blog entry, we discussed the value of linking tactical to strategic. The key is driving the focus to what is best for your business, handling today's tactical issues while also aligning with a roadmap/strategy that is tightly aligned with your strategic business objectives. When considering the plethora of possible options that provide various approaches to solving today's complex business problems, it is extremely important to ensure that vendors supplying those options focus on what is best for your business, supplying sufficient information, providing adequate answers to questions, addressing challenges, issues, concerns and objections honestly and openly, and focusing on supplying solutions that are tailored for, and deliver the most business value possible for, your business. Here are a few questions to consider relative to the proposed options that should help ensure that today's solution doesn't become tomorrow's problem. Do the proposed solutions:
    • Solve the problem(s) you are trying to address?
    • Provide a solid foundation upon which to grow/enhance your business?
    • Provide tactical gains that align with and enable your strategic business goals/objectives?
    • Provide an infrastructure that can be leveraged with subsequent projects?
    • Solve problems for the business overall and the lines of business, or just IT?
    • Simplify your current environment?
    • Provide the basis for business efficiency, agility and clarity (governance, risk, compliance; real-time business visibility and trend analysis)?
    And does your IT staff have the knowledge/experience to successfully manage the proposed systems once they are deployed in production?
    Done well, you will be presented with options tailored to your business that enable you to drive the "unique business value" necessary to help your business stand out from others, creating a distinct competitive advantage, delivering what your customers need, when they need it, so you can attract new customers and new business and grow top line revenue, all at a cost that provides a strong Return on Investment/Return on Assets. The net result is growth with managed cost, providing significantly improved profit margin and shareholder value.

    Read the article

  • doubleTwist is an iTunes Alternative that Supports Several Devices

    - by Mysticgeek
    There are a lot of iTunes users out there, but unfortunately you can’t use it with all of your portable devices. Today we take a look at doubleTwist, which allows you to sync your media with a multitude of portable devices and easily share it as well. Note: You can run doubleTwist on Windows or Mac, and here we take a look at the Windows version.
    Install & Setup doubleTwist: Download and install doubleTwist using the defaults in the wizard. Installation takes several moments and you’ll see the progress while it finishes up. After installation is complete, sign up for an account if you don’t already have one. If you do have an account you can log in right away. Enter in your username, email address, and password, then click Sign Up. You’ll get a confirmation email and need to activate the account before you can sign in. Once you’re all signed up, launch doubleTwist and you’ll be ready to start using it.
    doubleTwist Music: The default music store is the Amazon MP3 store, which might appeal to those of you who are tired of the iTunes music store. A lot of times the music is cheaper and available at higher bit rates. You can start searching for music in the Amazon Music Store and previewing songs. To purchase anything though, you will need to sign into your Amazon account. Under Playlists it allows you to import your playlists from iTunes and Windows Media Player, which is a handy feature if you don’t want to set them up again. Of course you can play your songs through the music player on your desktop.
    Devices: One of the coolest things about doubleTwist is that it supports a lot of different portable media devices, including iPod, BlackBerry, Windows Mobile, Android, PSP, smartphones, and much more. Unfortunately for Zune users, there isn’t any support for the Zune or Zune HD yet. Here we have a Creative Zen attached and can sync songs, pictures, and podcasts. An HTC-S620 smartphone running Windows Mobile works as well. Even a simple USB drive will be recognized and you can transfer your media to it.
    Podcasts: Finding your favorite audio and video podcasts is easy with the search feature. You can easily manage and subscribe to podcasts in the subscriptions section. You can watch video podcasts directly in doubleTwist.
    Sharing Media: You can also share digital media with your friends or add it to Flickr and YouTube. You can send any pictures, videos, or music in your library to other people by dragging it over. You can email users individually, or access contacts from your Gmail and Yahoo accounts. There is a limit to how much of a video podcast you can send: only the first 10 minutes. The person you send it to will get a link in their email that points to your My Feed page on the doubleTwist site. There they can access the media you sent; in this example it’s a video podcast, but you can share any media.
    Other Features: Under My Profile you can change your avatar and personal information. In Preferences you can choose where media is stored, set startup actions and podcast subscriptions, and manage device syncing.
    Conclusion: It’s still in the beta stage so expect some bugs, but overall doubleTwist is a solid media player that is easy to use with a clean interface. It’s simple and doesn’t try to do too much, so it is fairly easy on system resources. The main annoyance is that it tries to catalog all of your media out of the box, which may be alright for some users with smaller media collections, but very irritating to advanced users with large collections. Also there is currently no support for the Zune, but according to their forums, it’s on the way. At the time of this writing it’s in public beta and can be downloaded for XP, Vista, Windows 7 (32 & 64 bit), and Mac OS X. If you’re looking for an iTunes alternative that works with several different portable devices, you might want to give doubleTwist a try. Download doubleTwist Public Beta. See if your media device is supported by doubleTwist.

    Read the article

  • A Quick Primer on SharePoint Customization

    - by PeterBrunone
    This one goes out to all the people who have been asked to change the way a SharePoint site looks. Management wants to know how long it will take, and you can whip that out by tomorrow, right? If you don’t have time to prepare a treatise on what’s involved, or if you just want to lend some extra weight to your case by quoting a blogger who was an MVP for seven years, then dive right in; this post is for you. There are three main components of SharePoint visual customization:
    1) Theme – A theme encompasses all the standardized text formatting and coloring (borders, fonts, etc), including the background images of various sections. All told, there could be around 50 images involved, and a few hundred CSS (style) classes. Installing a theme once it’s been created is no great feat. Given the number of pieces, of course, creating a new theme could take anywhere from a day to a week… once decisions have been made about the desired appearance.
    2) Master Page – A master page provides the framework for page layout. This includes all the top and side menus, where content shows up, et cetera. Master pages have been around for a long time in ASP.NET (Microsoft’s web development platform), and they do require some .NET programming knowledge. Beyond that, in SharePoint, there are a few dozen controls which the system expects to find on a given page. They’re not all used at once, but if they’re not there when they’re needed, chaos ensues. Estimating a custom master page is difficult, as it depends on the level of customization. I’ve been on projects where I was brought in simply to fix some problems and add a few finishing touches, and it took 2-3 weeks. Master page customization requires a large amount of testing time to make sure that the HTML, JavaScript, CSS, and control placement all work well together.
    3) Individual page layout – Each page (ideally) uses a master page for its template, but within the content areas defined by the master page, web parts can be added, removed, and configured from within the browser. The wireframe that Brent provided could most likely be completed simply by manipulating the content on the home page in this fashion, and we had allowed about a day of effort for the task. If needed, further functionality can be provided by an experienced ASP.NET developer; custom forms are a common example. This of course is a bit more in-depth than simple content manipulation and could take several days per page (or more; there’s really no way to quantify this without a set of requirements).
    That’s basically it. To recap: fonts and coloring are done with themes, and can take anywhere from a day to a week to create (not counting creative time); required technical skills include HTML, CSS, and image manipulation. Templated layout is done with master pages, and generally requires a developer familiar with both ASP.NET and SharePoint in particular; it can have far-reaching consequences depending on the complexity of the changes, and could add weeks or months to a project.
Page layout can be as simple as content manipulation in the web browser, taking a few hours per page, or it can involve more detail, like custom forms, and can require programming expertise and significantly more development time.
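    For a sense of where the master page piece actually gets wired up, here is a minimal sketch using the SharePoint server object model. It assumes a custom .master file has already been uploaded to the master page gallery; the site URL and file name are placeholders, not anything from the project described above. The object model call is tiny — the real effort estimated above is in building and testing the .master file itself.
    using System;
    using Microsoft.SharePoint;

    // Sketch only: point an existing site at a custom master page that has already
    // been uploaded to the master page gallery. The URL and file name are hypothetical.
    class ApplyCustomMasterPage
    {
        static void Main()
        {
            using (SPSite site = new SPSite("http://intranet/sites/teamsite"))
            using (SPWeb web = site.OpenWeb())
            {
                // MasterUrl is the default (system) master page; CustomMasterUrl is used by
                // pages that reference the ~masterurl/custom.master token (e.g. publishing pages).
                web.MasterUrl = "/_catalogs/masterpage/Contoso.master";
                web.CustomMasterUrl = "/_catalogs/masterpage/Contoso.master";
                web.Update();
            }
        }
    }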

    Read the article

  • Oracle Fusion Middleware Innovation Award Winners 2012: ADF & Fusion Development

    - by Dana Singleterry
    Oracle Fusion Middleware Innovation Awards honor customers for their cutting-edge solutions using Oracle Fusion Middleware. Winners are selected based on the uniqueness of their business case, business benefits, level of impact relative to the size of the organization, complexity and magnitude of implementation, and the originality of architecture. The awards were presented during Oracle OpenWorld 2012, and the following winners are in the category of ADF & Fusion Development. Micros – an OPN Platinum partner – has been working closely with Oracle product management teams in applying industry best practices in the development of their solutions. Their current application suite for the hospitality industry was built on Oracle Forms and the Oracle database running on MS Windows. The next generation of this suite is being developed and released in modules that are now based on Oracle FMW (including ADF) 11g technologies and Oracle Database 11g, all running on Oracle Linux. The primary driver was modernization, and hence Oracle ADF was selected to provide a rich UI for business processes that could be served up through traditional methods or through mobile devices globally. SOA Suite & ADF allowed for loosely-coupled services that could evolve with the needs of the business. Micros's application innovations include the use of business application portlets that have been published from ADF Faces Task Flows generated using WebCenter portlet libraries & Oracle Metadata Services (MDS), with multi-layered customizations using Oracle WebCenter Composer. PCS (Marfin Egnatia Bank of Greece) – PCS Wealth Management is a WM software solution which captures and automates the WM business processes, allowing service providers to allocate enough time and effort to customer service and investment strategies, under Advisory or Execution-Only services. The product is built upon the latest web technologies and ensures best practices covering all functional expectations, meeting local regulatory requirements and discovering successful opportunities for the WM customers' portfolios. The new unified Wealth Management system offers an unparalleled user interface, taking full advantage of the user-friendly ADF Faces components, all serving Private Banking purposes. The application offers a true Account Officer Cockpit with shallow navigation, one-click access to informed decisions, and perfect customer service. ADF Grids and Pivots, the Data Visualization components, as well as the Calendar and Map components are cleverly used to help the user eliminate the usage of Excel, Outlook and other systems.
PCS's application is unique in the way it leverages the ADF Faces data visualization components to create a truly attractive and insightful dashboard for their application. PCS Wealth Management Demo Qualcomm – Qualcomm, a $17B per year company, designs and sells semiconductor products for wireless telecommunications, mobile and computing markets. In addition, Qualcomm companies provide various hardware and software products to facilitate the design, development and deployment of phones and the applications that run on them. Qualcomm's challenge has been not only to develop and deploy new business system functions to keep pace with customer demand, but also to provide a customer collaboration capability that is sufficiently robust, easy to use, and flexible to meet emerging and future needs. Qualcomm has taken successful steps in building and deploying the customer engagement platform, leveraging various Oracle technologies including Fusion Middleware (ADF, SOA, OBIEE) and their proven ERP foundation of EBS and 11g databases. The new platform delivers a more unified and "seamless" business solution with a consistent, modern "look and feel", all based on standard business processes which facilitate efficient collaboration between Qualcomm and its customers. The look and feel leverages ADF in innovative ways and includes hover-over navigation, custom pagination components, and skinning. Qualcomm has exposed a services layer that provides significant functionality including order-to-ship, quote-to-order, customer on-boarding and contract validation. Qualcomm's creative designs leverage Oracle's SOA Suite to integrate with Oracle EBS and disparate applications, and provide a rich user interface through the use of Oracle ADF Faces Rich Client Components, delivering a self-service solution to their customers.

    Read the article

  • GitHub Integration in Windows Azure Web Site

    - by Shaun
    Microsoft has just announced an update for Windows Azure Web Site (a.k.a. WAWS). Four major features were added to WAWS: free scaling mode, GitHub integration, custom domains and multiple branches. Since I'm working in Node.js and would like to have my code in GitHub and deployed automatically to my Windows Azure Web Site once I sync my code, this feature is great news to me.   It's very simple to establish the GitHub integration in WAWS. First we need a clean WAWS. In its dashboard page click "Set up Git publishing". Currently WAWS doesn't support changing the publish setting, so if you have an existing WAWS which publishes from TFS or a local Git repository, you have to create a new WAWS and set up Git publishing. Then in the deployment page we can see that WAWS now supports three Git publishing modes:
    - Push my local files to Windows Azure: In this mode we create a new Git repository on the local machine and commit and publish our code to Windows Azure through the Git command line or a GUI.
    - Deploy from my GitHub project: In this mode we have a Git repository created on GitHub. Once we publish our code to GitHub, Windows Azure will download the code and trigger a new deployment.
    - Deploy from my CodePlex project: Similar to the previous one, but our code lives in a CodePlex repository.
    Now let's go back to GitHub and create a new publish repository. Currently the WAWS GitHub integration only supports public repositories; private repository support will be available in several weeks. We can manage our repositories on the GitHub website, but as a Windows geek I prefer the GUI tool. So I opened GitHub for Windows, logged in with my GitHub account, selected the "github" category and clicked the "add" button to create a new repository on GitHub. You can download GitHub for Windows here. I specified the repository name, description and local repository, and did not check "Keep this code private". After a few seconds it created a new repository on GitHub and associated it with that folder on my local machine. We can find this new repository on the GitHub website. And in GitHub for Windows we can also find the local repository by selecting the "local" category.   Next, we need to associate this repository with our WAWS. Back in the Windows Azure developer portal, open "Deploy from my GitHub project" in the deployment page and click the "Authorize Windows Azure" link. It brings up a new window on GitHub which asks me to allow the Windows Azure application to access my repositories. After we click "Allow", Windows Azure retrieves all my public GitHub repositories and lets me select which one I want to integrate with this WAWS. I selected the one I had just created in GitHub for Windows. So that's all; we have completed the GitHub integration configuration. Now let's have a try. In GitHub for Windows, right click on this local repository and click "open in explorer". Then I added a simple HTML file.
    <html>
    <head>
    </head>
    <body>
    <h1>
    I came from GitHub, WOW!
    </h1>
    </body>
    </html>
    Save it, go back to GitHub for Windows, commit this change and publish. This uploads our changes to GitHub, and Windows Azure will detect the update and trigger a new deployment. If we go back to the Azure developer portal we can find the new deployment, and our commit message is shown as the deployment description as well. And here is the page deployed to WAWS.   Hope this helps, Shaun All documents and related graphics, codes are provided "AS IS" without warranty of any kind.
Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Big Data – Evolution of Big Data – Day 3 of 21

    - by Pinal Dave
    In yesterday's blog post we answered what Big Data is. Today we will understand why and how the evolution of Big Data happened. Though the answer is very simple, I would like to tell it in the form of a history lesson.
    Data in Flat Files
    In the early days data was stored in flat files, and there was no structure to a flat file.  If any data had to be retrieved from a flat file, it was a project by itself. There was no way of retrieving the data efficiently, and data integrity was just a term discussed without any modeling or structure around it. A database residing in flat files had more issues than we would like to discuss in today's world. It was more like a nightmare when there was any data processing involved in the application. Though applications developed at that time were not that advanced, the need for data was always there, and there was always a need for proper data management.
    Edgar F. Codd and the 12 Rules
    Edgar Frank Codd was a British computer scientist who, while working for IBM, invented the relational model for database management, the theoretical basis for relational databases. He presented 12 rules for the relational database, and suddenly the chaotic world of databases began to see discipline in those rules. The relational database was a promised land for all the unstructured-database users. The relational database introduced relationships between data and also improved the performance of data retrieval. The database world immediately saw a major transformation, and nearly every vendor and database user started to adopt relational database models.
    Relational Database Management Systems
    After Edgar F. Codd proposed the 12 rules for the RDBMS, many different vendors started to build applications and tools to support relationships between data. This was indeed a learning curve for many developers who had never worked with database modeling before. However, as time passed, pretty much everybody accepted the relational view of data and started to evolve products which perform at their best within the boundaries of the RDBMS concepts. This was the best era for databases, and it gave the world extreme experts as well as some of the best products. The Entity-Relationship model also evolved at the same time. In software engineering, an Entity-Relationship model (ER model) is a data model for describing a database in an abstract way.
    Enormous Data Growth
    Well, everything was going fine with the RDBMS in the database world. As there were no major challenges, the adoption of RDBMS applications and tools was pretty much universal. There was a race at times to make the developer's life much easier with RDBMS management tools. Due to the extreme popularity and ease of use, pretty much all data was stored in RDBMS systems. New-age applications were built and social media took the world by storm. Every organization was feeling pressure to provide the best experience for their users based on the data they had. While this was all going on, data was growing in pretty much every organization and application.
    Data Warehousing
    The enormous data growth now presented a big challenge for organizations who wanted to build intelligent systems based on the data and provide a near real-time, superior user experience to their customers. Various organizations immediately started building data warehousing solutions where the data was stored and processed.
Business intelligence became an everyday need. Data was received from the transaction systems and processed overnight to build intelligent reports from it. Though this was a great solution, it had its own set of challenges. The relational database model and data warehousing concepts were all built with traditional relational database modeling in mind, and they still had many challenges when unstructured data was present.
Interesting Challenge
Every organization had the expertise to manage structured data, but the world had already shifted to unstructured data. There was intelligence in videos, photos, SMS, text, social media messages and various other data sources. All of these now needed to be brought to a single platform to build a uniform system which does what businesses need. The way we do business has also changed. There was a time when users only got the features that technology supported; now users ask for a feature and the technology is built to support it. Real-time intelligence from a fast-paced data flow has become a necessity. A large amount (Volume) of diverse (Variety), high-speed (Velocity) data: these are the properties of this data. The traditional database system has limits in resolving the challenges this new kind of data presents. Hence the need for Big Data science. We need innovation in how we handle and manage data. We need creative ways to capture data and present it to users. Big Data is a reality!
Tomorrow
In tomorrow's blog post we will discuss the basics of Big Data architecture.
Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • Html.RenderAction Failed when Validation Failed

    - by Shaun
    The RenderAction method was introduced in the MVC Futures assembly when ASP.NET MVC 1.0 was released, and was then officially announced along with ASP.NET MVC 2.0. Similar to RenderPartial, RenderAction can display HTML markup defined in a partial view within any parent view. But RenderAction gives us the ability to populate the data from an action which may be different from the action populating the main view. For example, in Home/Index.aspx we can invoke Html.RenderPartial("MyPartialView"), but the data of MyPartialView must be populated by the Index action of the Home controller. If we need MyPartialView to be shown in Product/Create.aspx, we have to copy (or invoke) the relevant code from the Index action in the Home controller into the Create action in the Product controller, which is painful. But if we use Html.RenderAction we can tell ASP.NET MVC which action/controller the data should be populated from. That way, in the Home/Index.aspx and Product/Create.aspx views we just need to call Html.RenderAction("CreateMyPartialView", "MyPartialView"), and it will invoke the CreateMyPartialView action in the MyPartialView controller regardless of which main view it is rendered from. But in my current project we found a bug when I implemented a RenderAction call in the master page, showing something that needs to connect to the backend data center, and the validation logic failed on some pages. I created a sample application below.   Demo application I created an ASP.NET MVC 2 application, and here I need to display the current date and time on the master page. I created an action in the Home controller named TimeSlot and stored the current date in ViewData. This method was marked as HttpGet as it just retrieves some data instead of changing anything.
    [HttpGet]
    public ActionResult TimeSlot()
    {
        ViewData["timeslot"] = DateTime.Now;
        return View("TimeSlot");
    }
    Next, I created a partial view under the Shared folder to display the date and time string.
    <%@ Control Language="C#" Inherits="System.Web.Mvc.ViewUserControl<dynamic>" %>
    <span>Now: <%: ViewData["timeslot"].ToString() %></span>
    Then in the master page I used Html.RenderAction to display it in front of the logon link.
    <div id="logindisplay">
        <% Html.RenderAction("TimeSlot", "Home"); %>
        <% Html.RenderPartial("LogOnUserControl"); %>
    </div>
    It's fairly simple and works well when I navigate to any page. But when I moved to the logon page and clicked the LogOn button without entering anything for username and password, the validation failed and my website crashed with the beautiful yellow page. (I really like its color style and fonts…)   How ASP.NET MVC executes Html.RenderAction In this example all other pages rendered successfully, which means ASP.NET MVC found the TimeSlot action under the Home controller in every situation except this one. The only difference is that when I clicked the LogOn button the browser sent an HttpPost request to the server. Is that the reason for this bug? I created another action in the Home controller with the same action name, but for HttpPost.
    [HttpPost]
    [ActionName("TimeSlot")]
    public ActionResult TimeSlot(object dummy)
    {
        return TimeSlot();
    }
    Or, I can use the AcceptVerbsAttribute on the TimeSlot action to let it allow both HttpGet and HttpPost.
    [AcceptVerbs("GET", "POST")]
    public ActionResult TimeSlot()
    {
        ViewData["timeslot"] = DateTime.Now;
        return View("TimeSlot");
    }
    Then I repeated what I did before, and this time it worked well.
Why do we need the action for HttpPost here when it's just retrieving data? It is because of how ASP.NET MVC executes the RenderAction method. In the source code of ASP.NET MVC we can see that when performing RenderAction, ASP.NET MVC creates a RequestContext instance from the current RequestContext and creates a ChildActionMvcHandler instance, which inherits from the MvcHandler class. Then ASP.NET MVC processes the handler through the HttpContext.Server.Execute method. That means it performs the action as a stand-alone request and flushes the result into the TextWriter that is being used to render the current page. Since the request was an HttpPost when I clicked LogOn, when ASP.NET MVC processed the ChildActionMvcHandler it looked for an action that allows the current request method, which is HttpPost. Our TimeSlot method, marked HttpGet, would therefore not be matched.   Summary In this post I introduced a bug in my current project regarding the new Html.RenderAction method provided with ASP.NET MVC 2 when processing an HttpPost request. In the ASP.NET MVC world the underlying HTTP information is more important than in the ASP.NET WebForms world. We need to pay more attention to which kind of request is currently being made and how ASP.NET MVC processes it.   Hope this helps, Shaun   All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.
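    One footnote to the sample above: a quick way to see this verb inheritance for yourself is to surface the HTTP method inside the child action. The extra ViewData entry below is my own addition for illustration, not part of Shaun's original sample.
    [AcceptVerbs("GET", "POST")]
    public ActionResult TimeSlot()
    {
        ViewData["timeslot"] = DateTime.Now;
        // Request.HttpMethod reflects the verb of the outer request (POST when the LogOn
        // form was submitted), which is why the child action must accept POST as well.
        ViewData["timeslotVerb"] = Request.HttpMethod;
        return View("TimeSlot");
    }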

    Read the article

  • Advice on learning programming languages and math.

    - by Joris Ooms
    I feel like I'm getting stuck lately when it comes to learning about programming-related things; I thought I'd ask a question here and write it all down in the hope of getting some pointers/advice from people. Perhaps writing it down helps me put things in perspective for myself as well. I study Interactive Multimedia Design. This course is based on two things: graphic design on one hand, and web development on the other hand. I have quite a decent knowledge of web-related languages (the usual HTML/JS/PHP) and I'll be getting a course on ASP.NET next year. In my free time, I have learnt how to work with CodeIgniter, as well as done some diving into Ruby (and Rails) and basic iOS programming. In my first year of college I also did a class on Java (19/20 on the end result). This grade doesn't really mean anything though; I have the basics of OOP down but Java-wise, we learnt next to nothing. Considering the time I have been programming in, for example, PHP.. I can't say I'm bad at it. I'm definitely not good or great at it, but I'm decent. My teachers tell me I have the programming thing down. They just tell me I should keep on learning. So that's what I do, and I try to take in as much as possible; however, sometimes I'm unsure where to start and I have this tendency to always doubt myself. Now, for the 'question'. I want to get into iOS programming. I know iOS programming boils down to programming in Cocoa Touch and Objective-C. I also know Obj-C is a superset of C. I did a class on C a couple of years ago, but I failed miserably. I got stuck at pointers and never really understood them.. Until like a month ago. I suddenly 'got' it. I have been working through a book on Objective-C for a week or so now, and I understand the basics (I'm at like.. chapter 6 or so). However, I keep running into the same problems I had when I did the C class: I suck at math. No, really. I come from a Latin-Modern Languages background in high school and I had nearly no math classes back then. I wanted to study Computer Science, but I failed there because of the miserable state of my mathematics knowledge. I can't explain why I'm suddenly talking about math here though, because it isn't directly related to programming.. yet it is. For example, the examples in the book I'm reading now are about programming a fraction calculator. All good, I can do the programming when I get the formulas down.. but it takes me a full day or more to actually get to that point. I also find it hard to come up with ideas for myself. I made one small iOS app the other day and it's just a button / label kind of thing. When I press the button, it generates a random number. That's really all I could come up with. Can you 'learn' that? It probably comes down to creativity, but evidently, I'm not too great at being creative. Are there any sites or resources out there that provide something like a basic list of things you can program when you're just starting out? Maybe I'm focusing on too many things at once. I want to keep my HTML/CSS at a decent level, while learning PHP and CodeIgniter, while diving into Ruby on Rails and learning Objective-C and the iOS SDK at the same time. I just want to be good at something, I guess. The problem is that I can't seem to be happy with my PHP stuff. I want more, something 'harder'; that's why I decided to pick up the iOS thing. Like I said, I have the basics of a lot of different languages down. I can program something simple in Java, in C, in Objective-C as of this week.. but it ends there.
Mostly because I can't come up with ideas for more complex applications, and also because I just doubt myself: 'Oh, that's too complex, I can never do that'. And then it ends there. To conclude my rant, let me basically rephrase my questions into a 'tl;dr' part. A. I want to get into iOS programming and I have basic knowledge of C/Objective-C. However, I struggle to come up with ideas of my own and implement them and I also suck at math which is something that isn't directly related to, yet often needed while programming. What can I do? B. I have an interest in a lot of different programming languages and I can't stop reading/learning. However, I don't feel like I'm good in anything. Should I perhaps focus on just one language for a year or longer, or keep taking it all in at the same time and hope I'll finally get them all down? C. Are there any resources out there that provide basic ideas of things I can program? I'm thinking about 'simple' command-line applications here to help me while studying C/Obj-C away from the whole iPhone SDK. Like I said, the examples in my book are mainly math-based (fraction calculator) and it's kinda hard. :( Thanks a lot for reading my post. I didn't plan it to be this long but oh well. Thanks in advance for any answers.

    Read the article

  • Design a T-shirt for .NET Reflector Pro

    - by Laila
    Win a .NET Reflector Pro license, a box of Red Gate goodies, and a t-shirt printed with your design! Red Gate likes t-shirts. Each of our teams has one. In fact, each individual person has one, numbered according to when they joined the company: Red Gate's 1st, 2nd, and so on right up to Red Gate's 170th, with the slogan "More than just a number". Those t-shirts are important, chiefly because they remind the people wearing them that they are important. But that isn't enough. What really makes us great are the people who choose to use our tools. So we'd like to extend our tradition of t-shirts to include you and put the design of our next shirt entirely in your hands. We'd like you to come up with a witty slogan or create an inventive or simply beautiful t-shirt design for .NET Reflector Pro, our add-in for Visual Studio, which allows you to step into decompiled assemblies whilst debugging in Visual Studio. When you're done, post your masterpiece to Twitter with the hash tag #reflectortees, and @redgate will take a look! We'll pick the best design, and the winner will get a licensed copy of .NET Reflector Pro and a box of Red Gate goodies - not to mention a copy of their t-shirt. The winning design will go into production and be worn and given out at tradeshows, conferences, and user group events across the world, proudly bearing the name of their designer. We'll also pick three runners-up who will receive licenses for .NET Reflector Pro. Red Gate goodie box Interested? If you're up for the challenge, then we've got some resources to get you started. Inside the .zip file you'll find high-quality versions of the following: T-shirt templates: don't forget to design the front and the back! Different versions of the .NET Reflector Pro logo and Red Gate logo. Colour sheets to give you an easy reference to the Red Gate colours, including hex and RGB values. You can create and send us as many designs as you like, and each of them will be considered for the prize. To submit your designs, simply tweet including the competition hash tag, #reflectortees, and a link to somewhere we can see your design: either an image hosting site such as Twitpic, Flickr or Picasa, or a personal blog. You will need to create a Twitter account (which is free), if you don't already have one. You only have three limits: The background colour of the t-shirt should be one of our brand colours (red, light/dark grey or black), though you're welcome to use other colours in the rest of the design. You need to make use of either the .NET Reflector Pro logo OR the Red Gate logo (please keep them as they are) If you include any text or slogan, stick with just one or two colors for it. Apart from that, go wild. Go and do whatever it is you do when you get creative: whether you walk barefoot on the grass with a pencil and paper, sit cross-legged on a pile of cushions with a laptop, or simply close your eyes and float through a mist of ideas, now is your chance. Make sure you enjoy it. We're looking forward to seeing your creations. Terms and conditions 1. The closing date for entries is June 11th, 2010 (4 p.m. UK time). Red Gate Software Ltd reserves the right to extend the competition deadline at its discretion. If there is a revision, the revised date will be published on this blog and the date for announcing the results will be postponed accordingly. 2. The winning designer will be notified on June 14th, 2010 through Twitter. 
The winner must claim his/her prize by sending us a high-resolution image of their design via email (i.e. Illustrator EPS files or appropriate format, ideally at 300dpi). If the winner does not come forward within 3 days of the announcement, they will forfeit their prize and another winner will be selected from the runners-up. The names of the winner and runners-up will be posted on this blog by June 18th.  3. Entry is completed on the designer posting a link to their entry in a tweet with the correct hash tag, #reflectortees. 4. Red Gate Software needs to hold the rights to using the winning design in order to put the t-shirt into production. We will make sure that this is fine with the winner before we do so, but if you do not want us holding the rights to your design, please do not submit your designs. We reserve the right to slightly alter or adjust any artwork we decide to use (mainly to make it easier to print), but we will make sure we contact the winner for approval first. The winner will also need to allow us the use of his/her name for purposes of promoting your design. 5. Entries must be entirely your own original work and must not breach any copyright or third party rights. Red Gate Software Ltd will not be made partially or fully liable for any non-original work submitted by you. 6. This competition is free: you do not need to buy anything or be an existing customer to enter. 7. This competition is not open to employees of Red Gate Software Ltd, their families, or any other company directly connected with the administration of this promotion.

    Read the article

  • Remote Debug Windows Azure Cloud Service

    - by Shaun
    Originally posted on: http://geekswithblogs.net/shaunxu/archive/2013/11/02/remote-debug-windows-azure-cloud-service.aspx On the 22nd of October Microsoft announced the new Windows Azure SDK 2.2. It introduced a lot of cool features, but the one that stood out most is remote debug support for Windows Azure Cloud Services (a.k.a. WACS).   Live Debug is a Nightmare for Cloud Applications When we are developing against the public cloud, debugging might be the most difficult task, especially after the application has been deployed. In order to minimize the debugging effort, Microsoft provided local emulators for cloud services and storage when the Windows Azure platform was announced. By using the local emulator, developers can run their application on a local machine with almost the same behavior as on Windows Azure, and it can be debugged easily and quickly. But once we deploy our application to Azure, we have to debug using logs and the diagnostics monitor, which is very inefficient. Visual Studio 2012 introduced a new feature named "anonymous remote debug" which allows any workstation, under any user, to attach to the remote process. This is less secure compared to authenticated remote debug, but much easier and simpler to use. Now, with Windows Azure SDK 2.2, we can attach to our application on Windows Azure from our local machine, and it's very easy.   How to Use the Remote Debugger First, let's create a new Windows Azure Cloud Project in Visual Studio and select ASP.NET Web Role. Then create an ASP.NET WebForm application. Then right click on the cloud project and select "publish". In the publish dialog we need to make sure the application will be built in debug mode, since a .NET assembly cannot be debugged in release mode. I enabled Remote Desktop as I will log into the virtual machine later in this post; it's NOT necessary for remote debugging. Then select the "advanced settings" tab and make sure "Enable Remote Debugger for all roles" is checked. In WACS, a cloud service can have one or more roles, and each role can have one or more instances. The remote debugger will be enabled for all roles and all instances if we check this. Currently there's no way for us to specify which role(s) and which instance(s) to enable. Finally, click the "publish" button. In the Windows Azure activity window in Visual Studio we can find some information about the remote debugger. Attaching to the remote process is easy. Open the "Server Explorer" window in Visual Studio and expand the "Cloud Services" node, find the cloud service, role and instance we had just published and want to debug, right click on the instance and select "attach debugger". Then, after a while (depending on how fast our Internet connection to the Windows Azure data center is), Visual Studio will switch to debug mode. Let's add a breakpoint in the default web page's form load function and refresh the page in the browser to see what happens. We can see that our application stopped at the breakpoint. The call stack and watch features are all available to use. Now let's hit F5 to continue, then back in the browser we will find the page rendered successfully.   What's Under the Hood The remote debugger is a WACS plugin. When we check "enable remote debugger" in the publish dialog, Visual Studio adds two cloud configuration settings to the CSCFG file. Since they are appended at deployment time, we cannot find them in our project's CSCFG file, but if we open the publish package we can find them as below.
At the same time, Visual Studio will generate a certificate and include it in the package for the remote debugger. If we go to the Azure management portal we will find a certificate under our application which was created and uploaded by the remote debugger plugin. Since I enabled Remote Desktop there are two certificates in the screenshot below; the other one is for the remote debugger. When our application is deployed, the Windows Azure system opens the related ports for the remote debugger. Below you can see there are two new ports opened on my application. Finally, on our WACS virtual machine, the Windows Azure system copies the remote debug component (based on which version of Visual Studio we are using) and starts it. Our application can then be debugged remotely through the Visual Studio remote debugger. Below is the task manager on the virtual machine of my WACS application.   Summary In this post I demonstrated one of the features introduced in Windows Azure SDK 2.2, the remote debugger. It allows us to attach to our application on a Windows Azure virtual machine from a local machine once it has been deployed. The remote debugger is powerful and easy to use, but it brings more security risk. And since it's only available for debug builds, the performance will be worse than a release build. Hence we should only use this feature for staging tests and bug fixes (publish our beta version to the Azure staging slot), rather than for production.   Hope this helps, Shaun All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.
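    For completeness, the "form load function" where the breakpoint goes in the demo is just an ordinary WebForm code-behind along the lines of the sketch below; the namespace and the lblMessage control are assumptions for illustration, not taken from the actual sample.
    using System;
    using System.Web.UI;

    namespace CloudWebRole // hypothetical web role project namespace
    {
        public partial class _Default : Page
        {
            protected void Page_Load(object sender, EventArgs e)
            {
                // Set the breakpoint on the line below; once the remote debugger is attached,
                // refreshing the page in the browser stops execution here on the cloud instance.
                var greeting = "Hello from a WACS instance";
                lblMessage.Text = greeting; // lblMessage is an assumed Label control on the page
            }
        }
    }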

    Read the article

  • Oracle Social Network Developer Challenge: Bezzotech

    - by Kellsey Ruppel
    Originally posted by Jake Kuramoto on The Apps Lab blog. I've covered all the entries we had for the Oracle Social Network Developer Challenge, the winners, Dimitri and Martin, HarQen, TEAM Informatics and John Sim from Fishbowl Solutions, and today, I'm giving you bonus coverage. Friend of the 'Lab, Bex Huff (@bex) from Bezzotech (@bezzotech), had an interesting OpenWorld. He rebounded from an allergic reaction to finish his entry, Honey Badger, only to have his other OpenWorld commitments make him unable to present his work. Still, he did a bunch of work, and I want to make sure everyone knows about the Honey Badger. If you're wondering about the name, it's a meme; "honey badger don't care." Bex tackled a common problem with social tools by adding game mechanics to create an incentive for people to keep their profiles updated. He used a Hot-or-Not style comparison app that poses expertise questions and awards a badge to the winner. Questions are based on whatever attributes the business wants to emphasize. The goal is to find the mavens in an organization, give them praise and recognition, ideally creating an incentive for everyone to raise their games. In his own words: There is a real information quality problem in social networks. In last year's keynote, Larry Ellison demonstrated how to use the social network to track down resources that have the skill sets needed for specific projects. But how well would that work in real life? People usually fill in the basic profile information, but they rarely update their profiles with the latest news items, projects, customers, or skills. It's a pain. Or, put another way, when was the last time you updated your LinkedIn profile? Enter the Honey Badger! This is an example of a comparator app that gamifies the way people keep their profiles updated, which ensures higher quality data in the social network. An administrator comes up with a series of important questions: Who is a better communicator? Who is a better Java programmer? Who is a better team player? And people would have a space in their profile to give a justification as to why they have these skills. The second part of the app is the comparator. It randomly shows two people, their names, and their justification for why they have these skills. You click on one of them to "vote" for them, then on the next page you see the results from the previous match and get 2 new people to vote on. Anybody with a winning score wins a "Honey Badge" to be displayed on their profile page, which proudly states that their peers agree that this person has those skills. Once a badge is won, it will be jealously guarded. The longer you go without updating your profile, the more likely it is that you will lose your badge. This "loss aversion" is well known in psychology, and is a strong incentive for people to keep their profiles up to date. If a user sees their rank drop from 90% to 60%, they will find the time to update their justification! Unfortunately, during the hackathon we were not allowed to modify the schema to allow for additional fields such as "justification." So this hack is limited to just the one basic question: who is the bigger Honey Badger?
Here are some shots of the Honey Badger application: Thanks to Bex and everyone for participating in our challenge. Despite very little time to promote this event, we had a great turnout and creative and useful entries. The amount of work required to put together these final entries was significant, especially during a conference, and the judges and all of us involved were impressed at how much work everyone was able to do. Congrats to everyone, pat yourselves on the back. Stay tuned if you're interested in challenges like these. We'll likely be running similar events in the not-so-distant future.
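    To make the mechanic above concrete, here is a rough sketch of the comparator and badge logic Bex describes: show two random profiles, record the vote, and only keep the badge while the win rate stays high and the profile stays fresh. The class names, the 60% threshold and the 30-day staleness window are my own illustration, not Bezzotech's code.
    using System;
    using System.Collections.Generic;

    // Illustrative only: a Hot-or-Not style comparator with a "loss aversion" hook.
    class Profile
    {
        public string Name;
        public int Wins;
        public int Matches;
        public DateTime LastUpdated;
        public double WinRate { get { return Matches == 0 ? 0 : (double)Wins / Matches; } }
    }

    class HoneyBadgerGame
    {
        private static readonly Random Rng = new Random();

        // Pick two distinct profiles to pit against each other for a question.
        public Tuple<Profile, Profile> NextMatch(IList<Profile> profiles)
        {
            int a = Rng.Next(profiles.Count);
            int b;
            do { b = Rng.Next(profiles.Count); } while (b == a);
            return Tuple.Create(profiles[a], profiles[b]);
        }

        // Record a vote and decide whether the winner currently holds the badge.
        public bool RecordVote(Profile winner, Profile loser)
        {
            winner.Wins++; winner.Matches++; loser.Matches++;
            bool stale = (DateTime.UtcNow - winner.LastUpdated).TotalDays > 30; // stale profiles risk the badge
            return winner.WinRate >= 0.6 && !stale;                             // a 60% win rate keeps it
        }
    }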

    Read the article

  • I have a problem with an AE1200 Cisco/Linksys Wireless-N USB adapter that stopped working after I ran the update manager in Ubuntu 12.04

    - by user69670
    Here is the problem: I use a Cisco/Linksys AE1200 wireless network adapter to connect my desktop to a public wifi internet connection. I use ndiswrapper to use the Windows driver and it had been working fine for me until I ran the update manager overnight a few days ago. When I woke up it was asking for the normal computer restart to implement the changes, but after rebooting the computer the wireless adapter did not work; the status light on the adapter did not light up, even though Ubuntu recognizes it is there and according to ndiswrapper the drivers are loaded and the hardware is present. The grep command is being a bitch for some unknown reason today so this will be long, sorry.
    Output from "lspci":
    00:00.0 Host bridge: Advanced Micro Devices [AMD] nee ATI Radeon Xpress 200 Host Bridge (rev 01)
    00:01.0 PCI bridge: Advanced Micro Devices [AMD] nee ATI RS480 PCI Bridge
    00:12.0 SATA controller: Advanced Micro Devices [AMD] nee ATI SB600 Non-Raid-5 SATA
    00:13.0 USB controller: Advanced Micro Devices [AMD] nee ATI SB600 USB (OHCI0)
    00:13.1 USB controller: Advanced Micro Devices [AMD] nee ATI SB600 USB (OHCI1)
    00:13.2 USB controller: Advanced Micro Devices [AMD] nee ATI SB600 USB (OHCI2)
    00:13.3 USB controller: Advanced Micro Devices [AMD] nee ATI SB600 USB (OHCI3)
    00:13.4 USB controller: Advanced Micro Devices [AMD] nee ATI SB600 USB (OHCI4)
    00:13.5 USB controller: Advanced Micro Devices [AMD] nee ATI SB600 USB Controller (EHCI)
    00:14.0 SMBus: Advanced Micro Devices [AMD] nee ATI SBx00 SMBus Controller (rev 13)
    00:14.1 IDE interface: Advanced Micro Devices [AMD] nee ATI SB600 IDE
    00:14.3 ISA bridge: Advanced Micro Devices [AMD] nee ATI SB600 PCI to LPC Bridge
    00:14.4 PCI bridge: Advanced Micro Devices [AMD] nee ATI SBx00 PCI to PCI Bridge
    01:05.0 VGA compatible controller: Advanced Micro Devices [AMD] nee ATI RC410 [Radeon Xpress 200]
    02:02.0 Communication controller: Conexant Systems, Inc. HSF 56k Data/Fax Modem
    02:03.0 Multimedia audio controller: Creative Labs CA0106 Soundblaster
    02:05.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL-8139/8139C/8139C+ (rev 10)
    Output from "lsusb":
    Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
    Bus 002 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
    Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
    Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
    Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
    Bus 006 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
    Bus 001 Device 009: ID 13b1:0039 Linksys AE1200 802.11bgn Wireless Adapter [Broadcom BCM43235]
    Bus 003 Device 002: ID 045e:0053 Microsoft Corp. Optical Mouse
    Bus 004 Device 002: ID 1043:8006 iCreate Technologies Corp. Flash Disk 32-256 MB
    Output from "ifconfig":
    eth0      Link encap:Ethernet  HWaddr 00:19:21:b6:af:7c
              UP BROADCAST MULTICAST  MTU:1500  Metric:1
              RX packets:0 errors:0 dropped:0 overruns:0 frame:0
              TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:1000
              RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)
              Interrupt:20 Base address:0xb400
    lo        Link encap:Local Loopback
              inet addr:127.0.0.1  Mask:255.0.0.0
              inet6 addr: ::1/128 Scope:Host
              UP LOOPBACK RUNNING  MTU:16436  Metric:1
              RX packets:13232 errors:0 dropped:0 overruns:0 frame:0
              TX packets:13232 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0
              RX bytes:1084624 (1.0 MB)  TX bytes:1084624 (1.0 MB)
    Output from "iwconfig":
    lo        no wireless extensions.
    eth0      no wireless extensions.
Output from "lsmod": Module Size Used by nls_iso8859_1 12617 1 nls_cp437 12751 1 vfat 17308 1 fat 55605 1 vfat uas 17828 0 usb_storage 39646 1 nls_utf8 12493 1 udf 84366 1 crc_itu_t 12627 1 udf snd_ca0106 39279 2 snd_ac97_codec 106082 1 snd_ca0106 ac97_bus 12642 1 snd_ac97_codec snd_pcm 80845 2 snd_ca0106,snd_ac97_codec rfcomm 38139 0 snd_seq_midi 13132 0 snd_rawmidi 25424 2 snd_ca0106,snd_seq_midi bnep 17830 2 parport_pc 32114 0 bluetooth 158438 10 rfcomm,bnep ppdev 12849 0 snd_seq_midi_event 14475 1 snd_seq_midi snd_seq 51567 2 snd_seq_midi,snd_seq_midi_event snd_timer 28931 2 snd_pcm,snd_seq snd_seq_device 14172 3 snd_seq_midi,snd_rawmidi,snd_seq snd 62064 11 snd_ca0106, snd_ac97_codec,snd_pcm,snd_rawj9fe snd_ca0106,snd_ac97_codec,snd_pcm,snd_rawmidi,snd_seq,snd_timer,snd_seq_device soundcore 14635 1 snd snd_page_alloc 14108 2 snd_ca0106,snd_pcm sp5100_tco 13495 0 i2c_piix4 13093 0 radeon 733693 3 ttm 65344 1 radeon drm_kms_helper 45466 1 radeon drm 197692 5 radeon,ttm,drm_kms_helper i2c_algo_bit 13199 1 radeon mac_hid 13077 0 shpchp 32325 0 ati_agp 13242 0 lp 17455 0 parport 40930 3 parport_pc,ppdev,lp usbhid 41906 0 hid 77367 1 usbhid 8139too 23283 0 8139cp 26759 0 pata_atiixp 12999 1 Output from "sudo lshw -C network": *-network description: Ethernet interface product: RTL-8139/8139C/8139C+ vendor: Realtek Semiconductor Co., Ltd. physical id: 5 bus info: pci@0000:02:05.0 logical name: eth0 version: 10 serial: 00:19:21:b6:af:7c size: 10Mbit/s capacity: 100Mbit/s width: 32 bits clock: 33MHz capabilities: pm bus_master cap_list ethernet physical tp mii 10bt 10bt-fd 100bt 10 0bt-fd autonegotiation configuration: autonegotiation=on broadcast=yes driver=8139too driverversion=0.9.28 duplex=half latency=64 link=no maxlatency=64 mingnt=32 multicast=yes port=MII speed=10Mbit/s resources: irq:20 ioport:b400(size=256) memory:ff5fdc00-ff5fdcff Output from "iwlist scan": lo Interface doesn't support scanning. eth0 Interface doesn't support scanning. Output from "lsb_release -d": Ubuntu 12.04 LTS Output from "uname -mr": 3.2.0-24-generic-pae i686 Output from "sudo /etc/init.d/networking restart": * Running /etc/init.d/networking restart is deprecated because it may not enable again some interfaces * Reconfiguring network interfaces... [ OK ]

    Read the article

  • Content Query Web Part and the Yes/No Field

    - by Bil Simser
    The Content Query Web Part (CQWP) is a pretty powerful beast. It allows you to do multiple site queries and aggregate the results. This is great for rolling up content and doing some summary type reporting. Here’s a trick to remember about Yes/No fields and using the CQWP. If you’re building a news style site and want to aggregate say all the announcements that people tag a certain way, up onto the home page this might be a solution. First we need to allow a way for users of all our sites to mark an announcement for inclusion on our Intranet Home Page. We’ll do this by just modifying the Announcement Content type and adding a Yes/No field to it. There are alternate ways of doing this like building a new Announcement type or stapling a feature to all sites to add our column but this is pretty low impact and only affects our current site collection so let’s go with it for now, okay? You can berate me in the comments about the proper way I should have done this part. Go to the Site Settings for the Site Collection and click on Site Content Types under the Galleries. This takes you to the gallery for this site and all subsites. Scroll down until you see the List Content Types and click on Announcements. Now we’re modifying the Announcement content type which affects all those announcement lists that are created by default if you’re building sites using the Team Site template (or creating a new Announcements list on any site for that matter). Click on Add from new site column under the Column list. This will allow us to create a new Yes/No field that users will see in Announcement items. This field will allow the user to flag the announcement for inclusion on the home page. Feel free to modify the fields as you see fit for your environment, this is just an example. Now that we’ve added the column to our Announcements Content type we can go into any site that has an announcement list, modify that announcement and flag it to be included on our home page. See the new Featured column? That was the result of modifying our Announcements Content Type on this site collection. Now we can move onto the dirty part, displaying it in a CQWP on the home page. And here is where the fun begins (and the head scratching should end). On our home page we want to drop a Content Query Web Part and aggregate any Announcement that’s been flagged as Featured by the users (we could also add the filter to handle Expires so we don’t show old content so go ahead and do that if you want). First add a CQWP to the page then modify the settings for the web part. In the first section, Query, we want the List Type to be set to Announcements and the Content type to be Announcement so set your options like this: Click Apply and you’ll see the results display all Announcements from any site in the site collection. I have five team sites created each with a unique announcement added to them. Now comes the filtering. We don’t want to include every announcement, only ones users flag using that Featured column we added. At first blush you might scroll down to the Additional Filters part of the Query options and set the Featured column to be equal to Yes: This seems correct doesn’t it? After all, the column is a Yes/No column and looking at an announcement in the site, it displays the field as Yes or No: However after applying the filter you get this result: (I have the announcements from Team Site 1 and Team Site 4 flagged as Featured) Huh? It’s BACKWARDS! Let’s confirm that. 
Go back in and change the Additional Filters section from Yes to No and hit Apply, and you get this: Wait a minute? Shouldn't I see Team Site 1 and 4 if the logic is backwards? Why am I seeing the same thing as before? What gives… For whatever reason, unknown to me, a Yes/No field (even though it displays as such) really uses 1 and 0 behind the scenes. Yeah, someone was stuck on using integer values for booleans when they wrote SharePoint (probably after a long night of whiteboarding ways to mess with developers' heads) and came up with this. The solution is pretty simple but not very discoverable. Set the filter to the value 1 to include your flagged items, like so: And it will filter the items marked as Featured correctly, giving you this result: This kind of solution could also be extended and enhanced. Here are a few suggestions and ideas:
Modify the ItemStyle.xsl file to add a new style for this aggregation which would include the first few paragraphs of the body (or perhaps add another field to the Content type called Excerpt or Summary and display that instead)
Add an Image column to the Announcement Content type to include a Picture field and display it in the summary
Add a Category choice field (Employee News, Current Events, Headlines, etc.) and add multiple CQWPs to the home page, filtering each one on a different category
I know some may find this topic old and dusty, but I didn't see a lot out there specifically on filtering Yes/No fields, and the whole 1/0 trick was a little wonky, so I figured a few pictures would help walk through overcoming yet another SharePoint weirdness. With a little work and some creative juices you can easily use the power of aggregation and the CQWP to build a news site from content on your team sites.
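    The same 1/0 storage shows up if you query the list in code rather than through the CQWP settings panel. Below is a minimal sketch using the SharePoint server object model and CAML; the site URL, list name and the Featured field name are placeholders, and the Value type used here (Integer) is an assumption worth verifying in your own environment.
    using System;
    using Microsoft.SharePoint;

    // Sketch only: pull announcements whose Yes/No "Featured" column is checked (stored as 1).
    class FeaturedAnnouncements
    {
        static void Main()
        {
            using (SPSite site = new SPSite("http://intranet/sites/teamsite"))
            using (SPWeb web = site.OpenWeb())
            {
                SPList list = web.Lists["Announcements"];
                SPQuery query = new SPQuery
                {
                    Query = "<Where><Eq><FieldRef Name='Featured'/>" +
                            "<Value Type='Integer'>1</Value></Eq></Where>"
                };
                foreach (SPListItem item in list.GetItems(query))
                {
                    Console.WriteLine(item.Title); // flagged announcements only
                }
            }
        }
    }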

    Read the article

  • ODI 12c - Aggregating Data

    - by David Allan
    This posting will look at the aggregation component that was introduced in ODI 12c. For many ETL tool users this shouldn't be a big surprise; it's a little different than ODI 11g, but for good reason. You can use this component for composing data with relational-like operations such as sum, average and so forth. Also, Oracle SQL supports special functions called analytic SQL functions; in ODI 12c you can use a specially configured aggregation component or the expression component for these. In database systems an aggregate transformation is a transformation where the values of multiple rows are grouped together as input on certain criteria to form a single value of more significant meaning - that's exactly the purpose of the aggregate component. In the image below you can see the aggregate component in action within a mapping; for how this and a few other examples are built, look at the ODI 12c Aggregation Viewlet here - the viewlet illustrates a simple aggregation being built and then some Oracle analytic SQL such as AVG(EMP.SAL) OVER (PARTITION BY EMP.DEPTNO) built using both the aggregate component and the expression component. In 11g you used to just write the aggregate expression directly on the target. This made life easy for some cases, but it wasn't a very obvious gesture, plus it had other drawbacks with the ordering of transformations (aggregate before join/lookup, after set and so forth) and with supporting analytic SQL, for example - there are a lot of postings from creative folks working around this in 11g, anything from customizing KMs to bypassing aggregation analysis in the ODI code generator. The aggregate component has a few interesting aspects.
    1. First and foremost, it defines the attributes projected from it - ODI will automatically perform the grouping; all you do is define the aggregation expressions for the columns being aggregated. In 12c you can control this automatic grouping behavior so that you get the code you desire, so you can indicate that an attribute should not be included in the group by; that's what I did in the analytic SQL example using the aggregate component.
    2. The component has a few other properties of interest; it has a HAVING clause and a manual group by clause. The HAVING clause includes a predicate used to filter rows resulting from the GROUP BY clause. Because it acts on the results of the GROUP BY clause, aggregation functions can be used in the HAVING clause predicate. In 11g the filter was overloaded and used for both the having clause and the filter clause; this is no longer the case. If a filter is after an aggregate, it is after the aggregate (not sometimes after, sometimes having).
    3. The manual group by clause lets you use special database grouping grammar if you need to. For example, Oracle has a wealth of highly specialized grouping capabilities for data warehousing, such as the CUBE function. If you want to use specialized functions like that, you can manually define the code here. The example below shows the use of a manual group by from an example in the Oracle Database data warehousing guide where the SUM aggregate function is used along with the CUBE function in the group by clause.
The SQL I am trying to generate looks like the following, from the data warehousing guide:
SELECT channel_desc, calendar_month_desc, countries.country_iso_code,
       TO_CHAR(SUM(amount_sold), '9,999,999,999') SALES$
FROM sales, customers, times, channels, countries
WHERE sales.time_id = times.time_id
  AND sales.cust_id = customers.cust_id
  AND sales.channel_id = channels.channel_id
  AND customers.country_id = countries.country_id
  AND channels.channel_desc IN ('Direct Sales', 'Internet')
  AND times.calendar_month_desc IN ('2000-09', '2000-10')
  AND countries.country_iso_code IN ('GB', 'US')
GROUP BY CUBE(channel_desc, calendar_month_desc, countries.country_iso_code);
I can capture the source datastores, the filters and joins using ODI's dataset (or as a traditional flow), which enables us to incrementally design the mapping and the aggregate component for the sum and group by as follows. In the above mapping you can see the joins and filters declared in ODI's dataset, allowing you to capture the relationships of the datastores required in an entity-relationship style, just like ODI 11g. The mix of ODI's declarative design and the common flow design provides for a familiar design experience. The example below illustrates flow design (basic arbitrary ordering) - a table load where only the employees who have the maximum commission are loaded into a target. The maximum commission is retrieved from the bonus datastore, and there is a lookup using employees as the driving table, with only those having the maximum commission projected. Hopefully this has given you a taster of some of the new capabilities provided by the aggregate component in ODI 12c. In summary, the actions should be much more consistent in behavior and more easily discoverable for users, and the use of the components in a flow graph also supports arbitrary designs; the tool (rather than the interface designer) takes care of the realization using ODI's knowledge modules. I'd be interested to know if a deep dive into each component would be interesting for folks. Any thoughts?

    Read the article

< Previous Page | 19 20 21 22 23 24 25 26 27 28  | Next Page >