Search Results

Search found 15578 results on 624 pages for 'place and route'.


  • Too Clever for My Own Good

    - by AjarnMark
    Yesterday I caught myself being a little too clever for my own good with some ASP.NET code.  It seems that I have forgotten some of my good old classic HTML and JavaScript skills, and become too dependent on the .NET Framework and WebControls to do the work for me.  Here’s the scenario… In order to improve the User Interface and better communicate to the user when something is happening that they need to wait for, we have started to modify some of our larger (slower) pages to display messages like Processing… or Reloading… while they are cycling through a postback.  (Yes, I understand this could be improved by using AJAX / Callbacks and so on, but even then, you need to let your user know that they need to wait for that section to be re-rendered, so for the moment these pages will continue to use good ol’ Postbacks.)  It’s a very simple trick, really.  All I want to do is when some control triggers a postback, first run a little client-side JavaScript to hide the main contents of the page (such as a GridView) and display the appropriate message.  This lets the user know, “Hey, we’re doing something, don’t click another link or scroll and try to take action right now.” The first places I hooked this up were easy.  Most common cause of a postback:  Buttons.  And when you’re writing the markup or declarative code for an ASP:Button control, there is the handy OnClientClick property which is designed for just this purpose…to run client-side JavaScript before the postback occurs.  This is distinguished from the OnClick property which tells the control what Server-side code to run.  Great!  Done!  Easy! But then there are other controls like DropDownLists and CheckBoxes that we use on our pages with the AutoPostback=True setting which cause postbacks.  And these don’t have OnClientClick or OnClientSelectedIndexChanged events.  So I started getting creative, using an ASP:CustomValidator control in conjunction with setting the CausesValidation and ValidationGroup settings on these controls, which basically caused the action on the control to fire the Custom Validator, which was defined with a Client Side validation function which then did the hide content/show message code (and return a meaningless IsValid setting).  This also caused me to define a different ValidationGroup setting for my real data entry validator controls so that I could control them separately and only have them fire when I really wanted validation, and not just my show/hide trick. For a little while I was pretty proud of myself for coming up with this clever approach to get around what I considered to be a serious oversight on the DropDownList and CheckBox controls declarative syntax.  Then, in the midst of my smugness, just as I was about to commit my changes to the source code repository, it dawned on me that there is a much simpler and much more appropriate way to accomplish this.  All that I really needed to do was to put in my server-side code (I used the Page_Init section) a call to MyControl.Attributes.Add(“onClick”, “myJavaScriptFunctionName()”) for the checkboxes, and for the DropDownLists (which become select tags) use “onChange” instead of “onClick”.  This is exactly the type of thing that the Attributes collection is there for…so you can add attributes to be rendered with the control that you would have otherwise stuck right into the HTML markup if you had been doing this by hand in the first place. Ugh!  
A few hours wasted on clever tricks that I ended up completely removing, but I did learn a lot more about custom validators and validation groups in the process.  And got a good reminder that all that stuff (HTML, JavaScript, and CSS) I learned back when I wrote classic ASP pages is still valuable today.  Oh, and one more thing…don’t get lulled into too much reliance on the whiz-bang tool to do it for you.  After all, WebControls are just another layer of abstraction, and sometimes you need to dig down through the layers and get a little closer to the native language.
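    For illustration, here is a minimal sketch of the Attributes.Add approach described above. The control IDs and the showWaitMessage() JavaScript function are hypothetical stand-ins for whatever the page actually uses to hide the GridView and show the Processing… message:

        // Attach client-side handlers from server code instead of abusing a CustomValidator.
        protected void Page_Init(object sender, EventArgs e)
        {
            // A CheckBox renders as <input type="checkbox">, so hook the client-side click event.
            chkShowInactive.Attributes.Add("onclick", "showWaitMessage();");

            // A DropDownList renders as a <select> tag, so hook onchange instead of onclick.
            ddlDepartment.Attributes.Add("onchange", "showWaitMessage();");
        }

    The same call is available on any WebControl, so the trick covers other AutoPostBack controls as well.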

    Read the article

  • Conference networking for the socially awkward

    - by Melanie Townsend
    Do you approach a room full of strangers with excitement at all the new people you’re going to chat to over coffee and a muffin as you swap tales of how you convinced your manager to give you the day “off”? Or, do you find rooms full of strangers intimidating and begin by scouting out a place you can stand quietly and not be in someone’s way until the next session begins? If you’re on the train to extrovert city, that’s great, well done, move along. If, on the other hand, a room full of strangers who all seem to inexplicably know each other already is more challenge than opportunity, then making those connections with other professionals can be more difficult. So, here’s some advice, some gleaned from other things I’ve read online when trying to overcome my own discomfort in large groups (hopefully minus the infuriating condescension), others are just things I’ve found helpful over the years.
    Start small
    Smaller groups are less intimidating, and, now that you’ve taken the plunge to show up, it’s harder to remain inconspicuous. I find it’s easier to speak to new people once the option NOT to has been taken away. You’re there now, smile through the awkward and you’ll be forever grateful when the three people you’ve met and gotten to know here are also at that gigantic conference later on (ideally, introducing you to other people).
    Smile, or at the very least, stop scowling
    You probably don’t even know you’re doing it. If your resting face doesn’t come across as manically happy, tinge that with some social anxiety and you become one great ball of unapproachable. Normally, I wouldn’t suggest this as a problem that needs fixing, I have personally honed this face to use while travelling alone all the time. However, if you are indeed hoping to meet some useful people and get the most out of this conference, you may need to remind yourself to smile.
    Prepare some ice breakers
    This is going to sound stupid, like “no one does this right?” stupid, but, just, trust me a minute. It’s okay to prepare. You don’t need to write word-for-word questions to ask people and practice them in a mirror – that would be strange. I’m suggesting to just have an arsenal of questions to ask people if you get stuck, what session has been your favorite, which ones are you most looking forward to, have you heard X presenter speak before, what did you think of them? Even just thinking about these things in advance can help, and, as a bonus, while the other person is answering it gives you a moment to tamp down that panic, I mean breathe, I mean get to know them.
    You’re not alone (in the least creepy way possible)
    See that person in the corner clutching their phone with a mild deer-in-the-headlights look?  That is potentially your new conference buddy. Starting with something along the lines of: I don’t know about you, the sessions here are great but I find the crowds a little tough to deal with. Mind if I park here for a second? is a decent opener. Just walking around and looking at exhibitors (if applicable) is fine, but it’s a little too easy to wander about and not actually speak to anyone if that’s all you’re doing. If joining a group of people talking is too much to start with, one-on-one can be easier.
    Have goals
    Are there people in particular you wanted to speak to? Did you have a personal goal of speaking to at least “x” new people? Are you trying to get a contact in a specific company because you want to work with them on something?
    Does the business have vague goals as well that you may or may not be judged on later? Making specific goals you can accomplish lets you know whether you’ve actually succeeded in your “networking pursuits” or what you need to work on more for next time. Everyone’s got their own coping technique. Some people are able to remind themselves that “humans are fundamentally social creatures” and somehow that helps them, others drink, which is not really something I recommend for professional conferences, but to each their own, and some focus on the fact that networking can play a big role in their career path. Just do what works for you, and if there’re any tricks you’ve found helpful over the years, please share ’em.

    Read the article

  • Spotlight on an office - Denmark

    - by jessica.ebbelaar(at)oracle.com
    Hi, my name is Michael. I work as an Intern at the Danish office in Ballerup. My job is a part-time position beside my bachelor study in International Business at Copenhagen Business School. I joined Oracle at the end of February last year, and what a thrilling ride it has been! Last year, when I was offered the position, there was no doubt that I wanted to go for it. Back then, I had only a little idea about Oracle as a company and what kind of exciting assignments lay ahead of me.
    My main role is internal communications, i.e. editor of a monthly employee newsletter, Newszone. It is an interesting task, since it requires that I am updated on the different activities that take place within the Oracle Denmark office. I try to bring articles which are relevant and interesting to my colleagues, and it allows me to interact with many different people at the office and to learn from their experience, which gives me great inspiration and ideas for the magazine. Besides being the editor of Newszone, I also make sure that other communication flows freely at the Oracle Denmark office. I do this through our LCD screen channels. I update the internal channel with the latest information and important messages for employees, and on the external channel I circulate marketing videos featuring Oracle products and customer reference stories. In addition to this, I have the responsibility of acting as a content manager of the Local Communication Denmark site on MyOracle (UCM). These are more or less my usual work assignments. On top of these I take care of various ad hoc assignments such as updating the GCM database, renewing newspaper subscriptions etc.
    The Oracle Denmark office
    Being part of the local employees club I also assist with arranging social events outside working hours – e.g. evenings at the theater or cinema or by attending many of the sports activities, such as our running club, cycling club, food club and book club. These activities have indeed helped me grow my personal network within Oracle.  The office is packed with engaging, high-paced and motivated people who manage to take time off to spend a day attending Corporate Social Responsibility initiatives, one of them being GVD (Global Volunteer Day) with approximately 40 employees attending. This proves some of the socially responsible aspects of Oracle. I was positively surprised by how the office (named O-Zone) is designed. The office is designed into three distinct zones, namely the Call zone, the Project and Dialogue zone and the Quiet zone, providing different working environments for different job roles. The other thing which I like is that you do not have your own desk, which means you get to sit next to different people every day, getting new ideas and inspiration as well as getting to know more people in the organization you work in.
    To sum up: If you are considering pursuing an internship or a career after graduation at Oracle, do it! You will not regret it. It has given me many relevant practical experiences besides my studies, and I am sure many great experiences await you too.   Want to know more about the current vacancies in Denmark? Check http://campus.oracle.com for all of our vacancies.

    Read the article

  • Sweet and Sour Source Control

    - by Tony Davis
    Most database developers don't use Source Control. A recent anonymous poll on SQL Server Central asked its readers "Which Version Control system do you currently use to store your database scripts?" The winner, with almost 30% of the vote was...none: "We don't use source control for database scripts". In second place with almost 28% of the vote was Microsoft's VSS. VSS? Given its reputation for being buggy, unstable and lacking most of the basic features required of a proper source control system, answering VSS is really just another way of saying "I don't use Source Control". At first glance, it's a surprising thought. You wonder how database developers can work in a team and find out what changed when the system worked before but is now broken; work out what happened to their changes that now seem to have vanished; roll back a mistake quickly so that the rest of the team have a functioning build; or find instantly whether a suspect change has been deployed to production.
    Unfortunately, the survey didn't ask about the scale of the database development, and correlate the two questions. If there is only one database developer within a schema, who has an automated approach to regular generation of build scripts, then the need for a formal source control system is questionable. After all, a database stores far more about its metadata than a traditional compiled application. However, what is meat for a small development is poison for a team-based development. Here, we need a form of Source Control that can reconcile simultaneous changes, store the history of changes, derive versions and builds and that can cope with forks and merges. The problem comes when one borrows a solution that was designed for conventional programming. A database is not thought of as a "file", but a vast, interdependent and intricate matrix of tables, indexes, constraints, triggers, enumerations, static data and so on, all subtly interconnected. It is an awkward fit.
    Subversion with its support for merges and forks, and the tolerance of different work practices, can be made to work well, if used carefully. It has a standards-based architecture that allows it to be used on all platforms such as Windows, Mac, and Linux. In the words of Erland Sommarskog, developers should "just do it". What's in a database is akin to a "binary file", and the developer must work only from the file. You check out the file, edit it, and save it to disk to compile it. Dependencies are validated at this point and if you've broken anything (e.g. you renamed a column and broke all the objects that reference the column), you'll find out about it right away, and you'll be forced to fix it. Nevertheless, for many this is an alien way of working with SQL Server. Subversion is the powerhouse, not the GUI. It doesn't work seamlessly with your existing IDE, and that usually means SSMS. So the question then becomes more subtle. Would developers be less reluctant to use a fully-featured source (revision) control system for a team database development if they had a turn-key, reliable system that fitted in with their existing work-practices? I'd love to hear what you think. Cheers, Tony.

    Read the article

  • View AccuWeather Forecasts in Google Chrome

    - by Asian Angel
    Being able to keep an eye on the weather while at work or browsing the Internet is definitely helpful. If you like detailed forecasts then join us as we take a look at the Forecastfox Weather extension for Google Chrome. Getting Started As soon as the Forecastfox Weather extension has finished installing you will automatically be presented with the “Customize Forecastfox Page”. The default setting is for New York with English measurement units. Enter your location into the blank and hit “Enter” to display the listing for your city/area. If you are presented multiple options to choose from simply click on the appropriate listing. Once you have your city/area displayed you will notice that it is possible to have access to weather forecasts for multiple locations. You can easily remove any unneeded listings with the “Remove Link”. For our example we removed the New York listing. Note: Click on desired locations and measurement units to automatically set them as defaults (no save button required). Forecastfox Weather in Action You can hover your mouse over the “Toolbar Button” to see the current weather conditions. Clicking on the “Toolbar Button” opens a popup window with the current conditions, 7 day forecast, and a static satellite image. If desired you can access additional details for the current weather conditions. Clicking on “details” opens a new tab with a nice bit of information such as UV Index, Moon Phases, Cloud Ceiling, etc. Note: AccuWeather.com webpages will have some ads displayed. Perhaps you need the Hourly Forecast… Once again a new tab will be opened with the predicted hourly weather conditions for the current day. Going back to the popup window you may also select a specific day from the 7 day forecast. You will be presented with a “Day & Night” forecast for the chosen day with links to view “Additional Details & Hourly” information. Interested in the satellite image instead? You can click on either of the available links for larger images. Once the new tab is open you can choose from a variety of different satellite images. Conclusion If you have been wanting a solid weather forecast extension for your Chrome browser then Forecastfox Weather is definitely a recommended install. Links Download the Forecastfox Weather extension (Google Chrome Extensions)

    Read the article

  • My First Iteration Zero

    - by onefloridacoder
    I recently watched a web cast that covered the idea of planning from the concept stage to the product backlog.  It was the first content I had seen related to Iteration Zero and it made a lot of sense from a planning and engagement perspective where the customer is concerned.  It illuminated some of the problems I’ve experienced with getting a large project off the ground.  The idea behind this is to get everyone to understand what needs to be constructed and to build the initial feature set from a *very* high level.  Once that happens other parts of the high level construction start to take place.  You end up with a feature list that describes what the business wants the system to do, and what it potentially may (or may not) interact with.  Low tech tools are used to create UI mockups that can be used as a starting point for some of the key UI pieces.
    Toward the end of the webcast the speaker introduced something that was new to me.  He referred to it as an executable skeleton or the steel thread.  The idea with this part of the webcast was to describe walking through the different mocked layers of the application.  Not all layers and collaborators are involved at this stage since it’s Iteration Zero, and each layer is either hard-coded or completely mocked to provide a 35K foot view of how the different layers work together.  So imagine two actors on each side of a layer diagram and the flow goes down from the upper left side through a consumer, through a service layer and then back up the service layer to the destination/actor. I would imagine much could be discussed moving through new/planned or existing/legacy layers, or a little of both to see what’s implied by the current high-level design.
    One part of the web cast has the business and design team creating the product box (think of your favorite cereal or toy box) with all of the features and even pictures laid out on the outside of the box.  The notion here is that if you handed this box to someone and told them your system was inside they would have an understanding of what the system would be able to do, or the features it could provide.    One of the interesting parts of the webcast was where the speaker described that he worked with a couple of groups in the same room and each group came up with a different product box – the point is that each group had a different idea of what the system was supposed to do.  At this point of the project I thought that to be valuable, considering my experience has been that historically it has taken longer than a week to realize that the business unit and design teams see the high level solution differently.  Once my box is finished I plan on moving to the next stage of solution definition which is to plan the UI for this small application using Excel, to map out the UI elements.  I’m my own customer so it feels like cheating, but taking these slow deliberate steps has already provided a few learning opportunities.    So I resist the urge to load all of my user stories into my newly installed VS2010 TFS project and try to reduce, or add to, the number of user stories and/or refine the high level estimates I’ve come up with so far.

    Read the article

  • JavaOne Latin America 2012 Trip Report

    - by reza_rahman
    JavaOne Latin America 2012 was held at the Transamerica Expo Center in Sao Paulo, Brazil on December 4-6. The conference was a resounding success with a great vibe, excellent technical content and numerous world class speakers. Some notable local and international speakers included Bruno Souza, Yara Senger, Mattias Karlsson, Vinicius Senger, Heather Vancura, Tori Wieldt, Arun Gupta, Jim Weaver, Stephen Chin, Simon Ritter and Henrik Stahl. Topics covered included the JCP/JUGs, Java SE 7, HTML 5/WebSocket, CDI, Java EE 6, Java EE 7, JSF 2.2, JMS 2, JAX-RS 2, Arquillian and JavaFX.
    Bruno Borges and I manned the GlassFish booth at the Java Pavilion on Tuesday and Wednesday. The booth traffic was decent and not too hectic. We met a number of GlassFish adopters including perhaps one of the largest GlassFish deployments in Brazil as well as some folks migrating to Java EE from Spring. We invited them to share their stories with us. We also talked with some key members of the local Java community. Tuesday evening we had the GlassFish party at the Tribeca Pub. The party was definitely a hit and we could have used a larger venue (this was the first time we had the GlassFish party in Brazil). Along with GlassFish enthusiasts, a number of Java community leaders were there. We met some of the same folks again at the JUG leader's party on Wednesday evening.
    On Thursday Arun Gupta, Bruno Borges and I ran a hands-on-lab on JAX-RS, WebSocket and Server-Sent Events (SSE) titled "Developing JAX-RS Web Applications Utilizing Server-Sent Events and WebSocket". This is the same Java EE 7 lab run at JavaOne San Francisco. The lab provides developers a first-hand glimpse of what an HTML 5 powered Java EE application might look like. We had an overflow crowd for the lab (at one point we had about twenty people standing) and the lab went very well. The slides for the lab are here: Developing JAX-RS Web Applications Utilizing Server-Sent Events and WebSocket from Reza Rahman The actual contents for the lab are available here. Give me a shout if you need help getting it up and running.
    I gave two solo talks following the lab. The first was on JMS 2 titled "What’s New in Java Message Service 2". This was essentially the same talk given by JMS 2 specification lead Nigel Deakin at JavaOne San Francisco. I talked about the JMS 2 simplified API, JMSContext injection, delivery delays, asynchronous send, JMS resource definition in Java EE 7, standardized configuration for JMS MDBs in EJB 3.2, mandatory JCA pluggability and the like. The session went very well, there was good Q & A and someone even told me this was the best session of the conference! The slides for the talk are here: What’s New in Java Message Service 2 from Reza Rahman My last talk for the conference was on JAX-RS 2 in the keynote hall. Titled "JAX-RS 2: New and Noteworthy in the RESTful Web Services API" this was basically the same talk given by the specification leads Santiago Pericas-Geertsen and Marek Potociar at JavaOne San Francisco. I talked about the JAX-RS 2 client API, asynchronous processing, filters/interceptors, hypermedia support, server-side content negotiation and the like. The talk went very well and I got a few very kind compliments afterwards.
    The slides for the talk are here: JAX-RS 2: New and Noteworthy in the RESTful Web Services API from Reza Rahman On a more personal note, Sao Paulo has always had a special place in my heart as the incubating city for Sepultura and Soulfly -- two of my favorite heavy metal musical groups of all time! Consequently, the city has a perpetually alive and kicking metal scene pretty much any given day of the week. This time I got to check out a solid performance by local metal band Republica at the legendary Manifesto Bar. I also wanted to see a Dio Tribute at the Blackmore but ran out of time and energy... Overall I enjoyed the conference/Sao Paulo and look forward to going to Brazil again next year!

    Read the article

  • OEG11gR2 integration with OES11gR2 Authorization with condition

    - by pgoutin
    Introduction
    This OES use-case was defined originally by Subbu Devulapalli (http://accessmanagement.wordpress.com/).  Based on this OES museum use-case, I have developed the OEG 11gR2 policy able to deal with the OES authorization with condition. From an OEG point of view, the way to deal with an OES condition is to provide some Environmental / Context Attributes with the OES request.
    Museum Use-Case
    All paintings in the museum have security sensors; an alarm goes off when a person comes too close to a painting. The employee designated for maintenance needs to use their ID and disable the alarm before maintenance. You are the Security Administrator for the museum and you have been tasked with creating authorization policies to manage authorization for different paintings. Your first task is to understand how paintings are organized. Asking around, you are surprised to see that there is no formal process in place, so you need to start from scratch. The museum tracks the following attributes for each painting:
    1. Name of the work
    2. Painter
    3. Condition (good/poor)
    4. Cost
    You compile the list of paintings:
    Name of Painting   Painter             Paint Condition   Cost
    Mona Lisa          Leonardo da Vinci   Good              100
    Magi               Leonardo da Vinci   Poor               40
    Starry Night       Vincent Van Gogh    Poor               75
    Still Life         Vincent Van Gogh    Good               25
    Being a software geek who doesn’t (yet) understand art, you feel that the price (or insurance price) of a painting is the most important criterion. So you feel that, based on years of experience, employees can be tasked with maintaining different paintings. You decide that paintings with a cost over 50 should only be handled by employees with over 20 years of experience, and employees with less than 10 years of experience should not handle any painting. Let us start with policy modeling. All paintings have a common set of attributes and actions, so it will be good to have them under a single Resource Type. Based on this resource type we will create the actual resources. So our high level model is:
    1) Resource Type: Painting, which has the action manage and the following four attributes: a) Name of the work, b) Painter, c) Condition (good/poor), d) Cost
    2) To keep things simple let us use the painting name for the Resource name (in the real world you will try to use some identifier which is unique, because in the future we may end up with more than one painting which has the same name.)
    3) Create Resources based on the previous table
    4) Create an identity attribute Experience (Integer)
    5) Create the following authorization policies:
    a) Allow employees with over 20 years of experience to access all paintings
    b) Allow employees with 10 – 20 years of experience to access paintings which cost less than 50
    c) Deny access to all paintings for employees with less than 10 years of experience
    OES Authorization Configuration
    We need to create two authorization policies with specific conditions: a) allow employees with over 20 years of experience to access all paintings, and b) allow employees with 10 – 20 years of experience to access paintings which cost less than 50. We don’t need an explicit policy for c) – deny access to all paintings for employees with less than 10 years of experience – because Oracle Entitlements Server will automatically deny if there is no matching policy.
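    Purely as an illustration of the intended decision logic (this is not OES policy syntax; the method and parameter names are hypothetical), the two policies plus the implicit deny behave like this:

        // Sketch of the museum authorization rules in plain C# -- not OES configuration.
        static bool CanManagePainting(int yearsOfExperience, int paintingCost)
        {
            if (yearsOfExperience > 20) return true;                // policy a): all paintings
            if (yearsOfExperience >= 10) return paintingCost < 50;  // policy b): only paintings costing less than 50
            return false;                                           // no matching policy: OES denies by default
        }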
    OEG Policy
    The OEG policy and the 11g Authorization filter configuration are shown in the screenshots accompanying the original post. The ${PAINTING_NAME} and ${USER_EXPERIENCE} variables are initialized by the "Retrieve from the HTTP header" filters for testing purposes. That is to say, under Service Explorer, we need to provide the two attributes "Experience" and "Painting" expected by the OES 11g Authorization filter described above.

    Read the article

  • Lessons Building KeyRef (a .NET developer learning Rails)

    - by Liam McLennan
    Just because I like to build things, and I like to learn, I have been working on a keyboard shortcut reference site. I am using this as an opportunity to improve my Ruby and Rails skills. The first few days were frustrating. Perhaps the learning curve of all the fun new toys was a bit excessive. Finally tonight things have really started to come together. I still don’t understand the Rails built-in testing support but I will get there.
    Interesting Things I Learned Tonight
    RubyMine IDE
    Tonight I switched to RubyMine instead of my usual Notepad++. I suspect RubyMine is a powerful tool if you know how to use it – but I don’t. At the moment it gives me errors about some gems not being activated. This is another one of those things that I will get to. I have also noticed that the editor functions significantly differently to the editors I am used to. For example, in Visual Studio and Notepad++ if you place the cursor at the start of a line and press left arrow the cursor is sent to the end of the previous line. In RubyMine nothing happens.
    Haml
    Haml is my favourite view engine. For my .NET work I have been using its non-union Mexican CLR equivalent – nHaml.
    Multiple CSS Classes
    To define a div with more than one CSS class, haml lets you chain them together with a ‘.’, such as:
        .span-6.search_result
          contents of the div go here
    Indent Consistency
    I also learnt tonight that both haml and nhaml complain if you are not consistent about indenting. As a consequence of the move from Notepad++ to RubyMine my haml views ended up with some tab indenting and some space indenting. For the view to render, all of the indents within a view must be consistent.
    Sorting Arrays
    I guessed that Ruby would be able to sort an array alphabetically by a property of the elements, so my first attempt was:
        Application.all.sort {|app| app.name}
    which does not work. You have to supply a comparer (much like .NET). The correct sort is:
        Application.all.sort {|a,b| a.name.downcase <=> b.name.downcase}
    MongoMapper Find by Id
    Since document databases are just fancy key-value stores it is essential to be able to easily search for a document by its id. This functionality is so intrinsic that it seems that the MongoMapper author did not bother to document it. To search by id simply pass the id to the find method:
        Application.find(‘4c19e8facfbfb01794000002’)
    Rails And CoffeeScript
    I am a big fan of CoffeeScript so integrating it into this application is high on my priorities. My first thought was to copy Dr Nic’s strategy. Unfortunately, I did not get past step 1: Install Node.js. I am doing my development on Windows and node is Unix-only. I looked around for a solution but eventually had to concede defeat… for now.
    Quicksearch
    The front page of the application I am building displays a list of applications. When the user types in the search box I want to reduce the list of applications to match their search. A quick googlebing turned up quicksearch, a jQuery plugin. You simply tell quicksearch where to get its input (the search textbox) and the list of items to filter (the divs containing the names of applications) and it just works. Here is the code:
        $('#app_search').quicksearch('.search_result');
    Summary
    I have had a productive evening. The app now displays a list of applications, allows them to be sorted and links through to an application page when an application is selected. Next on the list is to display the set of keyboard shortcuts for an application.

    Read the article

  • Deploying an SSL Application to Windows Azure – The Dark Secret

    - by ToStringTheory
    When working on an application that had been in production for some time, but was about to have a shopping cart added to it, the necessity for SSL certificates came up.  When ordering the certificates through the vendor, the certificate signing request (CSR) was generated through the provider’s (http://register.com) web interface, and within a day, we had our certificate. At first, I thought that the certification process would be the hard part…  Little did I know that my fun was just beginning…
    The Problem
    I’ll be honest, I had never really secured a site before with SSL.  This was a learning experience for me in the first place, but little did I know that I would be learning more than the simple procedure.  I understood a bit about SSL already, the mechanisms in how it works – the secure handshake, CAs, chains, etc…  What I didn’t realize was the importance of the CSR in the whole process.  Apparently, when the CSR is created, a public key is created at the same time, as well as a private key that is stored locally on the PC that generated the request.  When the certificate comes back and you import it back into IIS (assuming you used IIS to generate the CSR), all of the information is combined together and the SSL certificate is added into your store. Since at the time the certificate had been ordered for our site, the selection to use the online interface to generate the CSR was chosen, the certificate came back to us in 5 separate files:
    - A root certificate (*.crt file)
    - An intermediate certificate (*.crt file)
    - Another intermediate certificate (*.crt file)
    - The SSL certificate for our site (*.crt file)
    - The private key for our certificate (*.key file)
    Well, in case you don’t know much about Windows Azure and SSL certificates, the first thing you should learn is that certificates can only be uploaded to Azure if they are in a PFX package – securable by a password.  Also, in the case of our SSL certificate, you need to include the private key with the file.  As you can see, we didn’t have a PFX file to upload. If you don’t get the simple PFX from your hosting provider, but rather the multiple files, you will soon find out that the process has turned from something that should be simple – to one that borders on a circle of hell… Probably between the fifth and seventh somewhere…
    The Solution
    The solution is to take the files that make up the certificate’s chain and key, and combine them into a file that can be imported into your local computer’s store, as well as uploaded to Windows Azure.  I can not take the credit for this information, as I simply researched a while before finding out how to do this.
    1. Download the OpenSSL for Windows toolkit (Win32 OpenSSL v1.0.1c)
    2. Install the OpenSSL for Windows toolkit
    3. Download and move all of your certificate files to an easily accessible location (you'll be pointing to them in the command prompt, so I put them in a subdirectory of the OpenSSL installation)
    4. Open a command prompt
    5. Navigate to the folder where you installed OpenSSL
    6. Run the following command:
        openssl pkcs12 -export -out {outcert.pfx} -inkey {keyfile.key} -in {sslcert.crt} -certfile {ca1.crt} -certfile {ca2.crt}
    From this command, you will get a file, outcert.pfx, with the sum total of your SSL certificate (sslcert.crt), private key {keyfile.key}, and as many CA/chain files as you need {ca1.crt, ca2.crt}. Taking this file, you can then import it into your own IIS in one operation, instead of importing each certificate individually.
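    As a concrete (hypothetical) example, if the five files from the provider were named root.crt, intermediate1.crt, intermediate2.crt, mysite.crt and mysite.key, the command above might be filled in as:

        openssl pkcs12 -export -out mysite.pfx -inkey mysite.key -in mysite.crt -certfile root.crt -certfile intermediate1.crt -certfile intermediate2.crt

    Alternatively, you can concatenate the chain certificates into a single file and pass that with one -certfile option; either way, you will be prompted for the export password that protects the resulting PFX.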
    You can also upload the PFX to Azure, and once you add the SSL certificate links to the cloud project in Visual Studio, you’re good to go!
    Conclusion
    When I first looked around for a solution to this problem, there were not many places online that had the information that I was looking for.  While what I ended up having to do may seem obvious, it isn’t for everyone, and I hope that this can at least help one developer out there solve the problem without hours of work!

    Read the article

  • Folders in SQL Server Data Tools

    - by jamiet
    Recently I have begun a new project in which I am using SQL Server Data Tools (SSDT) and SQL Server Integration Services (SSIS) 2012. Although I have been using SSDT & SSIS fairly extensively while SQL Server 2012 was in the beta phase I usually find that you don’t learn about the capabilities and quirks of new products until you use them on a real project, hence I am hoping I’m going to have a lot of experiences to share on my blog over the coming few weeks. In this first such blog post I want to talk about file and folder organisation in SSDT.
    The predecessor to SSDT is Visual Studio Database Projects. When one created a new Visual Studio Database Project a folder structure was provided with “Schema Objects” and “Scripts” in the root and a series of subfolders for each schema: Apparently a few customers were not too happy with the tool arbitrarily creating lots of folders in Solution Explorer and hence SSDT has gone in completely the opposite direction; now no folders are created and new objects will get created in the root – it is at your discretion where they get moved to: After using SSDT for a few weeks I can safely say that I preferred the older way because I never used Solution Explorer to navigate my schema objects anyway so it didn’t bother me how many folders it created. Having said that the thought of a single long list of files in Solution Explorer without any folders makes me shudder so on this project I have been manually creating folders in which to organise files and I have tried to mimic the old way as much as possible by creating two folders in the root, one for all schema objects and another for Pre/Post deployment scripts: This works fine until different developers start to build their own different subfolder structures; if you are OCD-inclined like me this is going to grate on you eventually and hence you are going to want to move stuff around so that you have consistent folder structures for each schema and (if you have multiple databases) each project. Moreover new files get created with a filename of the object name + “.sql” and often people like to have an extra identifier in the filename to indicate the object type:
    The overall point is this – files and folders in your solution are going to change. Some version control systems (VCSs) don’t take kindly to files being moved around or renamed because they recognise the renamed/moved file simply as a new file and when they do that you lose the revision history which, to my mind, is one of the key benefits of using a VCS in the first place. On this project we have been using Team Foundation Server (TFS) and while it pains me to say it (as I am no great fan of TFS’s version control system) it has proved invaluable when dealing with the SSDT problems that I outlined above because it is integrated right into the Visual Studio IDE. Thus the advice from this blog post is: If you are using SSDT consider using a Visual-Studio-integrated VCS that can easily handle file renames and file moves. I suspect that fans of other VCSs will counter by saying that their VCS weapon of choice can handle renames/file moves quite satisfactorily and if that’s the case…great…let me know about them in the comments. This blog post is not an attempt to make people use one particular VCS, only to make people aware of this issue that might arise when using SSDT. More to come in the coming few weeks! @jamiet

    Read the article

  • Hey Retailers, Are You Ready For The Holiday Season?

    - by Jeri Kelley
    With online holiday spending reaching $35.3 billion in 2011 and American shoppers spending just under $750 on average on their holiday purchases this year, how ready is your business for the 2012 holiday season? Today’s shoppers do not take their purchases lightly.  They are more connected, interact with more resources to make decisions, diligently compare products and services, seek out the best deals, and ask for input from friends and family.   This holiday season, as consumers browse for apparel, tablets, toys, and much more, they will be bombarded with retailer communication - from emails and commercials to countless search engine results and social recommendations.  With a flurry of activity coming at consumers from every channel and competitor, your success this year will rely on communicating a consistent, personalized message no matter where your customers are shopping.  Here are a few ideas to help with your commerce strategy this holiday season:
    CONSISTENCY COUNTS FOR MULTICHANNEL SHOPPERS
    According to a November 2011 study commissioned by Oracle, “Channel Commerce 2011: The Consumer View,” 54% of consumers in the U.S. and Canada regularly employ two or more channels before they make a purchase.  While each channel has its own unique benefit, user profile, and purpose, it’s critical that your shoppers have a consistent core experience wherever they’re looking for information or making a purchase.  Be sure consumers can consistently search and browse the same product information and receive the same promotions online, on their mobile devices, and in-store.
    USE YOUR CUSTOMER’S CONTEXT TO SURFACE RELEVANT CONTENT
    Your Web site is likely the hub of your holiday activity.  According to a Monetate infographic, 39% of shoppers will visit your Web site directly to find out about the best holiday deals.   Use everything you know about your customers from past purchase data to browsing history to provide a relevant experience at every click, and assemble content in a context that entices shoppers to buy online, or influences an offline purchase.
    TAKE ADVANTAGE OF MOBILE BEHAVIOR
    Having a mobile program is no longer a choice.   Armed with smartphones and tablets, consumers now have access to more and more product information and can compare products and prices from anywhere.  In fact, approximately 52% of smartphone users will use their device to research products, redeem coupons and use apps to assist in their holiday gift purchase.  At a minimum, be sure your mobile environment has store information, consistent pricing and promotions, and simple checkout capabilities.
    ARM IN-STORE ASSOCIATES WITH TABLETS
    According to RISNews.com, 31% of retailers plan to begin testing tablets in stores in 2012, 22% have already begun such testing and 6% had fully deployed tablets within stores.   Take advantage of this compelling sales tool to get shoppers interacting with videos, user reviews, how-to guides, side-by-side product comparisons, and specs.  Automatically trigger upsell and cross sell suggestions for store associates to recommend for each product or category, build in alerts for promotions, and allow associates to place orders and check inventory from their tablet.
    WISDOM OF THE CROWDS IS GOOD, BUT WISDOM FROM FRIENDS IS BETTER
    Shoppers who grapple with options are looking for recommendations; they’d rather get advice from friends, and they’re more likely to spend more while doing so.    In fact, according to an infographic by Mr.
Youth, 66% of social media users made a purchase on Black Friday or Cyber Monday as a direct result of social media interactions with brands or family.   This holiday season, be sure you are leveraging your social channels from Facebook to Pinterest to drive consistent promotions and help your brand to become part of the conversation. So, are you ready for the holidays this year?  

    Read the article

  • From DBA to Data Analyst

    - by Denise McInerney
    Cross posted from the PASS Blog There is a lot changing in the data professional’s world these days. More data is being produced and stored. More enterprises are trying to use that data to improve their products and services and understand their customers better. More data platforms and tools seem to be crowding the market. For a traditional DBA this can be a confusing and perhaps unsettling time. It’s also a time that offers great opportunity for career growth. I speak from personal experience. We sometimes refer to the “accidental DBA”, the person who finds herself suddenly responsible for managing the database because she has some other technical skills. While it was not accidental, six months ago I was unexpectedly offered a chance to transition out of my DBA role and become a data analyst. I have since come to view this offer as a gift, though at the time I wasn’t quite sure what to do with it. Throughout my DBA career I’ve gotten support from my PASS friends and colleagues and they were the first ones I turned to for counsel about this new situation. Everyone was encouraging and I received two pieces of valuable advice: first, leverage what I already know about data and second, work to understand the business’ needs. Bringing the power of data to bear to solve business problems is really the heart of the job. The challenge is figuring out how to do that. PASS had been the source of much of my technical training as a DBA, so I naturally started there to begin my Business Intelligence education. Once again the Virtual Chapter webinars, local chapter meetings and SQL Saturdays have been invaluable. I work in a large company where we are fortunate to have some very talented data scientists and analysts. These colleagues have been generous with their time and advice. I also took a statistics class through Coursera where I got a refresher in statistics and an introduction to the R programming language. And that’s not the end of the free resources available to someone wanting to acquire new skills. There are many knowledgeable Business Intelligence and Analytics professionals who teach through their blogs. Every day I can learn something new from one of these experts. Sometimes we plan our next career move and sometimes it just happens. Either way a database professional who follows industry developments and acquires new skills will be better prepared when change comes. Take the opportunity to learn something about the changing data landscape and attend a Business Intelligence, Business Analytics or Big Data Virtual Chapter meeting. And if you are moving into this new world of data consider attending the PASS Business Analytics Conference in April where you can meet and learn from those who are already on that road. It’s been said that “the only thing constant is change.” That’s never been more true for the data professional than it is today. But if you are someone who loves data and grasps its potential you are in the right place at the right time.

    Read the article

  • Azure Mobile Services: lessons learned

    - by svdoever
    When I first started using Azure Mobile Services I thought of it as a nice way to:
    - authenticate my users - login using Twitter, Google, Facebook, Windows Live
    - create tables, and use the client code to create the columns in the table because that is not possible in the Azure Mobile Services UI
    - run some JavaScript code on the table CRUD actions (Insert, Update, Delete, Read)
    - schedule a JavaScript to run every 15 or more minutes
    I had no idea of the magic that was happening inside… where is the data stored? Is it a kind of big table, are relationships between tables possible? Those JavaScripts on the table CRUD actions, is that interpreted, what is that exactly? After working for some time with Azure Mobile Services I became a lot wiser:
    - Those tables are just normal tables in an Azure SQL Server 2012
    - Creating the table columns through client code sucks, at least from my JavaScript code, because the columns are deduced from the sent JSON data, and a datetime field is sent as a string in JSON, so a string type column is created instead of a datetime column
    - You can connect with SQL Management Studio to the Azure SQL Server, and although you can’t manage your columns through the SQL Management Studio UI, it is possible to just run SQL scripts to drop and create tables and indices
    - When you create a table through SQL script, add the table with the same name in the Azure Mobile Services UI to hook it up and be able to access the table through the provided abstraction layer
    - You can also go to the SQL Database through the Azure Mobile Services UI, and from there get into a web based SQL management studio where you can create columns and manage your data
    - The table CRUD scripts and the scheduler scripts are full blown node.js scripts, introducing a lot of power with great performance
    - The web based script editor is really powerful, I do most of my editing currently in the editor which has syntax highlighting and code completion. While editing the code JsHint is used for script validation.
    - The documentation on Azure Mobile Services is… suboptimal. It is such a pity that there is no way to comment on it so the community could fill in the missing holes, like which node modules are already loaded, and which modules are available on Azure Mobile Services.
    Soon I was hacking away on Azure Mobile Services, creating my own database tables through script, and abusing the read script of an empty table named query to implement my own set of “services”. The latest updates to Azure Mobile Services described in the following posts added some great new features like creating web APIs, use of shared code from your scripts, command line tools for managing Azure Mobile Services (upload and download scripts for example), support for node modules and git support:
    http://weblogs.asp.net/scottgu/archive/2013/06/14/windows-azure-major-updates-for-mobile-backend-development.aspx
    http://blogs.msdn.com/b/carlosfigueira/archive/2013/06/14/custom-apis-in-azure-mobile-services.aspx
    http://blogs.msdn.com/b/carlosfigueira/archive/2013/06/19/custom-api-in-azure-mobile-services-client-sdks.aspx
    In the meantime I rewrote all my “service-like” table scripts to API scripts, which works like a breeze. A bad thing with the current state of Azure Mobile Services is that the git support is not working if you are a co-administrator of your Azure subscription, and not an administrator (as in my case).
Another bad thing is that Cross-Origin Resource Sharing (CORS) is not supported for the API yet, so no go yet from the browser client for APIs, which is my case. See http://social.msdn.microsoft.com/Forums/windowsazure/en-US/2b79c5ea-d187-4c2b-823a-3f3e0559829d/known-limitations-for-source-control-and-custom-api-features for more on these and other limitations. In his talk at Build 2013 Josh Twist showed that there is a work-around for accessing shared script code from the table scripts as well (another limitation mentioned in the post above). I could not find that code in the Votabl2 code example from the presentation at https://github.com/joshtwist/votabl2, but we can grab it from the presentation when it comes online on Channel9. By the way: you can always express your needs and ideas at http://mobileservices.uservoice.com, that’s the place they are listening to (I hope!).

    Read the article

  • Visual Studio Extensions

    - by Scott Dorman
    Originally posted on: http://geekswithblogs.net/sdorman/archive/2013/10/18/visual-studio-extensions.aspx. As a product, Visual Studio has been around for a long time. In fact, it’s been 18 years since the first Visual Studio product was launched. In that time, there have been some major changes but perhaps the most important (or at least influential) changes for the course of the product have been in the last few years. While we can argue over what was and wasn’t an important change or what has and hasn’t changed, I want to talk about what I think is the single most important change Microsoft has made to Visual Studio. Specifically, I’m referring to the Visual Studio Gallery (first introduced in Visual Studio 2010) and the ability for third-parties to easily write extensions which can add new functionality to Visual Studio or even change existing functionality. I know Visual Studio had this ability before the Gallery existed, but it was expensive (both from a financial and development resource) perspective for a company or individual to write such an extension. The Visual Studio Gallery changed all of that. As of today, there are over 4000 items in the Gallery. Microsoft itself has over 100 items in the Gallery and more are added all of the time. Why is this such an important feature? Simply put, it allows third-parties (companies such as JetBrains, Telerik, Red Gate, Devart, and DevExpress, just to name a few) to provide enhanced developer productivity experiences directly within the product by providing new functionality or changing existing functionality. However, there is an even more important function that it serves. It also allows Microsoft to do the same. By providing extensions which add new functionality or change existing functionality, Microsoft is not only able to rapidly innovate on new features and changes but to also get those changes into the hands of developers world-wide for feedback. The end result is that these extensions become very robust and often end up becoming part of a later product release. An excellent example of this is the new CodeLens feature of Visual Studio 2013. This is, perhaps, the single most important developer productivity enhancement released in the last decade and already has huge potential. As you can see, out of the box CodeLens supports showing you information about references, unit tests and TFS history.   Fortunately, CodeLens is also accessible to Visual Studio extensions, and Microsoft DevLabs has already written such an extension to show code “health.” This extension shows different code metrics to help make sure your code is maintainable. At this point, you may have already asked yourself, “With over 4000 extensions, how do I find ones that are good?” That’s a really good question. Fortunately, the Visual Studio Gallery has a ratings system in place, which definitely helps but that’s still a lot of extensions to look through. To that end, here is my personal list of favorite extensions. This is something I started back when Visual Studio 2010 was first released, but so much has changed since then that I thought it would be good to provide an updated list for Visual Studio 2013. These are extensions that I have installed and use on a regular basis as a developer that I find indispensible. This list is in no particular order.
    - NuGet Package Manager for Visual Studio 2013
    - Microsoft CodeLens Code Health Indicator
    - Visual Studio Spell Checker
    - Indent Guides
    - Web Essentials 2013
    - VSCommands for Visual Studio 2013
    - Productivity Power Tools (right now this is only for Visual Studio 2012, but it should be updated to support Visual Studio 2013.)
    Everyone has their own set of favorites, so mine is probably not going to match yours. If there is an extension that you really like, feel free to leave me a comment!

    Read the article

  • Sneak Peak: Social Developer Program at JavaOne

    - by Mike Stiles
    By guest blogger Roland Smart We're just days away from what is gunning to be the most exciting installment of OpenWorld to date, so how about an exciting sneak peek at the very first Social Developer Program? If your first thought is, "What's a social developer?" you're not alone. It’s an emerging term and one we think will gain prominence as social experiences become more prevalent in enterprise applications. For those who keep an eye on the ever-evolving Facebook platform, you'll recall that they recently rebranded their PDC (preferred developer consultant) group as the PMD (preferred marketing developer), signaling the importance of development resources inside the marketing organization to unlock the potential of social. The marketing developer they're referring to could be considered a social developer in a broader context. While it's true social has really blossomed in the marketing context and CMOs are winning more and more technical resources, social is starting to work its way more deeply into the enterprise with the help of developers that work outside marketing. Developers, like the rest of us, have fallen in "like" with social functionality and are starting to imagine how social can transform enterprise applications in the way it has consumer-facing experiences. The thesis of my presentation is that social developers will take many pages from the marketing playbook as they apply social inside the enterprise. To support this argument, let's walk through a range of enterprise applications and explore how consumer-facing social experiences might be interpreted in this context. Here's one example of how a social experience could be integrated into a sales enablement application. As a marketer, I spend a great deal of time collaborating with my sales colleagues, so I have good insight into their working process. While at Involver, we grew our sales team quickly, and it became evident some of our processes broke with scale. For example, we used to have weekly team meetings at which we'd discuss what was working and what wasn't from a messaging perspective. One aspect of these sessions focused on "objections" and "responses," where the salespeople would walk through common objections to purchasing and share appropriate responses. We tried to map each context to best answers and we'd capture these on a wiki page. As our team grew, however, participation at scale just wasn't tenable, and our wiki pages quickly lost their freshness. Imagine giving salespeople a place where they could submit common objections and responses for their colleagues to see, sort, comment on, and vote on. What you'd get is an up-to-date and relevant repository of information. And, if you supported an application like this with a social graph, it would be possible to make good recommendations to individual sales people about the objections they'd likely hear based on vertical, product, region or other graph data. Taking it even further, you could build in a badging/game element to reward those salespeople who participate the most. Both these examples are based on proven models at work inside consumer-facing applications. If you want to learn about how HR, Operations, Product Development and Customer Support can leverage social experiences, you’re welcome to join us at JavaOne or join our Social Developer Community to find some of the presentations after OpenWorld.

    Read the article

  • How to prepare for a programming competition? Graphs, Stacks, Trees, oh my! [closed]

    - by Simucal
    Last semester I attended ACM's (Association for Computing Machinery) bi-annual programming competition at a local University. My University sent 2 teams of 3 people and we competed amongst other schools in the Midwest. We got our butts kicked. You are given a packet with about 11 problems (1 problem per page) and you have 4 hours to solve as many as you can. They run the program you submit against a set of data and your output must match theirs exactly. In fact, the judging is automated for the most part. In any case, I went there fairly confident in my programming skills and I left there feeling drained and weak. It was a terribly humbling experience. In 4 hours my team of 3 people completed only one of the problems. The top team completed 4 of them and took 1st place. The problems they asked were like no problems I have ever had to answer before. I later learned that in order to solve some of them effectively you have to use graphs/graph algorithms, trees, and stacks. Some of them simply called for "greedy" algorithms. My question is, how can I better prepare for this semester's programming competition so I don't leave there feeling like a complete moron? What tips do you have for me to be able to answer these problems that involve graphs, trees, and various "well known" algorithms? How can I easily identify the algorithm we should implement for a given problem? I have yet to take Algorithm Design in school so I just feel a little out of my element. Here are some examples of the questions asked at the competitions: ACM Problem Sets Update: Just wanted to update this since the latest competition is over. My team placed 1st for our small region (about 6-7 universities with between 1 and 5 teams per school) and ~15th for the Midwest! So, it is a marked improvement over last year's performance for sure. We also had no graduate students on our team and after reviewing the rules we found out that many teams had several! So, that would be a pretty big advantage in my opinion. Problems this semester ranged from about 1-2 "easy" problems (i.e., bit manipulation, string manipulation) to hard (graph problems involving fairly complex math and network flow problems). We were able to solve 4 problems in our 5 hours. Just wanted to thank everyone for the resources they provided here; we used them for our weekly team practices and they definitely helped! Some quick tips that I have that aren't suggested below: When you are seated at your computer before the competition starts, quickly type out various data structures that you might need that you won't have access to in your language's libraries. I typed out a graph data structure complete with Floyd-Warshall and Dijkstra's algorithms before the competition began. We ended up using it in the 2nd problem we solved, and this is the main reason we solved that problem before anyone else in the Midwest. We had it ready to go from the beginning. Similarly, type out the code to read in a file since this will be required for every problem. Save this answer "template" someplace so you can quickly copy/paste it into your IDE at the beginning of each problem (a sketch of such a template follows this post). There are no rules against programming anything before the competition starts, so get any boilerplate code out of the way. We found it useful to have one person who is on permanent whiteboard duty. This is usually the person who is best at math and at working out solutions, to get a head start on future problems. One person is on permanent programming duty: your fastest/most skilled "programmer" (the one most familiar with the language). This will also save debugging time. The last person has several roles: assessing the packet for the next "easiest" problem, helping the person at the whiteboard work out solutions, and helping the person programming work out bugs/issues. This person needs to be flexible and able to switch between roles easily.
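    Below is a minimal sketch of the kind of pre-typed template the answer describes: input-reading boilerplate plus graph shortest-path routines ready to paste in. The post does not say which language the team used, so this is written in Python purely for illustration, and the "n m, then m lines of u v w" input format in the main block is a hypothetical stand-in for whatever a given problem specifies.

```python
import sys
import heapq

def read_tokens():
    # Contest I/O boilerplate: read all of stdin once and split into tokens.
    return sys.stdin.read().split()

def dijkstra(adj, src):
    # Single-source shortest paths; adj: {node: [(neighbor, weight), ...]}.
    dist = {src: 0}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def floyd_warshall(n, edges):
    # All-pairs shortest paths over nodes 0..n-1; edges: [(u, v, w), ...].
    INF = float("inf")
    d = [[0 if i == j else INF for j in range(n)] for i in range(n)]
    for u, v, w in edges:
        d[u][v] = min(d[u][v], w)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d

if __name__ == "__main__":
    # Hypothetical input format: "n m" followed by m lines of "u v w".
    toks = iter(read_tokens())
    n, m = int(next(toks)), int(next(toks))
    edges = [(int(next(toks)), int(next(toks)), int(next(toks))) for _ in range(m)]
    adj = {}
    for u, v, w in edges:
        adj.setdefault(u, []).append((v, w))
    print(dijkstra(adj, 0))
```

    The value of the template is exactly what the answer says: the algorithms are typed, tested, and ready before the clock starts, so during the contest you only adapt the input parsing to each problem.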

    Read the article

  • Ubuntu 12.04 LTS initramfs-tools dependency issue

    - by Mike
    I know this has been asked several times, but each issue and resolution seems different. I've tried almost everything I could think of, but I can't fix this. I have a VM (VMware I think) running 12.04.03 LTS which has stuck dependencies. The VM is on a rented host, running a live system so I don't want to break it (further). uname -a Linux support 3.5.0-36-generic #57~precise1-Ubuntu SMP Thu Jun 20 18:21:09 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux Some more: sudo apt-get update [sudo] password for tracker: Reading package lists... Done Building dependency tree Reading state information... Done You might want to run ‘apt-get -f install’ to correct these. The following packages have unmet dependencies. initramfs-tools : Depends: initramfs-tools-bin (< 0.99ubuntu13.1.1~) but 0.99ubuntu13.3 is installed E: Unmet dependencies. Try using -f. sudo apt-get install -f Reading package lists... Done Building dependency tree Reading state information... Done Correcting dependencies... Done The following extra packages will be installed: initramfs-tools The following packages will be upgraded: initramfs-tools 1 upgraded, 0 newly installed, 0 to remove and 2 not upgraded. 2 not fully installed or removed. Need to get 0 B/50.3 kB of archives. After this operation, 0 B of additional disk space will be used. Do you want to continue [Y/n]? Y dpkg: dependency problems prevent configuration of initramfs-tools: initramfs-tools depends on initramfs-tools-bin (<< 0.99ubuntu13.1.1~); however: Version of initramfs-tools-bin on system is 0.99ubuntu13.3. dpkg: error processing initramfs-tools (--configure): dependency problems - leaving unconfigured No apport report written because the error message indicates it's a follow-up error from a previous failure. dpkg: dependency problems prevent configuration of apparmor: apparmor depends on initramfs-tools; however: Package initramfs-tools is not configured yet. dpkg: error processing apparmor (--configure): dependency problems - leaving unconfigured No apport report written because the error message indicates it's a follow-up error from a previous failure. Errors were encountered while processing: initramfs-tools apparmor E: Sub-process /usr/bin/dpkg returned an error code (1) If I look at the policy behind initramfs-tools / bin I get: apt-cache policy initramfs-tools initramfs-tools: Installed: 0.99ubuntu13.1 Candidate: 0.99ubuntu13.3 Version table: 0.99ubuntu13.3 0 500 http://gb.archive.ubuntu.com/ubuntu/ precise-updates/main amd64 Packages *** 0.99ubuntu13.1 0 100 /var/lib/dpkg/status 0.99ubuntu13 0 500 http://gb.archive.ubuntu.com/ubuntu/ precise/main amd64 Packages apt-cache policy initramfs-tools-bin initramfs-tools-bin: Installed: 0.99ubuntu13.3 Candidate: 0.99ubuntu13.3 Version table: *** 0.99ubuntu13.3 0 500 http://gb.archive.ubuntu.com/ubuntu/ precise-updates/main amd64 Packages 100 /var/lib/dpkg/status 0.99ubuntu13 0 500 http://gb.archive.ubuntu.com/ubuntu/ precise/main amd64 Packages So the issue seems to be I have 0.99ubuntu13.3 for initramfs-tools-bin yet 0.99ubuntu13.1 for initramfs-tools, and can't upgrade to 0.99ubuntu13.3. I've performed apt-get clean/autoclean/install -f/upgrade -f many times but they won't resolve. I can think of only 2 other 'solutions': Edit the dpkg dependency list to trick it into doing the installation with a broken dependency. This seems very dodgy and it would be a last resort Downgrade both initramfs-tools and initramfs-tools-bin to 0.99ubuntu13 from the precise/main sources and hope that would get them in step. 
    However, I'm not sure if this will be possible, or whether it would introduce more issues. I'm not sure how this situation arose in the first place. /boot was 96% full; it's now 56% full (it's tiny - 64MB ... this is what I got from the hosting company). Can anyone offer advice please?
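    As a small aside for anyone diagnosing the same symptom: the skew the poster shows with apt-cache policy (initramfs-tools held at 0.99ubuntu13.1 while initramfs-tools-bin is already at 0.99ubuntu13.3) can be summarized programmatically with the python-apt bindings. The sketch below only reports the mismatch, it does not change anything; it assumes the python3-apt package is available on the system. One common way out of this kind of skew, if the poster has not already tried it, is to ask apt for both packages at the same explicit version (apt-get install pkg=version), though whether that is safe on this particular live host is a judgment call.

```python
import apt  # python-apt bindings; typically provided by the python3-apt package

cache = apt.Cache()
for name in ("initramfs-tools", "initramfs-tools-bin"):
    pkg = cache[name]
    installed = pkg.installed.version if pkg.installed else "(not installed)"
    candidate = pkg.candidate.version if pkg.candidate else "(no candidate)"
    marker = "  <-- version skew" if installed != candidate else ""
    print(f"{name}: installed={installed}, candidate={candidate}{marker}")
```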

    Read the article

  • What Counts For a DBA: Replaceable

    - by Louis Davidson
    Replaceable is what every employee in every company instinctively strives not to be. Yet, if you're an irreplaceable DBA, meaning that the company couldn't find someone else who could do what you do, then you're not doing a great job. A good DBA is replaceable. I imagine some of you are already reaching for the lighter fluid, about to set the comments section ablaze, but before you destroy a perfectly good Commodore 64, read on… Everyone is replaceable, ultimately. Anyone, anywhere, in any job, could be sitting at their desk reading this, blissfully unaware that this is to be their last day at work. Morbidly, you could be about to take your terminal breath. Ideally, it will be because another company suddenly offered you a truck full of money to take a new job, forcing you to bid a regretful farewell to your current employer (with barely a "so long suckers!" left wafting in the air as you zip out of the office like Wile E. Coyote wearing two pairs of rocket skates). I've often wondered what it would be like to be present at the meeting where your former work colleagues discuss your potential replacement. It is perhaps only at this point, as they struggle with the question "What kind of person do we need to replace old Wile?" that you would know your true worth in their eyes. Of course, this presupposes you need replacing. I've known one or two people whose absence we adequately compensated with a small rock, to keep their old chair from rolling down a slight incline in the floor. On another occasion, we bought a noise-making machine that frequently attracted attention to itself with unpleasant sounds, but never contributed anything worthwhile. These things never actually happened, of course, but you take my point: don't confuse replaceable with expendable. Likewise, if the term "trained seal" comes up, someone they can teach to follow basic instructions and push buttons in the right order, then the replacement discussion is going to be over quickly. What, however, if your colleagues decide they'll need a super-specialist to replace you? That's a good thing, right? Well, usually, in my experience, no, it is not. It often indicates that no one really knows what you do, or how. A typical example is the "senior" DBA who built a system just before 16-bit computing became all the rage and then settled into a long career managing it. Such systems are often central to the company's operations, and the DBA is very skilled at what they do, but almost impossible to replace, because the system hasn't evolved and runs on processes and routines that others no longer understand or recognize. The only thing you really want to hear, at your replacement discussion, is that they need someone skilled at the fundamentals and adaptable. This means that the person they need understands that their goal is to be an excellent DBA, not a specialist in whatever the-heck the company does. Someone who understands the new versions of SQL Server and can adapt the company's systems to the way things work today, who uses industry standard methods that any other qualified DBA/programmer can understand. More importantly, this person rarely wants to get "pigeon-holed" and so documents and shares the specialized knowledge and responsibilities with their teammates. Being replaceable doesn't mean being "a dime a dozen".
The company might need four people to take your place due to the depth of your skills, but still, they could find those replacements and those replacements could step right in using techniques that any decent DBA should know. It is a tough question to contemplate, but take some time to think about the sort of person that your colleagues would seek to replace you. If you think they would go looking for a “super-specialist” then consider urgently how you can diversify and share your knowledge, and start documenting all the processes you know as if today were your last day, because who knows, it just might be.

    Read the article

  • Day 5 - Tada! My Game Menu Screen Graphics

    - by dapostolov
    So, tonight I took some time to mash up some graphics for my game menu screen. My artistic talent sucks...but here goes nothing...voila, my menu screen!! The Menu Screen The screen above is displaying 4 sprites, even though it looks like maybe 7... I guess one of the first things for me to test in the future is ... is it more memory efficient (and better for frame rate) to draw one big background image OR to paint the screen black and place each sprite in set locations? (A quick way to measure this is sketched below.) To display the 4 sprites above, I borrowed my code from yesterday ... I know, tacky, but...I wanted to see it, feel it. Do you feel it? FEEL IT! (Homer voice & shakes fist) Note: the menu items won't scale properly as it stands with this code; well, pretty much they do nothing except look pretty... Paint.Net & Google Fun So how did I create that image above? Well, to create the background and 3 menu items I used Paint.Net. Basically, I scoured Google images for: a stone doorway, a stone pillar, an old book, a wizard's hat, and...that's pretty much it! I'll let you type in those searches and see if you can locate the images I used. I know, bad developer...but I figured since I modified the images considerably it doesn't count...well, for a personal project it shouldn't count...*shrug* Anyhow, I extracted each key asset I wanted from each image and applied lots of matting, blurring, color changes, glow effects and such. Then, using my vivid imagination, I placed/composed each of the layered assets into the mashed-up "scene" above. Pretty cool, eh? Hey, did you know the cool mist effect is actually a fire rendition in Paint.Net? I set it to black & white with opacity set next to nothing. I'm also very proud of the yellow "light" in the stone doorway. I drew that in and then applied Gaussian blur to it to give it the effect of light creeping out around the door and into the room...heheh. So did I achieve the dark, mysterious ritual feel as I stated in my design doc? I think I had a great stab at it! Maybe down the road I can get a real artist to crank out some quality graphics for the game... =) So, What's Next? Well, I don't have that animated brazier yet...however, I thought it would be even cooler if I could get that door pulsing that yellow light, and it would be extremely cool to have the smoke/mist moving across the screen! Make the creative ideas stop!! (clutches head) haha! I'm having great fun working on this project =) I recommend others give something like this a try; it's really fulfilling. OK. Tomorrow... I think I'm going to start creating some game/menu objects as per the design doc, maybe even get a custom mouse cursor up on the screen and handle a couple of mouse events, and lastly, maybe a feature to toggle a framerate display... D.
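    The "one big background vs. paint black and blit sprites" question raised above is easy enough to measure empirically. The original project is XNA/C#, so the sketch below is only an illustration of the comparison in Python with pygame; the asset filenames and positions are hypothetical, and real numbers will of course depend on the hardware and image sizes.

```python
import pygame

# Hypothetical menu assets and their on-screen positions.
SPRITES = {"door.png": (300, 80), "pillar.png": (60, 100),
           "book.png": (500, 420), "hat.png": (520, 380)}

def run(draw_mode, seconds=5):
    pygame.init()
    screen = pygame.display.set_mode((800, 600))
    if draw_mode == "background":
        background = pygame.image.load("menu_background.png").convert()
    else:
        images = {name: pygame.image.load(name).convert_alpha() for name in SPRITES}
    frames = 0
    end = pygame.time.get_ticks() + seconds * 1000
    while pygame.time.get_ticks() < end:
        pygame.event.pump()
        if draw_mode == "background":
            screen.blit(background, (0, 0))        # one large pre-composed image
        else:
            screen.fill((0, 0, 0))                 # paint the screen black...
            for name, pos in SPRITES.items():
                screen.blit(images[name], pos)     # ...then place each sprite
        pygame.display.flip()
        frames += 1
    pygame.quit()
    return frames / seconds

if __name__ == "__main__":
    print("background:", run("background"), "fps")
    print("sprites:   ", run("sprites"), "fps")
```

    The same idea carries over to XNA: draw each variant in a tight loop for a fixed time and compare the frame counts, rather than guessing.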

    Read the article

  • Two Candidates + One Job = Two Different Outcomes

    - by david.talamelli
    Recruiters have always headhunted (sidenote: I do not like this word; in general I think the type of people who use the phrase "headhunting" are the ones who are trying to sound more important than they likely are). Any serious recruiter engages in direct recruiting activity; it is part and parcel of the business, not something unique. With the uptake of social media over the past 4-5 years, we have seen an increase in the number of recruiters proactively reaching out to people about job opportunities. We have also seen this activity increase across all levels of hire, from help desk roles to C-level executives. While getting approached about a role can be a nice boost to a person's ego, do not let it give you an inflated sense of entitlement. The way people handle themselves during these calls and subsequent interviews will have a large impact on their potential to land that job. Last week I spoke to two very different candidates, both about the same position and both with very different outcomes. On paper, Candidate #1 looked fantastic; they ticked many of the boxes that we were looking for. The person is working at a global IT company in a role similar to the one we were hiring for, but not as senior. This role would have been the perfect step for the person to get involved in more complex work. Candidate #2 had less polished IT experience, ticked some of the boxes we were looking for, and on paper was not as close a fit as Candidate #1. It seemed like I was comparing apples and oranges. After speaking to both candidates it turned out I was comparing apples and oranges, except the person better suited for our role was not the one I was expecting. The first candidate on paper looked great – they had the experience we were looking for and appeared to be just right for the role, but after talking to them, they gave me the impression that they thought the world owed them. The impression I was left with was that they did not equate success with hard work; they seemed more interested in "what is in it for me". Rather than having a proper conversation with me, I was often cut off and asked to hurry it up when explaining our business, what we are doing, etc. This person seemed more interested in the job title and money than in thinking about ways to make the role successful. Candidate #2, who had limited experience, made up for any perceived lack of experience, and then some, with a demonstrated motivation to succeed and do the things needed to make that happen. Candidate #2 made a great first impression: they did not seem afraid of hard work and demonstrated a "team player" attitude. In talking to them, they kept me engaged, listened, and asked thoughtful questions that made me think this is the type of person who creates their own luck and who would thrive in a place like Oracle. Skills, capabilities, experience and a good resume can certainly get your foot in the door, but the wrong attitude or approach to work can close those opportunities just as easily. On the other hand, hard work, effort and a genuine work ethic may help open those doors that would otherwise be closed to you. A resume with all the credentials gets you in the front door, but that is just the beginning of the process. It is not how we start the race that is important, it's how things end that matters most.

    Read the article

  • Critical Threads Optimization

    - by Rafael Vanoni
    Background One of the more common issues we've been seeing in the field is the growing difficulty in optimizing performance of multi-threaded applications. A good portion of this difficulty is due to the increasing complexity of modern processors that present various degrees of sharing relationships between hardware components. Take any current CMT processor and you'll find any number of CPUs sharing execution pipelines, floating point units, caches, etc. Consequently, applying the traditional recipe of one software thread for each CPU will have varying degrees of success, according to the layout of the underlying hardware. On top of this increasing complexity we've also seen processors with features that aim at dynamically resourcing software threads according to their utilization. Intel's Turbo Boost allows processors to increase their operating frequency if there is enough thermal headroom available and the processor isn't fully utilized. More recently, the SPARC T4 processor introduced dynamic threading, allowing each core to dynamically allocate more resources to its active CPUs. Both cases are in essence recognizing that current processors will be running a wide mix of workloads: some will be designed for throughput, others for low latency. The hardware is providing mechanisms to dynamically resource threads according to their runtime behavior. We're very aware of these challenges in Solaris, and have been working to provide the best out-of-box performance while providing mechanisms to further optimize applications when necessary. The Critical Threads Optimization was introduced in Solaris 10 8/11 and Solaris 11 as one such mechanism that allows customers to both address issues caused by contention over shared hardware resources and explicitly take advantage of features such as T4's dynamic threading. What it is The basic idea is to allow performance-critical threads to execute with more exclusive access to hardware resources. For example, when deploying an application that implements a producer/consumer model, it'll likely be advantageous to give the producer more exclusive access to the hardware instead of having it competing for resources with all the consumers. In the case of a T4-based system, we may want to have a producer running by itself on a single core and create one consumer for each of the remaining CPUs. With the Critical Threads Optimization we're extending the semantics of scheduling priorities (which thread should run first) to include priority over shared resources (which thread should have more "space"). Now the scheduler will not only run higher priority threads first: it will also provide them with more exclusive access to hardware resources if they are available. How does it work? Using the previous example in Solaris 11, all you'd have to do would be to place the producer in the Fixed Priority (FX) scheduling class at priority 60, or in the Real Time (RT) class at any priority, and Solaris will try to give it more "hardware space". On both Solaris 10 8/11 and Solaris 11 this can be achieved through the existing priocntl(1,2) and priocntlset(2) interfaces (a small example follows at the end of this post). If your application already assigns these priorities to performance-critical threads, there's no additional step you need to take. One important aspect of this optimization is that it requires some level of idleness in the system, either as a result of sizing the application beforehand or through periods of transient idleness during runtime.
    If the system is fully committed, the scheduler will put all the available CPUs to work. Best practices: If you're an application developer, we encourage you to look into assigning the right priorities to the different threads in your application. Solaris provides different scheduling classes (Time Share, Interactive, Fair Share, Fixed Priority and Real Time) that offer different policies and behaviors. It is not always simple to figure out which set of threads is critical to the performance of a workload, and it may not always be feasible to take advantage of this optimization, but we believe that this can be correctly (and safely) done during development. Overall, the out-of-box performance in Solaris should meet your workload's requirements. If you are looking for that extra bit of performance, then the Critical Threads Optimization may be what you're looking for.
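    The priocntl step described above can also be scripted around a launcher. Below is a minimal Python sketch, purely illustrative: the post itself only points at the priocntl(1,2)/priocntlset(2) interfaces, the producer command is hypothetical, the flag spelling follows the conventional priocntl(1) usage and should be verified against the man page on your release, and elevating a process into the FX class at priority 60 generally requires appropriate privileges.

```python
import subprocess
import sys

def start_critical(cmd):
    # Launch the performance-critical (producer) process...
    proc = subprocess.Popen(cmd)
    # ...then ask Solaris to move it into the Fixed Priority class at priority 60,
    # the threshold the post describes for the critical threads optimization.
    # -s set parameters, -c class, -m upper limit, -p priority, -i pid <pid>.
    subprocess.run(["priocntl", "-s", "-c", "FX", "-m", "60", "-p", "60",
                    "-i", "pid", str(proc.pid)], check=True)
    return proc

if __name__ == "__main__":
    # Hypothetical usage: python launch_critical.py ./producer --rate 1000
    producer = start_critical(sys.argv[1:])
    producer.wait()
```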

    Read the article

  • Is Cloud Security Holding Back Social SaaS?

    - by Mike Stiles
    The true promise of social data co-mingling with enterprise data to influence and inform social marketing (all marketing really) lives in cloud computing. The cloud brings processing power, services, speed and cost savings the likes of which few organizations could ever put into action on their own. So why wouldn’t anyone jump into SaaS (Software as a Service) with both feet? Cloud security. Being concerned about security is proper and healthy. That just means you’re a responsible operator. Whether it’s protecting your customers’ data or trying to stay off the radar of regulatory agencies, you have plenty of reasons to make sure you’re as protected from hacking, theft and loss as you can possibly be. But you also have plenty of reasons to not let security concerns freeze you in your tracks, preventing you from innovating, moving the socially-enabled enterprise forward, and keeping up with competitors who may not be as skittish regarding SaaS technology adoption. Over half of organizations are transferring sensitive or confidential data to the cloud, an increase of 10% over last year. With the roles and responsibilities of CMO’s, CIO’s and other C’s changing, the first thing you should probably determine is who should take point on analyzing cloud software options, providers, and policies. An oft-quoted Ponemon Institute study found 36% of businesses don’t have a cloud security policy at all. So that’s as good a place to start as any. What applications and data are you comfortable housing in the cloud? Do you have a classification system for data that clearly spells out where data types can go and how they can be used? Who, both internally and at the cloud provider, will function as admins? What are the different levels of admin clearance? Will your security policies and procedures sync up with those of your cloud provider? The key is verifiable trust. Trust in cloud security is actually going up. 1/3 of organizations polled say it’s the cloud provider who should be responsible for data protection. And when you look specifically at SaaS providers, that expectation goes up to 60%. 57% “strongly agree” or “agree” there’s more confidence in cloud providers’ ability to protect data. In fact, some businesses bypass the “verifiable” part of verifiable trust. Just over half have no idea what their cloud provider does to protect data. And yet, according to the “Private Cloud Vision vs. Reality” InformationWeek Report, 82% of organizations say security/data privacy are one of the main reasons they’re still holding the public cloud at arm’s length. That’s going to be a tough position to maintain, because just as social is rapidly changing the face of marketing, big data is rapidly changing the face of enterprise IT. Netflix, who’s particularly big on the benefits of the cloud, says, "We're systematically disassembling the corporate IT components." An enterprise can never realize the full power of big data, nor get the full potential value out of it, if it’s unwilling to enable the integrations and dataset connections necessary in the cloud. Because integration is called for to reduce fragmentation, a standardized platform makes a lot of sense. With multiple components crafted to work together, you’re maximizing scalability, optimization, cost effectiveness, and yes security and identity management benefits. You can see how the incentive is there for cloud companies to develop and add ever-improving security features, making cloud computing an eventual far safer bet than traditional IT. 
    @mikestiles Photo: stock.xchng

    Read the article

  • Get the Picture: Pinterest for Marketers

    - by Mike Stiles
    When trying to determine on which networks to conduct social marketing, the usual suspects immediately rise to the top; Facebook & Twitter, then LinkedIn (especially if you're B2B), then maybe some Google Plus to hedge SEO bets.  So at what juncture do brands get excited about Pinterest? Pinterest has been easy for marketers to de-prioritize thanks to the perception that its usage is so dominated by women. Um, what's wrong with that? Women make an estimated 85% of all consumer purchases. So if there are indeed over 30 million US women active on it monthly, and they do 92% of the pinning, and 84% are still active on it after 4 years, when did an audience of highly engaged, very likely sales conversions become low priority? Okay, if you're a tech B2B SaaS product like the Oracle Social Cloud, Pinterest may not be where you focus. But if you operate in the top Pinterest categories, which are truly far-reaching, it's time to take note of Pinterest's performance to date: 40.1 million monthly users in the US (eMarketer). Over 30 billion pins, half of which were pinned in the last 6 months. (Big momentum) 75% of usage is on their mobile app. (In solid shape for the mobile migration) Pinterest sharing grew 58% in 2013, beating Facebook, Twitter, or LinkedIn. (ShareThis) Pinterest is the 3rd most popular sharing platform overall (over email), with 48% of all sharing on tablets. Users referred by Pinterest are 10% more likely to buy on e-commerce sites and tend to spend twice that of users coming from Facebook. (Shopify) To be fair, brands haven't had any paid marketing opportunities on that platform…until recently. Users are seeing Promoted Pins in both category and search feeds from rollout brands like Gap, ABC Family, Ziploc, and Nestle. Are the paid pins annoying users? It seems that, more so than on other social networks, they're fitting right into the intended user experience and being accepted, getting almost as many click-throughs as user pins. New York Magazine's Kevin Roose laid it out succinctly: Pinterest offers a place that's image-centric, search-friendly, makes things easy to purchase, makes things easy to share, and puts users in an aspirational mood to buy. Pinterest is very confident in the value of that combo and that audience, with CPM rates 5x that of the most expensive Facebook ad, plus (at least for now) required spending commitments and required pin review by Pinterest for quality. The latest developments: a continued move toward search and discovery with enhancements like Guided Search to help you hone in on what interests you, Custom Categories, and the rumored Visual Search that stands to be a liberation from text. And most recently, Pinterest has opened up its API so brands can get access to deeper insights into the best search terms and categories in which to play ball, as well as what kinds of pins stand to perform best in those areas. As we learned in our rundown this week of Social Media Examiner's Social Media Marketing Industry Report, around 50% of marketers specifically intend to up their use of Pinterest. If you're a big believer in fishing where the fish are, that's probably an efficient position to take. @mikestiles @oraclesocial Photo: Adam Lambert_Gorwyn, freeimages.com

    Read the article

  • Custom Templates: Using user exits

    - by Anthony Shorten
    One of the features of Oracle Utilities Application Framework V4.1 is the ability to use templates and user exits to extend the base configuration files. The configuration files used by the product are based upon a set of templates shipped with the product. When the configureEnv utility asks for configuration settings, they are stored in a configuration file, ENVIRON.INI, which outlines the environment settings. These settings are then used by the initialSetup utility to populate the various configuration files used by the product, using templates located in the templates directory of the installation. Now, whilst the majority of the installations at any site are non-production and the templates provided are generally adequate for that need, there are circumstances where extension of templates is needed to take advantage of more advanced facilities (such as advanced security and environment settings). The issue then becomes that if you alter the configuration files manually (directly or indirectly) then you may lose all your custom settings the next time you run initialSetup. To counter this, customers can either override templates with their own template, or use the user exits we now provide in the templates to add fragments of configuration unique to that part of the configuration file. The latter means that the base template is still used but additions are included to provide the extensions. The provision of custom templates is supported, but as soon as you use a custom template you are then responsible for reflecting any changes we put in the base template over time. Not a big task, but annoying if you have to do it for multiple copies of the product. I prefer to use user exits as they seem to represent the least-effort solution. The way to find the user exits available is to either read the Server Administration Guide that comes with your product or look at individual templates for lines of the form: #ouaf_user_exit <user exit name>, where <user exit name> is the name of the user exit (a small script that lists these markers appears at the end of this post). User exits are not always present but are in places that we feel are the most likely to be changed. If a user exit does not exist then you can always use a custom template instead. Now let's show an example. By default, the product generates a config.xml file to be used with Oracle WebLogic. This configuration file contains the basic settings needed to manage the product. If you want to take advantage of the Oracle WebLogic advanced settings, you can use the console to make those changes and they will be reflected in the config.xml automatically. To retain those changes across invocations of initialSetup, you need to alter the template that generates the config.xml or use user exits. The technique is this: make the change in the console and, when you save the change, WebLogic will reflect it in the config.xml for you. Compare the old and new versions of the config.xml to determine what to add, then find the user exit to put it in by examining the base template. For example, by default, the console is not automatically deployed (it is deployed on demand) in the base config.xml. To make the console deploy, you can add the following line to the templates/CM_config.xml.win.exit_3.include file (for Windows) or templates/CM_config.xml.exit_3.include file (for Linux/UNIX): <internal-apps-deploy-on-demand-enabled>false</internal-apps-deploy-on-demand-enabled> Now run initialSetup to reflect the change and, if you check the splapp/config/config.xml file, you will see the change applied for you.
    Now how did I know which include file? I checked the template for config.xml and found there was a user exit at the right place. I prefixed my include filename with "CM_" to denote it as a custom user exit. This will tell the upgrade tools to leave that file alone whenever you decide to upgrade (or even apply fixes). User exits can be powerful and allow customizations to be added for advanced configuration. You will see products using the Oracle Utilities Application Framework use these exits themselves (usually prefixed with the product code), so you are in good company when you take advantage of them.
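    Since the post suggests finding available exits by scanning the templates for the #ouaf_user_exit marker, here is a small Python sketch that automates that lookup. It is only an illustration: the templates directory path is whatever your installation uses (pass it as an argument), and the script only reports what it finds, it changes nothing.

```python
import os
import re
import sys

# Walk a templates directory and list every "#ouaf_user_exit <name>" marker,
# so you can see which exits each template offers without opening them one by one.
MARKER = re.compile(r"#ouaf_user_exit\s+(\S+)")

def list_user_exits(templates_dir):
    exits = {}
    for root, _dirs, files in os.walk(templates_dir):
        for name in files:
            path = os.path.join(root, name)
            try:
                with open(path, encoding="utf-8", errors="ignore") as fh:
                    for line in fh:
                        match = MARKER.search(line)
                        if match:
                            exits.setdefault(name, []).append(match.group(1))
            except OSError:
                continue
    return exits

if __name__ == "__main__":
    for template, names in sorted(list_user_exits(sys.argv[1]).items()):
        print(template)
        for exit_name in names:
            print("   ", exit_name)
```

    From the listing you can then pick the exit at the right place in the template and create the matching CM_-prefixed include file, exactly as described above.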

    Read the article

< Previous Page | 543 544 545 546 547 548 549 550 551 552 553 554  | Next Page >