Search Results

Search found 7263 results on 291 pages for 'job boards'.


  • View the Real Links Behind Shortened URLs in Chrome

    - by Asian Angel
    When you encounter shortened URLs there is always that worry in the back of your mind about where they really lead. Now you can get a "sneak peek" at the real links behind those URLs with the View Thru extension for Google Chrome. The URL shortening services officially supported at this time are: bit.ly, cli.gs, ff.im, goo.gl, is.gd, nyti.ms, ow.ly, post.ly, su.pr, and tinyurl.com.

    Before: When you encounter a shortened URL you are pretty much on your own in deciding whether to trust that link or not. It would really be nice if you could just hover your mouse over those links and know where they lead ahead of time.

    After: Once you have the extension installed you are ready to access that link-viewing goodness. Please note that you will need to reload any pages that were open prior to installing the extension. For our first example we chose a shortened URL from bit.ly. The entire link behind the shortened URL is displayed very nicely… no hidden surprises there! Note: there are no options to worry about for this extension. We got another perfect result for a goo.gl URL. View Thru will certainly remove a lot of the stress related to clicking on shortened URLs.

    Bonus find: Just out of curiosity, we looked for a shortened URL from a service not listed as officially supported at this time. We found one on the http://nyti.ms/ domain and View Thru showed the link perfectly… so be sure to give it a try on other services too.

    Conclusion: If you worry about where a shortened URL will really lead you, then the View Thru extension can help alleviate that stress.

    Links: Download the View Thru extension (Google Chrome Extensions)

    Read the article

  • Collision detection when pathfinding with pathnodes, UDK

    - by Dave Voyles
    I'm trying to create a class that allows my AIController to pathfind using pathnodes (NOT NavMeshes). It does a swell job of going from point to point in a set order (although I would like it to be a random patrol at some point), but it gets caught up on collision from time to time. I.e., he'll walk the same set path, and when he runs into the blocks in the middle of the map he continues to rub against them until he clears them, then continues on his merry way to the next pathnode. How can I prevent this from happening, or at least have him move away from the wall if a trace detects that it is there? It looks like I need to use MoveToward() instead of MoveTo(), as MoveToward allows the pawn to adjust its course during movement. I'm just not sure how to use those parameters. Mougli has a decent tutorial on it, but I can't seem to get it to work correctly with my pathnode array.

        class PathfindingAIController extends UDKBot;

        var array<PathNode> Waypoints;
        var int _PathNode; // declared at the top so it can be used throughout the script
        var int CloseEnough;

        simulated function PostBeginPlay()
        {
            local PathNode Current;

            super.PostBeginPlay();

            // Add the pathnodes to the array
            foreach WorldInfo.AllActors(class'Pathnode', Current)
            {
                Waypoints.AddItem(Current);
            }
        }

        simulated function Tick(float DeltaTime)
        {
            local int Distance;
            local Rotator DesiredRotation;

            super.Tick(DeltaTime);

            if (Pawn != None)
            {
                // Smoothly rotate the pawn towards the focal point
                DesiredRotation = Rotator(GetFocalPoint() - Pawn.Location);
                Pawn.FaceRotation(RLerp(Pawn.Rotation, DesiredRotation, 3.125f * DeltaTime, true), DeltaTime);
            }

            Distance = VSize2D(Pawn.Location - Waypoints[_PathNode].Location);
            if (Distance <= CloseEnough)
            {
                _PathNode++;
            }
            if (_PathNode >= Waypoints.Length)
            {
                _PathNode = 0;
            }

            GoToState('Pathfinding');
        }

        auto state Pathfinding
        {
        Begin:
            if (Waypoints[_PathNode] != None) // make sure there is a pathnode to move to
            {
                MoveTo(Waypoints[_PathNode].Location); // move to it
                `log("STATE: Pathfinding");
            }
        }

        DefaultProperties
        {
            CloseEnough=400
            bIsPlayer=True
        }

    Read the article

  • Best practices for logging and tracing in .NET

    - by Levidad
    I've been reading a lot about tracing and logging, trying to find some golden rule for best practices in the matter, but there isn't one. People say that good programmers produce good tracing, but put it that way and it has to come from experience. I've also read similar questions here and across the internet, and they are not really asking the same thing, or do not have a satisfying answer, maybe because the questions lack some detail.

    So, folks say that tracing should more or less replicate the experience of debugging the application in cases where you can't attach a debugger. It should provide enough context so that you can see which path is taken at each control point in the application. Going deeper, you can even distinguish between tracing and event logging, in that "event logging is different from tracing in that it captures major states rather than detailed flow of control".

    Now, say I want to do my tracing and logging using only the standard .NET classes, those in the System.Diagnostics namespace. I figured that the TraceSource class is better for the job than the static Trace class, because I want to differentiate among the trace levels, and using the TraceSource class I can pass in a parameter informing the event type, while using the Trace class I must use Trace.WriteLineIf and then verify things like SourceSwitch.TraceInformation and SourceSwitch.TraceErrors, and it doesn't even have properties like TraceVerbose or TraceStart.

    With all that in mind, would you consider it good practice to do as follows:

    Trace a "Start" event when beginning a method, which should represent a single logical operation or a pipeline, along with a string representation of the parameter values passed in to the method.
    Trace an "Information" event when inserting an item into the database.
    Trace an "Information" event when taking one path or another in an important if/else statement.
    Trace a "Critical" or "Error" event in a catch block, depending on whether this is a recoverable error.
    Trace a "Stop" event when finishing the execution of the method.

    And also, please clarify when it is best to trace Verbose and Warning event types. If you have examples of code with nice trace/logging and are willing to share, that would be excellent.

    Note: I've found some good information here, but still not what I am looking for: http://msdn.microsoft.com/en-us/magazine/ff714589.aspx

    Thanks in advance!
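    To make the comparison concrete, here is a minimal sketch of the TraceSource approach described above. The source name "MyApp.OrderProcessing" and the switch level are illustrative assumptions, not values taken from the question:

        using System;
        using System.Diagnostics;

        public static class OrderProcessor
        {
            // Hypothetical source name; its switch and listeners would be configured in app.config.
            private static readonly TraceSource trace =
                new TraceSource("MyApp.OrderProcessing", SourceLevels.All);

            public static void ProcessOrder(int orderId)
            {
                // "Start" event marking the beginning of a logical operation,
                // including a string representation of the parameters.
                trace.TraceEvent(TraceEventType.Start, 0, "ProcessOrder(orderId={0})", orderId);
                try
                {
                    // "Information" event for a significant step, e.g. a database insert.
                    trace.TraceEvent(TraceEventType.Information, 0, "Inserting order {0} into the database", orderId);

                    // "Stop" event marking the end of the operation.
                    trace.TraceEvent(TraceEventType.Stop, 0, "ProcessOrder(orderId={0}) finished", orderId);
                }
                catch (Exception ex)
                {
                    // "Error" for recoverable failures, "Critical" for unrecoverable ones.
                    trace.TraceEvent(TraceEventType.Error, 0, "ProcessOrder failed: {0}", ex);
                    throw;
                }
            }
        }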

    Read the article

  • SQL Server PowerShell Provider follows the Version of PowerShell on the Host and other errata

    - by BuckWoody
    There may be some misunderstanding about how the PowerShell Provider for SQL Server works. I've written an article or two explaining that you can use PowerShell with SQL Server without having the SQL Server 2008 (or higher) provider around. After all, PowerShell just uses .NET, and SQL Server "Server Management Objects" (SMO) listen to that interface as well. In SQL Server 2008 and higher we created a "MiniShell" for PowerShell that gives you the ability to treat a SQL Server instance as a drive (called a "Provider", or path, or drive) and a few commands (called cmdlets). Using these two simple constructs you can move around SQL Server quickly and work with the objects it holds.

    I read the other day that someone stated we had "re-compiled" PowerShell, so that you would have version 1.0 from SQL Server and 2.0 on your new server. Not so! Drop to a SQLPS prompt and a PowerShell prompt and type this in each:

        $PSVersionTable

    They should return the same value. You can think of a MiniShell as simply a compiled "profile" that gives you those providers and cmdlets automatically – that's all. In fact, you can load the SMO libraries yourself without the SQL Server 2008 Provider anywhere in sight. I do this all the time, since the MiniShell also has other restrictions.

    Also remember that if you run a PowerShell script as a SQL Agent Job step type (in 2008 and higher), you're running under the context of the account that starts Agent – I think most folks know this, but it's good to keep in mind.

    There's a re-written section of Books Online that covers working with this very nicely – it also answers the questions "How do I connect to another server using the SQL Server PowerShell Provider?" (hint: it's just CD) and "How do I load all the SMO stuff if I don't want to use the Provider?" and more. Be sure to check out the note at the bottom that explains the firewall exceptions you'll need to enable to CD to that remote server. Here's that link: http://msdn.microsoft.com/en-us/library/cc281947.aspx
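    The point that SMO is just a .NET library is easy to demonstrate outside PowerShell entirely. Here is a small hedged C# sketch that connects through SMO with no SQLPS or provider in sight; the instance name "localhost" is a placeholder, and it assumes references to the Microsoft.SqlServer.Smo and Microsoft.SqlServer.ConnectionInfo assemblies that ship with SQL Server:

        using System;
        using Microsoft.SqlServer.Management.Smo;

        class SmoDirect
        {
            static void Main()
            {
                // Connect to a placeholder instance directly through SMO.
                Server server = new Server("localhost");

                // Walk the same objects the provider exposes as a "drive".
                foreach (Database db in server.Databases)
                {
                    Console.WriteLine(db.Name);
                }
            }
        }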

    Read the article

  • A Slice of Raspberry Pi

    - by Phil Factor
    Guest editorial for the ITPro/SysAdmin newsletter.

    The Raspberry Pi Foundation has done a superb design job on their new $35 network-enabled Linux computer. This tiny machine, incorporating an ARM processor on a Broadcom BCM2835 multimedia chip, aims to put the fun back into learning computing. The public response has been overwhelmingly positive.

    Note that aim: "…to put the fun back". Education in Information Technology is in dire straits. It always has been, but it seems to have deteriorated further still, even in the face of improved provision of equipment. In many countries, the government controls the curriculum. It predicted a shortage in office-based IT skills, and so geared the ICT curriculum toward mind-numbing training in word-processing and spreadsheet skills. Instead, the shortage has turned out to be in people with an engineering mindset, who can solve problems with whatever technologies are available and learn new techniques quickly, in a rapidly changing field. In retrospect, the assumption that specific training was required rather than an education was an idiotic response to the arrival of mainstream information technology. As a result, ICT became a disaster area, which discouraged a generation of youngsters from a career in IT, and thereby led directly to the shortage of people with the skills that are required to exploit the potential of Information Technology.

    Raspberry Pi aims to reverse the trend. This is a rig that is geared to fast graphics in high resolution. It is no toy. It should be a superb games machine. However, the use of Fedora, Debian, or Arch Linux ARM shows the more serious educational intent behind the Foundation's work. It looks like it will even do some office work too!

    So, get hold of any power supply that provides a 5V DC source at the required 700mA; an old BlackBerry charger will do or, alternatively, it will run off four AA cells. You'll need a USB hub to support the mouse and keyboard, and maybe a hard drive. You'll want a DVI monitor (with audio out) or a TV (sound and video). You'll also need to be able to cope with wired Ethernet 10/100 if you want networking. With this lot assembled, stick the paraphernalia on the back of the HDTV with Blu-Tack, get a nice keyboard, and you have a classy Linux-based home computer. The major cost is in the TV and the keyboard. If you're not already writing software for this platform then maybe, at a time when some countries are talking of orders in the millions, you should consider it.

    Read the article

  • As a web designer, which language should I learn first for my future career? (PHP or JavaScript) [closed]

    - by kdevs3
    Possible Duplicates:
    Best Programming Language for Web Development
    How can I choose a web development language?
    What language will you choose if you are going to build something big?
    What is the right option of programming languages and tools for building our website?
    What is the easiest web programming language at....?

    Well, I'm more of a basic web designer. I know the easy stuff pretty well (ya know, HTML, CSS), but I've been trying to take it to the next step, and I'm contemplating what I should learn that will help me out the most in my future web design/programming career: should it be JavaScript, or should I try to learn a back-end programming language such as PHP?

    Lately I have been hearing a lot about how great and useful JavaScript is now, because of libraries such as jQuery and the possibilities it opens up with Node.js and other frameworks. I've only learned the most basic JavaScript and used some jQuery (mostly plugins), so I wouldn't know at all what it can actually do. Would JS being as popular and useful as it is now be a reason to stick with JavaScript and only learn it for now? Or, as a web designer, how important would it be to learn how to make a web application/website operate and be functional, and know how to work with servers, etc.? (Such as getting forms to work and sending data to the server and back.)

    I've taken a look at frameworks such as CodeIgniter before, and it looks really simple to get started with if I try to learn PHP, but I'm not sure how important it is for my career and what I would gain out of it. I'm asking because I can't decide what I should learn first. When I select it, I really want to take my time and learn the language. I don't want to spend time learning multiple languages at the same time, so I need to pick wisely. I'm trying to turn in the right direction so my career can hopefully be successful in the future. (If you're asking whether money/gaining a job is important, then yes, it is a bit.)

    I'm hoping I can get opinions and suggestions on this question; thanks for giving me your thoughts also.

    Read the article

  • Closing the Gap: 2012 IOUG Enterprise Data Security Survey

    - by Troy Kitch
    The new survey from the Independent Oracle Users Group (IOUG), titled "Closing the Security Gap: 2012 IOUG Enterprise Data Security Survey," uncovers some interesting trends in IT security among IOUG members and offers recommendations for securing data stored in enterprise databases. "Despite growing threats and enterprise data security risks, organizations that implement appropriate detective, preventive, and administrative safeguards are seeing significant results," finds the report's author, Joseph McKendrick, analyst, Unisphere Research. Produced by Unisphere Research and underwritten by Oracle, the report is based on responses from 350 IOUG members representing a variety of job roles, organization sizes, and industry verticals.

    Key findings include:

    Corporate budgets increase, but still trail. Though corporate data security budgets are increasing this year, they still have room to grow to reach the previous year's spending. Additionally, more than half of respondents say their organizations still do not have, or are unaware of, data security plans to help address contingencies as they arise.

    Danger of unauthorized access. Less than a third of respondents encrypt data that is either stored or in motion, and at the same time, more than three-fifths say they send actual copies of enterprise production data to other sites inside and outside the enterprise.

    Privileged user misuse. Only about a third of respondents say they are able to prevent privileged users from abusing data, and most do not have, or are not aware of, ways to prevent access to sensitive data using spreadsheets or other ad hoc tools.

    Lack of consistent auditing. A majority of respondents actively collect native database audits, but there has not been an appreciable increase in the implementation of automated tools for comprehensive auditing and reporting across databases in the enterprise.

    IOUG Recommendations

    The report's author finds that securing data requires not just the ability to monitor and detect suspicious activity, but also to prevent the activity in the first place. To achieve this comprehensive approach, the report recommends the following.

    Apply an enterprise-wide security strategy. Database security requires multiple layers of defense that include a combination of preventive, detective, and administrative data security controls.

    Get business buy-in and support. Data security only works if it is backed by executive support. The business needs to help determine what protection levels should be attached to data stored in enterprise databases.

    Provide training and education. Often, business users are not familiar with the risks associated with data security. Beyond IT solutions, what is needed is a well-engaged and knowledgeable organization to help make security a reality.

    Read the IOUG Data Security Survey now.

    Read the article

  • Tellago releases a RESTful API for BizTalk Server business rules

    - by Charles Young
    Jesus Rodriguez has blogged recently on Tellago DevLabs' release of an open source RESTful API for BizTalk Server Business Rules. This is an excellent addition to the BizTalk ecosystem and I congratulate Tellago on their work. See http://weblogs.asp.net/gsusx/archive/2011/02/08/tellago-devlabs-a-restful-api-for-biztalk-server-business-rules.aspx

    The Microsoft BRE was originally designed to be used as an embedded library in .NET applications. This is reflected in the implementation of the Rules Engine Update (REU) Service, which is a TCP/IP service hosted by a Windows service running locally on each BizTalk box. The job of the REU is to distribute rules, managed and held in a central database repository, across the various servers in a BizTalk group. The engine is therefore distributed on each box, rather than exploited behind a central rules service.

    This model is all very well, but proves quite restrictive in enterprise environments. The problem is that the BRE can only run legally on licensed BizTalk boxes. Increasingly, we need to deliver rules capabilities across a more widely distributed environment. For example, in the project I am working on currently, we need to surface decisioning capabilities for use within WF workflow services running under AppFabric on non-BTS boxes. The BRE does not, currently, offer any centralised rule service facilities out of the box, and hence you have to roll your own (and then run your rules services on BTS boxes, which has raised a few eyebrows on my current project, as all other WCF services run on a dedicated server farm).

    Tellago's API addresses this by providing a RESTful API for querying the rules repository and executing rule sets against XML passed in the request payload. As Jesus points out in his post, using a RESTful approach hugely increases the reach of BRE-based decisioning, allowing simple invocation from code written in dynamic languages, mobile devices, etc.

    We developed our own SOAP-based general-purpose rules service to handle scenarios such as the one we face on my current project. SOAP is arguably better suited to enterprise service bus environments (please don't 'flame' me – I refuse to engage in the RESTful vs. SOAP war). For example, on my current project we use claims-based authorisation across the entire service bus and use WIF and WS-Federation for this purpose. We have extended this to the rules service. I can't release the code for commercial reasons :-( but this approach allows us to legally extend the reach of BRE far beyond the confines of the BizTalk boxes on which it runs and to provide general-purpose decisioning capabilities on the bus.

    So, well done Tellago. I haven't had a chance to play with the API yet, but am looking forward to doing so.

    Read the article

  • What changes were made to a document

    - by Daniel Moth
    Part of my job is writing functional specs. Due to the inevitable iterative and incremental nature of software design/development, these specs need to be updated with additions/deletions/changes over a period of time. When the time comes for a developer to implement features or update their design document (or for a tester to test the feature or update their test specs), they need to be doing that against the latest spec. The problem is that if they have already reviewed this document, they need a quick way to find the delta since the last time they reviewed it, to see what changes exist and how their existing plans may be affected (instead of having to read the entire document again). Doing that is very easy, assuming your Word documents are hosted on SharePoint.

    1. Every time you review a document, note the SharePoint version and/or date (if it is a printed copy, make sure your printout includes the date in the footer – all my specs do).

    2. When you need to see what changed, open the document (make sure you are not using a cached or local offline copy), go to the "Review" tab on the ribbon, and click on the "Compare" button.

    3. Click on the "Specific Version…" option. In the dialog that pops up, pick the last version you reviewed and click the "Compare" button. [TIP for authors: before check-in of your document, always compare against the "Last Version" on the SharePoint site so you can add appropriate, more complete check-in comments.]

    4. What you see now is that, in addition to the document you have open, two other documents just opened up. One is in the background (flashing on your task bar) – close that one, as it is the old version.

    5. The other document is in the foreground and contains all the changes between the old version and the latest one. Be sure not to make edits to this document; use it only for reading the changes. To find all the changes, on the ribbon under the "Review" tab, click on the "Reviewing Pane" to open the reviewing pane on the left. You can now click on each pink change to see what it is.

    6. When you are done reviewing changes, close the document and don't save any changes (remember, if you want to make edits/additions/comments, make them in the original document, which is still open).

    And now I have a URL to point to for the people who keep asking about this – enjoy :-)

    Comments about this post welcome at the original blog.

    Read the article

  • Leading an offshore team

    - by Chuck Conway
    I'm in a position where I am leading two teams of 4. Both teams are located in India. I am on the west coast of the U.S. I'm finding leading remote teams challenging. First, their command of the English language is weak. Second, I'm having difficulty understanding them through their accents. Third is timing: we are 12 hours apart. We use Skype to communicate. I have a month to get the project done. We've burned through a week just setting up the environments. At this point I'm considering working their hours, 11p PDT to 7a PDT, to get them up to speed, so that I can get the project off the ground. A 12-hour lag time is too much. I'm looking for steps I can take to be successful at leading an offshore team.

    Update: The offshore team's primary task is coding; of course, most coding tasks do involve some design work. The offshore teams are composed of one lead, two mid-level (4 to 5 years) developers, and a junior (~2 years) developer. The project is classic waterfall. We've handed the offshore team a business and a technical design document. We are trying to manage the offshore team in an agile way. We have daily conference calls with them, and I'm requiring the teams to send me a daily scrum in the form of an email answering the following questions:

    What did I do today?
    What am I going to do tomorrow?
    What do I need from Chuck so I can do my job tomorrow?

    There is some ambiguity in the tasks. The intent was to give them enough direction to develop the task without writing the code for them. I don't have a travel budget. I am using FogBugz to track the tasks. Each task has been entered into FogBugz and given a priority. Each team member has access to FogBugz and can choose which task they wish to complete. Related question: What can we do to improve the way outsourcing/offshoring works?

    Update 2: I've decided that I cannot talk to the team just once a day; I must work with them. Starting tonight, I am working the same hours they are. This makes me available to them when they have questions. It also allows me to gain their trust and respect. Stack Overflow question: Leading an offshore team

    Read the article

  • Hello From South Florida

    - by Sam Abraham
    Fellow blog readers: I figured I'd use my first blog post on GeeksWithBlogs to introduce myself. I recently relocated from Long Island, NY to South Florida, where I joined a local company as a Software Engineer specializing in technologies such as C#, ASP.NET 3.5, WCF, Silverlight, SQL Server 2008, and LINQ, to name a few. I am an MCP and MCTS in ASP.NET 3.5, looking to get my .NET 4.0 certification soon.

    Having been in the industry for a few years so far, I figured I would share with you my take on the importance of being involved in (or at least attending) local user groups. I am a firm believer that besides using a certain technology, the best way to expand one's knowledge is by sharing it with others and being equally open to learning from others just as much as you are willing to share what you know. In my opinion, an important factor that makes a good developer stand out is his/her ability to keep abreast of the latest and greatest, even in areas outside his/her direct expertise. Additionally, having spoken to various recruiters, technical user group attendees are always favorably looked upon as genuinely interested in their field and willing to take the initiative to expand their knowledge, which offers job candidates good leverage when competing for jobs.

    I believe I am very blessed to be in an area with a very strong and vibrant developer community. I found in the local .NET community leadership a genuine interest in constantly extending the opportunity to all developers to get more involved, and in encouraging those who are willing to take that initiative to achieve their goal: speak at meetings, volunteer at events, or write and publish articles/blogs about the latest and greatest technologies. With Vishal Shukla (site director for the West Palm Beach .NET User Group) traveling overseas, I have been extended the opportunity to come on board as site coordinator for FladotNet's WPB .NET User Group, along with Venkata Subramanian, an opportunity which I gratefully accepted. Being involved in running a .NET user group will surely help me personally and professionally, but my real hope is to use this opportunity to assist in delivering the ultimate common goal: spread the word about new .NET technologies, help everybody get more involved, and simply have fun learning new things.

    With my introduction out of the way, in the next few days I will be posting some notes on an upcoming talk I will be giving about MVC2 and VS2010 in mid-April.

    Environment.Exit(0);
    --Sam

    Read the article

  • Review: Backbone.js Testing

    - by george_v_reilly
    Title: Backbone.js Testing
    Author: Ryan Roemer
    Rating: 4.5 out of 5
    Publisher: Packt
    Copyright: 2013
    ISBN: 178216524X
    Pages: 168
    Keywords: programming, testing, javascript, backbone, mocha, chai, sinon
    Reading period: October 2013

    Backbone.js Testing is a short, dense introduction to testing JavaScript applications with three testing libraries: Mocha, Chai, and Sinon.JS. Although the author uses a sample application of a personal note manager written with Backbone.js throughout the book, much of the material would apply to any JavaScript client or server framework. Mocha is a test framework that can be executed in the browser or by Node.js, which runs your tests. Chai is a framework-agnostic TDD/BDD assertion library. Sinon.JS provides standalone test spies, stubs, and mocks for JavaScript. They complement each other, and the author does a good job of explaining when and how to use each.

    I've written a lot of tests in Python (unittest and mock, primarily) and C# (NUnit), but my experience with JavaScript unit testing was both limited and years out of date. The JavaScript ecosystem continues to evolve rapidly, with new browser frameworks and Node packages springing up everywhere. JavaScript has some particular challenges in testing—notably, asynchrony and callbacks. Mocha, Chai, and Sinon meet those challenges, though they can't take away all the pain. The author describes how to test Backbone models, views, and collections; how to deal with asynchrony; useful testing heuristics, including isolating components to reduce dependencies; when to use stubs, mocks, and fake servers; and test automation with PhantomJS. He does not, however, teach you Backbone.js itself; for that, you'll need another book.

    There are a few areas which I thought were dealt with too lightly. There's no real discussion of test-driven development or behavior-driven development, which provide the intellectual foundations of much of the book. Nor does he have much to say about testability and how to make legacy code more testable. The sample Notes app has plenty of testing seams (much of this falls naturally out of the architecture of Backbone); other apps are not so lucky. The chapter on automation is extremely terse—it could be expanded into a very large book!—but it does provide useful indicators to many areas for exploration.

    I learned a lot from this book and I have no hesitation in recommending it.

    Disclosure: Thanks to Ryan Roemer and Packt for a review copy of this book.

    Read the article

  • Why a graduate program in South Africa?

    - by anca.rosu
    South Africa, like many other countries, is desperate for skills. Good, solid technical skills – together with a get-up-and-go attitude – and the desire to work for a world-class organization that is leading the way! In addition, we have made a commitment in South Africa that we need to transform our organization and develop and empower Black individuals who historically have not had the opportunity to participate in the global economy. It is through this investment in our country's people that we contribute to the development of a nation capable of competing on the global stage. This makes for an exciting recipe! We have:

    Plenty of young and talented individuals who are eager to get stuck in and learn.
    Formal, recognized qualifications that form the basis for further development.
    A huge global organization – Oracle – that is committed to developing these graduates and giving them an opportunity that is out of this world!

    Mix the above 'ingredients' together, tackle and remove potential "lumps & bumps" along the way as we learn and grow together, and nurture and care for each other in a warm but tough environment.

    What have we achieved? In most cases, the outcome is an awesome bunch of new talent that is well equipped to face the IT world. Where we have the opportunity and suitable headcount available to employ these graduates at Oracle, we snap them up – alternatively, our business partners and customers are always eager to recruit Oracle graduates into their organizations! These individuals go through real-life workplace experience whilst at Oracle. In some cases they get to travel internationally. The excitement and buzz gets into their system and their blood becomes truly RED! Oracle RED! This is valuable talent and expertise to have in our eco-system, and it's an exciting program to be a part of, not only as a graduate but as an Oracle employee too!

    If you have any questions related to this article, feel free to contact [email protected]. You can find our job opportunities via http://campus.oracle.com.

    Read the article

  • Preview Links and Images in Google Chrome

    - by Asian Angel
    Anyone who has used the CoolPreviews extension in Firefox knows how wonderful that preview window can be. Now you can get the same kind of functionality in Chrome with the ezLinkPreview extension. Note: the extension will not work on websites containing "frame buster" code (navigation to the actual URL will occur instead).

    Before: Normally, if you want to have a better look at a particular webpage, the only option you have is to go ahead and open it in a new tab or window. But it would certainly be nice to be able to take a quick "sneak peek" beforehand…

    After: As soon as you have finished installing the extension everything is ready to go… just refresh any pages open prior to installation and enjoy the preview goodness. When you hover your mouse near any link you will notice a small "Preview Button" appear with the letters "EZ" inside. Click on the "Preview Button" to open the popup window; now you can get a very good idea of whether the page is worth visiting or not. Notice that you can see the URL for the webpage and access a convenient set of buttons on the right side (Open in new tab, Pin to keep the overlay open, and Close). You can even resize the window as desired to best suit your needs (you can grab any of the four corners to resize the popup window). It is also possible to open a "preview window" inside the popup window. If you have Chrome maximized, you can enjoy using a large "preview window". Now that is nice! For those who may be curious, ezLinkPreview works nicely with images too.

    Conclusion: The ezLinkPreview extension provides a quick and simple way to preview links and/or images while you are browsing. If you are looking for similar functionality in Firefox, be sure to read our article on CoolPreviews.

    Links: Download the ezLinkPreview extension (Google Chrome Extensions)

    Read the article

  • links for 2011-02-08

    - by Bob Rhubart
    When It Comes to Data Integration, Oracle Is the Right Choice (tags: ping.fm)

    Webcast: Deploy Oracle VM Templates for Oracle E-Business Suite and Oracle PeopleSoft Enterprise Applications. Feb 15. Event Date: 02/15/2011 9:00am PT / Noon ET. Featured Speakers: Adam Hawley (Oracle Senior Director, Product Management, Virtualization), Ivo Dujmovic (Oracle Director, Technology Integration), Greg Kelly (Oracle Product Strategy Manager - PeopleTools). (tags: oracle virtualization peoplesoft)

    Webcast: Managing Oracle Exadata with Oracle Enterprise Manager 11g. Thursday, February 10, 2011 - 10 a.m. PT/1 p.m. ET. Ask Oracle experts questions and learn firsthand how to efficiently manage all stages of Oracle Exadata's lifecycle, from testing to deployment. (tags: oracle exalogic enterprisemanager)

    Arthur Cole: Winning the Consolidated Data Center Future | ITBusinessEdge.com. "According to InformationWeek, the amount of data under management is increasing by about 20 percent per year, with some organizations having to deal with 50 percent or more." - Arthur Cole (tags: dataconsolidation enterprisearchitecture)

    Transformation of Product Management in Telecommunications for Rapid Launch of Next Generation Products (Telecommunications Architecture Corner). Raul Goycoolea's post examines "how enterprise product management enabled by PLM-based product catalogue solutions helps to launch next generation products rapidly in the context of the Telecommunication Industry." (tags: oracle otn enterprisearchitecture)

    Richard Veryard on Architecture: What is an EA vendor? "Even some people who insist that enterprise architecture shouldn't be thought of as merely software architecture seem to think that 'tools' only means 'software tools.'" - Richard Veryard (tags: enterprisearchitecture)

    MDM for Tax Authorities (Oracle Master Data Management). "Tax Authorities face a multitude of IT challenges," says David Butler. "Compounding these issues is the fact that the IT architectures in operation at most revenue and collections agencies are very complex." (tags: oracle otn MDM ITarchitecture)

    Bernard Golden: How Cloud Computing Changes IT Staffs | CIO.com. "Enterprise architects become more important" tops Bernard's list of changes. (tags: cloudcomputing staffing cio enterprisearchitecture)

    Martijn Linssen: Social Enterprise Magic Quadrant. "Revolutions usually go wrong, where evolutions usually go right." - Martijn Linssen (tags: socialcomputing enterprise2.0)

    Why Do IT Roles Fail? | CIO. "The roles that come up most often are the ones that are not directly building or maintaining systems. These include architecture, planning, vendor management, relationship management, PMO, and security." - Marc Cecere (tags: softwarearchitecture technologyroles)

    We're Hiring! - Server and Desktop Virtualization Product Management (Oracle's Virtualization Blog). Adam Hawley with information on an opportunity for qualified job seekers. (tags: oracle otn employment virtualization)

    Read the article

  • Merge Join component sorted outputs [SSIS]

    - by jamiet
    One question that I have been asked a few times of late in regard to performance-tuning SSIS data flows is this: why isn't the Merge Join output sorted (i.e. IsSorted=True)? This is a fair question. After all, both of the Merge Join inputs are sorted, so why wouldn't the output be sorted as well? Well, here's a little secret: the Merge Join output IS sorted! There's a caveat though – it is only under certain circumstances, and SSIS itself doesn't do a good job of informing you of it.

    Let's take a look at an example. Here we have a dataflow that consumes data from the [AdventureWorks2008].[Sales].[SalesOrderHeader] & [AdventureWorks2008].[Sales].[SalesOrderDetail] tables, then joins them using a Merge Join component. Inside the editor of the Merge Join we are joining on the [SalesOrderId] field (which is what the two inputs just happen to be sorted upon). We are also putting [SalesOrderHeader].[SalesOrderId] into the output. Believe it or not, the output from this Merge Join component is sorted (i.e. has IsSorted=True), but unfortunately the Merge Join component does not have an Advanced Editor, hence this is hidden away from us. There are a couple of ways to prove that is the case; I could open up the package XML inside the .dtsx file and show you the metadata, but there is an easier way than that – I can attach a Sort component to the output, attempting to sort on the [SalesOrderId] column. This gives us the following warning:

    Validation warning. DFT Get raw data: {992B7C9A-35AD-47B9-A0B0-637F7DDF93EB}: The data is already sorted as specified so the transform can be removed.

    The warning proves that the output from the Merge Join is sorted! It must be noted that the Merge Join output will only have IsSorted=True if at least one of the join columns is included in the output. So there you go, the Merge Join component can indeed produce a sorted output, and that's very useful in order to avoid unnecessary, expensive Sort operations downstream. Hope this is useful to someone out there!

    @Jamiet

    P.S. Thank you to Bob Bojanic on the SSIS product team who pointed this out to me!

    Read the article

  • CRM vs VRM

    - by David Dorf
    In a previous post, I discussed the potential power of combining social, interest, and location graphs in order to personalize marketing and shopping experiences for consumers. Marketing companies have been trying to collect detailed information for that very purpose, a large majority of which comes from tracking people on the internet. But their approaches stem from the one-way nature of traditional advertising. With TV, radio, and magazines there is no opportunity to truly connect to customers, which has trained marketing companies to [covertly] collect data and segment customers into easily identifiable groups. To a large extent, we think of this as CRM.

    But what if we turned this viewpoint upside-down to accommodate the two-way nature of social media? The notion of marketing as conversations was the basis for the Cluetrain, an early attempt at drawing attention to the fact that customers are actually unique humans. A more practical implementation is Project VRM, which is a reverse CRM of sorts: instead of vendors managing their relationships with customers, customers manage their relationships with vendors.

    Your shopping experience is not really controlled by you; rather, it's controlled by the retailer and advertisers. And unfortunately, they typically don't give you a say in the matter. Yes, they might tailor the content for "female, age 25-35, interested in shoes," but that's not really the essence of you, is it? A better approach is to let consumers volunteer information about themselves. And why wouldn't they, if it means a better, more relevant shopping experience? I'd gladly list out my likes and dislikes in exchange for getting rid of all those annoying cookies on my hard drive.

    I really like the diagram from Beyond SocialCRM, as it captures the differences between CRM and VRM. The closest thing to VRM I can find is Buyosphere, a start-up that allows consumers to track their shopping history across many vendors, then share it appropriately. Also, Amazon does a pretty good job of allowing its customers to edit their profile, which includes everything you've ever purchased from Amazon. You can mark items as gifts, or explicitly exclude them from its recommendation engine. This is a win-win for both the consumer and the retailer.

    So here is my plea to retailers: instead of trying to infer my interests from snapshots of my day, please just ask me. We'll both have a better experience in the long run.

    Read the article

  • Composite-like pattern and SRP violation

    - by jimmy_keen
    Recently I've noticed myself implementing a pattern similar to the one described below, starting with this interface:

        public interface IUserProvider
        {
            User GetUser(UserData data);
        }

    The GetUser method's pure job is to somehow return a user (that would be an operation, speaking in composite terms). There might be many implementations of IUserProvider, which all do the same thing – return a user based on the input data. It doesn't really matter, as they are only leaves in composite terms, and that's fairly simple.

    Now, my leaves are used by one "owns them all" composite class, which at the moment follows this implementation:

        public interface IUserProviderComposite : IUserProvider
        {
            void RegisterProvider(Predicate<UserData> predicate, IUserProvider provider);
        }

        public class UserProviderComposite : IUserProviderComposite
        {
            public User GetUser(UserData data) ...
            public void RegisterProvider(Predicate<UserData> predicate, IUserProvider provider) ...
        }

    The idea behind UserProviderComposite is simple. You register providers, and this class acts as a reusable entry point. When calling GetUser, it will use whichever registered provider matches the predicate for the requested user data (if that helps, it stores a key-value map of predicates and providers internally).

    Now, what confuses me is whether the RegisterProvider method (which brings to mind the composite's add operation) should be a part of that class. It kind of expands its responsibilities from providing a user to also managing the providers collection. As far as my understanding goes, this violates the Single Responsibility Principle... or am I wrong here?

    I thought about extracting the register part into a separate entity and injecting it into the composite (see the sketch below). As much as that looks decent on paper (in terms of SRP), it feels a bit awkward because:

    I would essentially be injecting a Dictionary (or other key-value map)...
    ...or a silly wrapper around it, doing nothing more than adding entries.
    This won't follow composite anymore (as add won't be part of the composite).

    What exactly is the presented pattern called? Composite felt natural to compare it with, but I realize it's not exactly that; however, nothing else rings any bells. Which approach would you take – stick with SRP, or stick with the "composite"/pattern? Or is the design here flawed, and given the problem, can this be done in a better way?
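    For what it's worth, here is one way the extraction idea could look – a hedged sketch only, with hypothetical names: registration moves into a small registry/builder, and the composite receives the finished predicate-to-provider map through its constructor, leaving it with the single responsibility of routing GetUser calls.

        using System;
        using System.Collections.Generic;

        // Hypothetical registry: owns registration, nothing else.
        public class UserProviderRegistry
        {
            private readonly Dictionary<Predicate<UserData>, IUserProvider> map =
                new Dictionary<Predicate<UserData>, IUserProvider>();

            public UserProviderRegistry Register(Predicate<UserData> predicate, IUserProvider provider)
            {
                map.Add(predicate, provider);
                return this; // fluent, so registrations chain
            }

            public IUserProvider BuildComposite()
            {
                return new RoutingUserProvider(map);
            }
        }

        // The composite now only routes calls; it no longer manages registration.
        public class RoutingUserProvider : IUserProvider
        {
            private readonly IDictionary<Predicate<UserData>, IUserProvider> providers;

            public RoutingUserProvider(IDictionary<Predicate<UserData>, IUserProvider> providers)
            {
                this.providers = providers;
            }

            public User GetUser(UserData data)
            {
                foreach (var pair in providers)
                {
                    if (pair.Key(data))
                        return pair.Value.GetUser(data);
                }
                throw new InvalidOperationException("No provider matched the given user data.");
            }
        }

    The trade-off stays as described above: the registry is little more than a wrapper around the dictionary, but the composite's responsibility shrinks to pure routing.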

    Read the article

  • Internships available in Oracle Netherlands - this summer

    - by jessica.ebbelaar
    I am Jannie Minnema, Director of Business Operations for Oracle in the Benelux. My career at Oracle started at Oracle Headquarters in San Francisco as a Project Manager, building computer-based training products. After spending 3 years in Dubai, my husband and I moved to the USA, as he wanted to study for an MBA there. This move kick-started my career, as I was working in Silicon Valley during a time of great opportunity. After the USA, I fulfilled numerous roles at Oracle, ranging from Project Management to Sales and Marketing. I currently work in the Netherlands, where I am now Director of Business Operations for Oracle in the Benelux and a member of the Dutch Management Team.

    Business Operations advises the Benelux Management Team and focuses on topics such as Corporate Social Responsibility, Customer Satisfaction, Internal Communication, Internal Training, and the effective usage of sales tools and systems. We are currently also working on how best to introduce a "New Way of Working"; the move to our new office building in 2011 aids in creating the right environment for this. Our goal is to continually improve the organisation.

    I enjoy working for Oracle because there is never a dull moment, and I am continuously challenged to improve. The environment that I work in changes constantly. Look at all the recent acquisitions: over 60 in the past 3 years! If you, as an Oracle employee, see something that can be done better, like a new service or tool, then combine it with some enthusiasm, motivate it further, and the (Oracle) world changes!

    Internships: This summer we have a number of internships available, coordinated by the Business Operations team. We very much look forward to welcoming students in our Dutch office. We look at it as an opportunity for both Oracle and the interns to learn from each other. It will definitely result in both parties improving, growing, and achieving results! We offer internships related to Sales, Marketing, and New Technology. You can find the assignments here. During the internship you will experience what it is like to work for an international and dynamic company, where we work and play hard. Our customers are major Dutch companies and our employees are professionals who compare working at Oracle with playing in a soccer World Cup final. We offer several internships at the same time, so you will learn and share your experiences with a group of fellow students.

    If you have any questions related to this article, feel free to contact [email protected]. You can find our job opportunities via http://campus.oracle.com.

    Read the article

  • Using lookahead assertions in regular expressions

    - by Greg Jackson
    I use regular expressions on a daily basis, as my daily work is 90% in Perl (a legacy codebase, but that's a different issue). Despite this, I still find lookahead and lookbehind to be terribly confusing and often unreadable. Right now, if I were to get a code review with a lookahead or lookbehind, I would immediately send it back to see if the problem can be solved by using multiple regular expressions or a different approach. The following are the main reasons I tend not to like them:

    They can be terribly unreadable. Lookahead assertions, for example, start from the beginning of the string no matter where they are placed. That, among other things, can cause some very "interesting" and non-obvious behaviors.

    It used to be the case that many languages didn't support lookahead/lookbehind (or supported them as "experimental features"). This isn't the case quite as much anymore, but there's still always the question as to how well they are supported.

    Quite frankly, they feel like a dirty hack. Regexps often already are, but they can also be quite elegant, and have gained widespread acceptance.

    I've gotten by without any need for them at all... sometimes I think that they're extraneous.

    Now, I'll freely admit that especially the last two reasons aren't really good ones, but I felt that I should enumerate what goes through my mind when I see one. I'm more than willing to change my mind about them, but I feel that they violate some of my core tenets of programming, including:

    Code should be as readable as possible without sacrificing functionality -- this may include doing something in a less efficient but clearer way, as long as the difference is negligible or unimportant to the application as a whole.

    Code should be maintainable -- if another programmer comes along to fix my code, non-obvious behavior can hide bugs or make functional code appear buggy (see readability).

    "The right tool for the right job" -- I'm sure you can come up with contrived examples that could use lookahead, but I've never come across something that really needs them in my real-world development work. Is there anything that they're really the best tool for, as opposed to, say, multiple regexps? (Or, alternatively, are they the best tool for most cases they're used for today?)

    My question is this: is it good practice to use lookahead/lookbehind in regular expressions, or are they simply a hack that has found its way into modern production code? I'd be perfectly happy to be convinced that I'm wrong about this, and simple examples are useful for illustration, but by themselves won't be enough to convince me.
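    To make the comparison concrete, here is a small hedged illustration in C# (the "must contain a digit and an uppercase letter" rule is an invented example): the same check written once with lookahead assertions and once as the multiple-regex alternative described above.

        using System;
        using System.Text.RegularExpressions;

        class LookaheadDemo
        {
            // One pattern with two zero-width lookaheads anchored at the start:
            // each (?=...) scans ahead without consuming any characters.
            static readonly Regex Combined = new Regex(@"^(?=.*\d)(?=.*[A-Z]).+$");

            static bool IsValidCombined(string s) => Combined.IsMatch(s);

            // The same rule as two simple regexes -- arguably easier to read.
            static bool IsValidSeparate(string s) =>
                Regex.IsMatch(s, @"\d") && Regex.IsMatch(s, "[A-Z]");

            static void Main()
            {
                Console.WriteLine(IsValidCombined("Passw0rd")); // True
                Console.WriteLine(IsValidSeparate("password")); // False
            }
        }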

    Read the article

  • How to create Custom ListForm WebPart

    - by DipeshBhanani
    Almost everyone who works extensively on SharePoint (including me!) dislikes using the out-of-the-box list forms (DispForm.aspx, EditForm.aspx, NewForm.aspx) as an interface. These OOB list forms tie developers' hands when it comes to customization; it gives developers a headache just to add one postback event for a dropdown field and populate other fields in NewForm.aspx or EditForm.aspx. And on top of that, clients always ask for such stuff. So here I am going to give you a flying start into the SharePoint customization world. In this blog I will explain how to create a CustomListForm WebPart. In my next blogs I will explain easy deployment of list forms through Features and, last, give guidance on using SharePoint web controls.

    1. First, create a class library project in Visual Studio and inherit the class from the WebPart class.

        public class CustomListForm : WebPart

    2. Declare the public variables and properties which we are going to use throughout the class. You will get to know these once you see them in use.

        #region "Variable Declaration"

        Table spTableCntl;
        FormToolBar formToolBar;
        Literal ltAlertMessage;
        Guid SiteId;
        Guid ListId;
        int ItemId;
        string ListName;

        #endregion

        #region "Properties"

        SPControlMode _ControlMode = SPControlMode.New;
        [Personalizable(PersonalizationScope.Shared),
         WebBrowsable(true),
         WebDisplayName("Control Mode"),
         WebDescription("Set Control Mode"),
         DefaultValue(""),
         Category("Miscellaneous")]
        public SPControlMode ControlMode
        {
            get { return _ControlMode; }
            set { _ControlMode = value; }
        }

        #endregion

    The property "ControlMode" is used to identify the mode of the list form. The property is of type SPControlMode, which is an enum with the values Display, Edit, New and Invalid. When we add this WebPart to DispForm.aspx, EditForm.aspx and NewForm.aspx, we will set the WebPart property "ControlMode" to Display, Edit and New respectively.

    3. Now we need to override the CreateChildControls method and write code to manually add the SharePoint web controls related to each list field, as well as the toolbar controls.

        protected override void CreateChildControls()
        {
            base.CreateChildControls();

            try
            {
                SiteId = SPContext.Current.Site.ID;
                ListId = SPContext.Current.ListId;
                ListName = SPContext.Current.List.Title;

                if (_ControlMode == SPControlMode.Display || _ControlMode == SPControlMode.Edit)
                    ItemId = SPContext.Current.ItemId;

                SPSecurity.RunWithElevatedPrivileges(delegate()
                {
                    using (SPSite site = new SPSite(SiteId))
                    {
                        // Creating a new SPSite with the credentials of the System Account
                        using (SPWeb web = site.OpenWeb())
                        {
                            //<Custom Code for creating form controls>
                        }
                    }
                });
            }
            catch (Exception ex)
            {
                ShowError(ex, "CreateChildControls");
            }
        }

    Here we are assuming that we are developing this WebPart to plug into the list forms; hence we get the List Id and List Name from the current context.
    We can have an Item Id only in the case of Display and Edit mode. We put our code inside "RunWithElevatedPrivileges" to elevate privileges to the System Account. Now let's get down into the main code and expand "//<Custom Code for creating form controls>". Before initiating any SharePoint control, we need to set the context of the SharePoint web controls explicitly so that they are instantiated with the elevated System Account user. The following line does the job:

        //To create SharePoint controls with the new web object and System Account credentials
        SPControl.SetContextWeb(Context, web);

    First, let's add the main table as a container for all the controls.

        //Table to render the webpart
        Table spTableMain = new Table();
        spTableMain.CellPadding = 0;
        spTableMain.CellSpacing = 0;
        spTableMain.Width = new Unit(100, UnitType.Percentage);
        this.Controls.Add(spTableMain);

    Now we need to add the top toolbar, with the Save and Cancel buttons at the top.

        // Add Row and Cell for Top ToolBar
        TableRow spRowTopToolBar = new TableRow();
        spTableMain.Rows.Add(spRowTopToolBar);
        TableCell spCellTopToolBar = new TableCell();
        spRowTopToolBar.Cells.Add(spCellTopToolBar);
        spCellTopToolBar.Width = new Unit(100, UnitType.Percentage);

        ToolBar toolBarTop = (ToolBar)Page.LoadControl("/_controltemplates/ToolBar.ascx");
        toolBarTop.CssClass = "ms-formtoolbar";
        toolBarTop.ID = "toolBarTbltop";
        toolBarTop.RightButtons.SeparatorHtml = "<td class=ms-separator> </td>";

        if (_ControlMode != SPControlMode.Display)
        {
            SaveButton btnSave = new SaveButton();
            btnSave.ControlMode = _ControlMode;
            btnSave.ListId = ListId;

            if (_ControlMode == SPControlMode.New)
                btnSave.RenderContext = SPContext.GetContext(web);
            else
            {
                btnSave.RenderContext = SPContext.GetContext(this.Context, ItemId, ListId, web);
                btnSave.ItemContext = SPContext.GetContext(this.Context, ItemId, ListId, web);
                btnSave.ItemId = ItemId;
            }
            toolBarTop.RightButtons.Controls.Add(btnSave);
        }

        GoBackButton goBackButtonTop = new GoBackButton();
        toolBarTop.RightButtons.Controls.Add(goBackButtonTop);
        goBackButtonTop.ControlMode = SPControlMode.Display;

        spCellTopToolBar.Controls.Add(toolBarTop);

    Here we have used "SaveButton" and "GoBackButton", which are internal SharePoint web controls for the save and cancel functionality. I have set some of the SaveButton properties inside an if-else condition because we will not have an Item Id in New mode; the ItemId property is used to identify which SharePoint list item needs to be saved. Now add the FormToolBar to the page, which contains the "Attach File", "Delete Item", etc. buttons.
    // Add Row and Cell for FormToolBar
    TableRow spRowFormToolBar = new TableRow();
    spTableMain.Rows.Add(spRowFormToolBar);
    TableCell spCellFormToolBar = new TableCell();
    spRowFormToolBar.Cells.Add(spCellFormToolBar);
    spCellFormToolBar.Width = new Unit(100, UnitType.Percentage);

    // Assign the class-level formToolBar field declared earlier
    formToolBar = new FormToolBar();
    formToolBar.ID = "formToolBar";
    formToolBar.ListId = ListId;
    if (_ControlMode == SPControlMode.New)
        formToolBar.RenderContext = SPContext.GetContext(web);
    else
    {
        formToolBar.RenderContext = SPContext.GetContext(this.Context, ItemId, ListId, web);
        formToolBar.ItemContext = SPContext.GetContext(this.Context, ItemId, ListId, web);
        formToolBar.ItemId = ItemId;
    }
    formToolBar.ControlMode = _ControlMode;
    formToolBar.EnableViewState = true;

    spCellFormToolBar.Controls.Add(formToolBar);

The ControlMode property takes care of which buttons are displayed on the toolbar: e.g. "Attach File" and "Delete Item" on new/edit forms, and "New Item", "Edit Item", "Delete Item", "Manage Permissions" etc. on display forms. Now add the main section, which contains the form field controls:

    //Add a blank row with a height of 5px to render space between the toolbar table and the control table
    TableRow spRowLine1 = new TableRow();
    spTableMain.Rows.Add(spRowLine1);
    TableCell spCellLine1 = new TableCell();
    spRowLine1.Cells.Add(spCellLine1);
    spCellLine1.Height = new Unit(5, UnitType.Pixel);
    spCellLine1.Controls.Add(new LiteralControl("<IMG SRC='/_layouts/images/blank.gif' width=1 height=1 alt=''>"));

    //Add Row and Cell for the Form Controls section
    TableRow spRowCntl = new TableRow();
    spTableMain.Rows.Add(spRowCntl);
    TableCell spCellCntl = new TableCell();

    //Create the form field controls; they are added to the class-level table "spTableCntl"
    CreateFieldControls(web);
    //Add the table containing all form controls to the page
    spRowCntl.Cells.Add(spCellCntl);
    spCellCntl.Width = new Unit(100, UnitType.Percentage);
    spCellCntl.Controls.Add(spTableCntl);

    //Add a separator line below the form controls
    TableRow spRowLine2 = new TableRow();
    spTableMain.Rows.Add(spRowLine2);
    TableCell spCellLine2 = new TableCell();
    spRowLine2.Cells.Add(spCellLine2);
    spCellLine2.CssClass = "ms-formline";
    spCellLine2.Controls.Add(new LiteralControl("<IMG SRC='/_layouts/images/blank.gif' width=1 height=1 alt=''>"));

    // Add another blank row with a height of 5px
    TableRow spRowLine3 = new TableRow();
    spTableMain.Rows.Add(spRowLine3);
    TableCell spCellLine3 = new TableCell();
    spRowLine3.Cells.Add(spCellLine3);
    spCellLine3.Height = new Unit(5, UnitType.Pixel);
    spCellLine3.Controls.Add(new LiteralControl("<IMG SRC='/_layouts/images/blank.gif' width=1 height=1 alt=''>"));

You can also add a bottom toolbar to get the same look and feel as the OOB forms. I am not walking through it here, as the blog would get much lengthier, but a minimal sketch follows.
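This bottom toolbar sketch is not in the original post; it simply mirrors the top toolbar built earlier, so the SaveButton wiring shown there would be repeated here in the same way.

    // Hedged sketch of a bottom toolbar, mirroring the top one (assumed, not from the post)
    TableRow spRowBottomToolBar = new TableRow();
    spTableMain.Rows.Add(spRowBottomToolBar);
    TableCell spCellBottomToolBar = new TableCell();
    spRowBottomToolBar.Cells.Add(spCellBottomToolBar);
    spCellBottomToolBar.Width = new Unit(100, UnitType.Percentage);

    ToolBar toolBarBottom = (ToolBar)Page.LoadControl("/_controltemplates/ToolBar.ascx");
    toolBarBottom.CssClass = "ms-formtoolbar";
    toolBarBottom.ID = "toolBarTblBottom";
    toolBarBottom.RightButtons.SeparatorHtml = "<td class=ms-separator> </td>";

    // Repeat the SaveButton block from the top toolbar here when not in Display mode,
    // then close with a GoBackButton exactly as above.
    GoBackButton goBackButtonBottom = new GoBackButton();
    toolBarBottom.RightButtons.Controls.Add(goBackButtonBottom);
    goBackButtonBottom.ControlMode = SPControlMode.Display;

    spCellBottomToolBar.Controls.Add(toolBarBottom);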
At last, you need to write the following lines to allow unsafe updates for the Save and Delete buttons:

    // Allow unsafe updates on the web for the save and delete buttons
    if (this.Page.IsPostBack && this.Page.Request["__EventTarget"] != null
        && (this.Page.Request["__EventTarget"].Contains("IOSaveItem")
        || this.Page.Request["__EventTarget"].Contains("IODeleteItem")))
    {
        SPContext.Current.Web.AllowUnsafeUpdates = true;
    }

So that's all; we have finished writing the custom code that builds the form. But the most important piece has been skipped: in the code above I call "CreateFieldControls(web);" to add the SharePoint field controls to the page. Let's look at the implementation of that function:

    private void CreateFieldControls(SPWeb pWeb)
    {
        SPList listMain = pWeb.Lists[ListId];
        SPFieldCollection fields = listMain.Fields;

        //Main table to render all fields
        spTableCntl = new Table();
        spTableCntl.BorderWidth = new Unit(0);
        spTableCntl.CellPadding = 0;
        spTableCntl.CellSpacing = 0;
        spTableCntl.Width = new Unit(100, UnitType.Percentage);
        spTableCntl.CssClass = "ms-formtable";

        SPContext controlContext = SPContext.GetContext(this.Context, ItemId, ListId, pWeb);

        foreach (SPField listField in fields)
        {
            string fieldDisplayName = listField.Title;
            string fieldInternalName = listField.InternalName;

            //Skip system and hidden fields
            if (listField.Hidden || listField.ShowInVersionHistory == false)
                continue;

            //Skip read-only fields on New and Edit forms
            if (_ControlMode != SPControlMode.Display && listField.ReadOnlyField == true)
                continue;

            FieldLabel fieldLabel = new FieldLabel();
            fieldLabel.FieldName = listField.InternalName;
            fieldLabel.ListId = ListId;

            BaseFieldControl fieldControl = listField.FieldRenderingControl;
            fieldControl.ListId = ListId;
            //Assign a unique id using the field's internal name
            fieldControl.ID = string.Format("Field_{0}", fieldInternalName);
            fieldControl.EnableViewState = true;

            //Assign the control mode
            fieldLabel.ControlMode = _ControlMode;
            fieldControl.ControlMode = _ControlMode;
            switch (_ControlMode)
            {
                case SPControlMode.New:
                    fieldLabel.RenderContext = SPContext.GetContext(pWeb);
                    fieldControl.RenderContext = SPContext.GetContext(pWeb);
                    break;
                case SPControlMode.Edit:
                case SPControlMode.Display:
                    fieldLabel.RenderContext = controlContext;
                    fieldLabel.ItemContext = controlContext;
                    fieldLabel.ItemId = ItemId;

                    fieldControl.RenderContext = controlContext;
                    fieldControl.ItemContext = controlContext;
                    fieldControl.ItemId = ItemId;
                    break;
            }

            //Add a row to display the field
            TableRow spCntlRow = new TableRow();
            spTableCntl.Rows.Add(spCntlRow);

            //Add the cells containing the field label and control
            TableCell spCellLabel = new TableCell();
            spCellLabel.Width = new Unit(30, UnitType.Percentage);
            spCellLabel.CssClass = "ms-formlabel";
            spCntlRow.Cells.Add(spCellLabel);
            TableCell spCellControl = new TableCell();
            spCellControl.Width = new Unit(70, UnitType.Percentage);
            spCellControl.CssClass = "ms-formbody";
            spCntlRow.Cells.Add(spCellControl);

            //Add the controls to the table cells
            spCellLabel.Controls.Add(fieldLabel);
            spCellControl.Controls.Add(fieldControl);

            //Add the field description, if any, in New and Edit mode
            if (_ControlMode != SPControlMode.Display && listField.Description != string.Empty)
            {
                FieldDescription fieldDesc = new FieldDescription();
                fieldDesc.FieldName = fieldInternalName;
                fieldDesc.ListId = ListId;
                spCellControl.Controls.Add(fieldDesc);
            }

            //Disable Name (Title) in Edit mode
            if (_ControlMode == SPControlMode.Edit && fieldDisplayName == "Name")
            {
                TextBox txtTitlefield = (TextBox)fieldControl.Controls[0].FindControl("TextField");
                txtTitlefield.Enabled = false;
            }
        }
        fields = null;
    }

First of all, I declare the list object and collect the list fields in a field collection object called "fields". Then I add a table as the container for all controls and assign it the CSS class "ms-formtable" so that it gets SharePoint's consistent look and feel. Now it's time to iterate through all the fields and add only the ones we need. We don't want hidden or system fields, and we don't want to display read-only fields on new and edit forms. The following lines do this job:

            //Skip system and hidden fields
            if (listField.Hidden || listField.ShowInVersionHistory == false)
                continue;

            //Skip read-only fields on New and Edit forms
            if (_ControlMode != SPControlMode.Display && listField.ReadOnlyField == true)
                continue;

Let's move on to the next lines of code:

            FieldLabel fieldLabel = new FieldLabel();
            fieldLabel.FieldName = listField.InternalName;
            fieldLabel.ListId = ListId;

            BaseFieldControl fieldControl = listField.FieldRenderingControl;
            fieldControl.ListId = ListId;
            //Assign a unique id using the field's internal name
            fieldControl.ID = string.Format("Field_{0}", fieldInternalName);
            fieldControl.EnableViewState = true;

            //Assign the control mode
            fieldLabel.ControlMode = _ControlMode;
            fieldControl.ControlMode = _ControlMode;

We use the "FieldLabel" control to display the field title. The advantage of using FieldLabel is that SharePoint automatically adds a red star beside the label if the field is mandatory. Here is the most important part to understand: the "BaseFieldControl". It renders the appropriate web control for the type of the field. For example, a single line of text renders a TextBox, while a lookup renders a dropdown. Additionally, the "ControlMode" property tells the control which mode (Display/Edit/New) to render in. In Display mode it renders a label with the field value; in Edit mode, the respective control populated with the item's value; and in New mode, the respective control with an empty value. Please note that a dropdown is not always rendered for a Lookup or Choice field; you need to understand which controls are rendered for which list fields.
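If you are ever unsure which control a given field type will render, a quick diagnostic is to dump the control type per field. This sketch is not from the original post; it assumes the listMain object from CreateFieldControls and uses only the FieldRenderingControl API shown above:

            // Hypothetical diagnostic (not in the original post): logs which
            // web control SharePoint renders for each field in the list.
            foreach (SPField field in listMain.Fields)
            {
                BaseFieldControl ctl = field.FieldRenderingControl;
                if (ctl != null)
                {
                    // e.g. "Title (Text) -> TextField", "Manager (User) -> UserField"
                    System.Diagnostics.Debug.WriteLine(string.Format("{0} ({1}) -> {2}",
                        field.Title, field.TypeAsString, ctl.GetType().Name));
                }
            }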
Coming back to the walkthrough: I am planning to write a separate blog on exactly that mapping, which I hope to publish very soon. Moreover, we also need to assign the list-field-specific properties, such as List Id and Field Name, to identify which SharePoint list field is attached to the control.

            switch (_ControlMode)
            {
                case SPControlMode.New:
                    fieldLabel.RenderContext = SPContext.GetContext(pWeb);
                    fieldControl.RenderContext = SPContext.GetContext(pWeb);
                    break;
                case SPControlMode.Edit:
                case SPControlMode.Display:
                    fieldLabel.RenderContext = controlContext;
                    fieldLabel.ItemContext = controlContext;
                    fieldLabel.ItemId = ItemId;

                    fieldControl.RenderContext = controlContext;
                    fieldControl.ItemContext = controlContext;
                    fieldControl.ItemId = ItemId;
                    break;
            }

Here I have separate code for New mode versus Edit/Display mode, because there is no Item Id to assign in New mode. We also need to set the CSS classes on the cells containing the label and the controls so that they render with the SharePoint theme:

            spCellLabel.CssClass = "ms-formlabel";
            spCellControl.CssClass = "ms-formbody";

The "FieldDescription" control adds the field description, if there is one.

Now it's time for some more customization:

            //Disable Name (Title) in Edit mode
            if (_ControlMode == SPControlMode.Edit && fieldDisplayName == "Name")
            {
                TextBox txtTitlefield = (TextBox)fieldControl.Controls[0].FindControl("TextField");
                txtTitlefield.Enabled = false;
            }

The code above disables the Title field in Edit mode. You can add more code here to achieve whatever customization your requirements call for. Some examples:

            //Add a postback event on a user field to auto-populate a dependent field
            //in New mode, and disable the field in Edit mode
            if (_ControlMode != SPControlMode.Display && fieldDisplayName == "Manager")
            {
                if (fieldControl.Controls[0].FindControl("UserField") != null)
                {
                    PeopleEditor pplEditor = (PeopleEditor)fieldControl.Controls[0].FindControl("UserField");
                    if (_ControlMode == SPControlMode.New)
                        pplEditor.AutoPostBack = true;
                    else
                        pplEditor.Enabled = false;
                }
            }

            //Add a JavaScript event on a dropdown field. Don't forget to add the JavaScript function to the page.
            if (_ControlMode == SPControlMode.Edit && fieldDisplayName == "Designation")
            {
                DropDownList ddlCategory = (DropDownList)fieldControl.Controls[0];
                ddlCategory.Attributes.Add("onchange", string.Format("javascript:DropdownChangeEvent('{0}');return false;", ddlCategory.ClientID));
            }

Following are screenshots of my Custom ListForm WebPart. Let's play a game: open your OOB list forms in SharePoint, compare them with these screens, and find the differences.

DispForm.aspx:

EditForm.aspx:

NewForm.aspx:

Enjoy the SharePoint Soup!!!
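One loose end worth closing: the walkthrough calls a ShowError helper in CreateChildControls but never shows it. This is a hypothetical sketch, assuming the helper simply surfaces the message through the ltAlertMessage literal declared at the top of the class:

    // Hypothetical sketch -- the original post never shows ShowError.
    private void ShowError(Exception ex, string methodName)
    {
        if (ltAlertMessage == null)
        {
            ltAlertMessage = new Literal();
            this.Controls.Add(ltAlertMessage);
        }
        // Render the error in SharePoint's validation style
        ltAlertMessage.Text = string.Format(
            "<div class='ms-formvalidation'>Error in {0}: {1}</div>",
            methodName, SPEncode.HtmlEncode(ex.Message));
    }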

    Read the article

  • Book review: Peopleware: Productive Projects and Teams

    - by DigiMortal
       Peopleware by Tom DeMarco and Timothy Lister is a golden classic that can be considered mandatory reading for software project managers, team leads, higher-level management and board members of software companies. If you make decisions about people, then you cannot miss this book. If you are already good at managing developers, this book can make you even better; you will certainly learn new things about successful development teams. Why Peopleware? Peopleware gives you very good hints about how to build a working environment for project teams where people can really do their work. The book also covers team building, which is equally important reading. As a software developer I found practically all the points in this book to be accurate and valid. Many times I have found myself thinking about the same things, and Peopleware made me more confident in my opinions. Peopleware also covers time management and planning topics that help you use developers' time far more effectively by minimizing the interruptions from phone calls, pointless meetings and i-want-to-know-what-are-you-doing-right-now questions from managers who don't write code anyway. I think that if you follow the suggestions given in Peopleware, your developers will be very happy. I suggest you also read another great book: Death March by Edward Yourdon. Death March effectively describes what happens when the good advice given in Peopleware is totally ignored or, worse yet, people are treated in exactly the opposite way. I consider Death March a golden classic as well, and I strongly recommend you read that book too.

Table of Contents

Acknowledgments
Preface to the Second Edition
Preface to the First Edition

Part I: Managing the Human Resource
Chapter 1: Somewhere Today, a Project Is Failing
Chapter 2: Make a Cheeseburger, Sell a Cheeseburger
Chapter 3: Vienna Waits for You
Chapter 4: Quality - If Time Permits
Chapter 5: Parkinson's Law Revisited
Chapter 6: Laetrile

Part II: The Office Environment
Chapter 7: The Furniture Police
Chapter 8: "You Never Get Anything Done Around Here Between 9 and 5"
Chapter 9: Saving Money on Space
Intermezzo: Productivity Measurement and Unidentified Flying Objects
Chapter 10: Brain Time Versus Body Time
Chapter 11: The Telephone
Chapter 12: Bring Back the Door
Chapter 13: Taking Umbrella Steps

Part III: The Right People
Chapter 14: The Hornblower Factor
Chapter 15: Hiring a Juggler
Chapter 16: Happy to Be Here
Chapter 17: The Self-Healing System

Part IV: Growing Productive Teams
Chapter 18: The Whole Is Greater Than the Sum of the Parts
Chapter 19: The Black Team
Chapter 20: Teamicide
Chapter 21: A Spaghetti Dinner
Chapter 22: Open Kimono
Chapter 23: Chemistry for Team Formation

Part V: It's Supposed to Be Fun to Work Here
Chapter 24: Chaos and Order
Chapter 25: Free Electrons
Chapter 26: Holgar Dansk

Part VI: Son of Peopleware
Chapter 27: Teamicide, Revisited
Chapter 28: Competition
Chapter 29: Process Improvement Programs
Chapter 30: Making Change Possible
Chapter 31: Human Capital
Chapter 32: Organizational Learning
Chapter 33: The Ultimate Management Sin Is
Chapter 34: The Making of Community

Notes
Bibliography
Index
About the Authors

    Read the article

  • Cannot boot into system after deleting partition

    - by Clayton
    Okay... so this was kind of a stupid thing for me to do, now that I think back on it. I was experiencing a ton of lag and less usable memory than expected after installing Ubuntu 12.04. So, after remembering I had installed multiple server versions of Ubuntu 12.04 by mistake, I went into Disk Management and proceeded to delete each and every one. Everything went fine. Up until this week, I had not experienced any problems. But starting yesterday I began to get lag just as I had before, and nothing fixed the problem. I decided to remove the Ubuntu partition, since I was also experiencing a visual error when given the option to select an OS to boot (the screen doesn't come up at all, and I received a monitor resolution error instead, but could still access both Windows and Ubuntu via the arrow keys). After deleting the Ubuntu partition, so that I could see if running just Windows would fix the problem, I proceeded with what I was doing, installing a few programs that were not tied to my predicament in any way. Upon rebooting my desktop, however, I received the following error:

error: unknown filesystem.
grub rescue>

Hoping I could boot into Ubuntu via a pendrive and possibly back up my important files and wipe the hard drive to start fresh, I installed Ubuntu 13.04, but even that does not boot. Instead, I get this message on a terminal screen:

SYSLINUX 4.06. EDD 4.06-pre1 Copyright (C) 1994-2011 H. Peter Anvin et al
ERROR: No configuration file found
No DEFAULT or UI configuration directive found!
boot:

So, more or less, my desktop is screwed. I need to be able to get at the files inside because of my job as an artist, as well as retrieve the documents for my stories stored on Windows. Once I succeed in solving this once and for all, I know for a fact I will stick to Ubuntu only, and install whatever is required to run any Windows applications I used to use or need to use. I would rather not reformat the hard drive; if I have to, it is a last resort. And I doubt I can use a Windows Recovery Disk to get my files back, as my mom has thrown out a lot of the installation disks and paperwork I would need to even follow through with that. :\ Keep in mind that I am a novice/newbie when it comes to Linux, but I am hoping to become better at it as time goes by. I appreciate any help you guys can give me. This will probably be the last time I attempt to do anything that could risk the well-being of my PC. (I've also looked through various questions on the site and tested a lot of the solutions. None seem to have worked.)

    Read the article

  • SharePoint Navigation - Setting the Audience Property of an SPNavigationNode

    - by jhallal
    In this tip/trick I will demonstrate a way of setting the target audience of an SPNavigationNode in code. You might say setting this property in the SharePoint UI is quite simple, but if you want to set it in code, you will not find any straightforward property or method out of the box in the SharePoint object model that does the job. The SPNavigationNode class has a property called "Properties", which allows us to add custom properties to the node. One of these properties is "Audience", which is used for setting the target audience on the navigation node.

    using (SPSite site = new SPSite("URL of your SharePoint Site"))
    {
        using (SPWeb web = site.OpenWeb())
        {
            SPNavigationNode navNode = web.Navigation.GlobalNodes[0];
            if (navNode.Properties.Contains("Audience"))
            {
                navNode.Properties["Audience"] = "Users or Groups";
            }
            else
            {
                navNode.Properties.Add("Audience", "Users or Groups");
            }
            navNode.Update();
        }
    }
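The snippet above only touches GlobalNodes[0]. If you need the same audience on every global node, a hedged sketch (the same API, just looped; the audience value is a placeholder) might look like this:

    using (SPSite site = new SPSite("URL of your SharePoint Site"))
    {
        using (SPWeb web = site.OpenWeb())
        {
            foreach (SPNavigationNode navNode in web.Navigation.GlobalNodes)
            {
                // The Properties bag indexer adds the key if it is missing
                navNode.Properties["Audience"] = "Users or Groups";
                navNode.Update();
            }
        }
    }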

    Read the article

  • Cannot get Virtualbox to install properly on Ubuntu 12.04

    - by lopac1029
    I cannot get Virtualbox to install properly on my 12.04. I first went with a manual install of the .deb from the old builds section of the Virtualbox page. That .deb opened up the Software Center and installed. Then I got this error:

VT-x/AMD-V hardware acceleration is not available on your system. Your 64-bit guest will fail to detect a 64-bit CPU and will not be able to boot.

which I can only assume was due to my Ubuntu version being 32-bit (System Details - Overview - OS type: 32-bit, right?). So I followed these instructions to remove the .deb manually, restarted my laptop, and then FOUND the actual Virtualbox package in the Software Center and installed from that (assuming it would give me the correct version for my system). So after all that (and then some), I'm still getting the same error when I connect to my new job's project in Virtualbox. Can anyone point me in the right direction of what to do here? This is the first time I've ever worked with Virtualbox, and no one at this company is using Ubuntu, so I'm on my own here.

EDIT: Here is the direct output from running the 2 suggested commands:

Inspiron-1750-brick:~ $ lscpu
Architecture:          i686
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                2
On-line CPU(s) list:   0,1
Thread(s) per core:    1
Core(s) per socket:    2
Socket(s):             1
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 23
Stepping:              10
CPU MHz:               2100.000
BogoMIPS:              4189.45
L1d cache:             32K
L1i cache:             32K
L2 cache:              2048K

Inspiron-1750-brick:~ $ cat /proc/cpuinfo
processor       : 0
vendor_id       : GenuineIntel
cpu family      : 6
model           : 23
model name      : Intel(R) Core(TM)2 Duo CPU T6500 @ 2.10GHz
stepping        : 10
microcode       : 0xa07
cpu MHz         : 1200.000
cache size      : 2048 KB
physical id     : 0
siblings        : 2
core id         : 0
cpu cores       : 2
apicid          : 0
initial apicid  : 0
fdiv_bug        : no
hlt_bug         : no
f00f_bug        : no
coma_bug        : no
fpu             : yes
fpu_exception   : yes
cpuid level     : 13
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe nx lm constant_tsc arch_perfmon pebs bts aperfmperf pni dtes64 monitor ds_cpl est tm2 ssse3 cx16 xtpr pdcm sse4_1 xsave lahf_lm dtherm
bogomips        : 4189.80
clflush size    : 64
cache_alignment : 64
address sizes   : 36 bits physical, 48 bits virtual
power management:

processor       : 1
vendor_id       : GenuineIntel
cpu family      : 6
model           : 23
model name      : Intel(R) Core(TM)2 Duo CPU T6500 @ 2.10GHz
stepping        : 10
microcode       : 0xa07
cpu MHz         : 1200.000
cache size      : 2048 KB
physical id     : 0
siblings        : 2
core id         : 1
cpu cores       : 2
apicid          : 1
initial apicid  : 1
fdiv_bug        : no
hlt_bug         : no
f00f_bug        : no
coma_bug        : no
fpu             : yes
fpu_exception   : yes
cpuid level     : 13
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe nx lm constant_tsc arch_perfmon pebs bts aperfmperf pni dtes64 monitor ds_cpl est tm2 ssse3 cx16 xtpr pdcm sse4_1 xsave lahf_lm dtherm
bogomips        : 4189.45
clflush size    : 64
cache_alignment : 64
address sizes   : 36 bits physical, 48 bits virtual
power management:

    Read the article

< Previous Page | 216 217 218 219 220 221 222 223 224 225 226 227  | Next Page >