Search Results

Search found 10384 results on 416 pages for 'plan cache'.

  • SQLAuthority News – Random Thoughts and Random Ideas

    - by pinaldave
    There are days when I keep on wondering about SQL, and even my life overall. Today is Saturday, so I decided to write about SQL Server. Just like any other morning, I woke up at 5 and opened my blog editor. I usually do not open Twitter or Facebook when I am planning to focus on my work, as they are little distractions for me. But today I opened my Twitter account and came across a very interesting quote from a friend: ‘Can I expect you to be different today?’ Well, I think it was a very powerful quote for me to read first thing on a new day. This quote froze me for a while and made me think, “Do I really want to write about an SQL Server tip, or something different?” After a little thinking, I realized that today I would go on and write something different. I am going to write about a few of the ideas and thoughts I had yesterday. After writing all these, I realized that if I am thinking so much in a day, and if I write a blog post of my random musings of the week or month, it can be very long (and boring). Here are some of my random thoughts I’d like to share with you:

    - When the airplane lands, why does everybody get up and try to rush out when their luggage will probably be coming 20-30 minutes later?
    - I really do not like being asked this question: “SQL Server is not using the optimal index which I just created – how can I force it?” I am not going to elaborate on this statement, but you are allowed to in the comment section.
    - Why do some people wish Good Morning even when they meet us after 4 PM?
    - Can I optimize a query so much that it gives me a result before I execute it?
    - Is it corruption when someone does their personal household work at the office?
    - The lane where I drive is always the slowest lane.
    - Why waste time on correcting others when there are a lot of pending improvements for ourselves?
    - If I had to get a tattoo, which SQL Server Execution Plan symbol should I get?
    - Why do I reach the office so early that the coffee machine is still running its daily cleaning job?
    - Why does every laptop have the ‘Page Up’ key at a different location on the keyboard?
    - While I like color movies, I really appreciate black and white photographs.
    - I do not appreciate statements like, “If I receive your books in PDF, I will spread them to many people to give you much greater exposure. So would you please send them to me ASAP?”
    - Do not ask me, “Why does the database grow back after shrinking it every day?” I suggest you use “Search this blog” for the explanation.
    - Petrol prices are currently at INR 74. I hope the rate remains there.

    Let me ask you the same question which started my day today: “Can I expect you to be different today?”

    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: About Me, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

  • SQL SERVER – Select the Most Optimal Backup Methods for Server

    - by pinaldave
    Backup and Restore are very interesting concepts, and one should be very familiar with them if dealing with a production database. One never knows when a natural disaster or user error will surface, and the first thing everybody wants is to get back to the point in time when things were all fine. In this article I have attempted to answer a few of the common questions related to backup methodology.

    How to Select a SQL Server Backup Type
    In order to select a proper SQL Server backup type, a SQL Server administrator needs to clearly understand the difference between the major backup types. Since a picture is worth a thousand words, let me offer it to you below.

    Select a Recovery Model First
    The very first question that you should ask yourself is: Can I afford to lose at least a little (15 min, 1 hour, 1 day) worth of data? Resist the temptation to save it all, as that comes with overhead – the majority of businesses outside finance can actually afford to lose a bit of data. If your answer is YES, I can afford to lose some data – select the SIMPLE (default) recovery model in the properties of your database; otherwise you need to select the FULL recovery model. The additional advantage of the Full recovery model is that it allows you to restore the data to a specific point in time, versus only to the last backup time in the Simple recovery model, but that exceeds the scope of this article.

    Backups in SIMPLE Recovery Model
    In the SIMPLE recovery model you can select to do just Full backups, or Full + Differential.

    Full Backup
    This is the simplest type of backup that contains all information needed to restore the database, and it should be your first choice. It is often sufficient for small databases, but note that it makes a big impact on the performance of your database.

    Full + Differential Backup
    After a Full backup, a Differential backup picks up all of the changes since the last Full backup. This means if you made Full, Diff, Diff backups – the last Diff backup contains all of the changes and you don’t need the previous Differential backup. A Differential backup is obviously smaller and carries less performance overhead.

    Backups in FULL Recovery Model
    In the FULL recovery model you can select Full + Transaction Log, or Full + Differential + Transaction Log backups. You have to create Transaction Log backups, because that is when the log is truncated. Otherwise your Transaction Log will grow uncontrollably.

    Full + Transaction Log Backup
    You would always need to perform a Full backup first, then a series of Transaction Log backups. Note that (in contrast to Differential) you need ALL Transaction Log backups since the last Full or Diff backup to properly restore. Transaction Log backups have the smallest performance overhead and can be performed often.

    Full + Differential + Transaction Log Backup
    If you want to ease the performance overhead on your server, you can replace some of the Full backups in the previous scenario with Differentials. Your restore scenario would start from the Full, then the last Differential, then all of the remaining Transaction Log backups.

    Typical Backup Scenarios
    You may say, “Well, it is all nice – give me the examples now.” As you may already know, my favorite SQL backup software is SQLBackupAndFTP. If you go to the Advanced Backup Schedule form in this program and click the “Load a typical backup plan…” link, it will give you these scenarios that I think are quite common – see the image below.
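    To make the restore chains above concrete, here is a minimal T-SQL sketch of a FULL recovery model backup sequence. The database name and file paths are placeholders invented for illustration, not part of the original article:

        -- Weekly full backup: the starting point of every restore chain
        BACKUP DATABASE SalesDB TO DISK = N'D:\Backup\SalesDB_Full.bak';

        -- Daily differential: everything changed since the last FULL backup
        BACKUP DATABASE SalesDB TO DISK = N'D:\Backup\SalesDB_Diff.bak' WITH DIFFERENTIAL;

        -- Frequent log backups: truncate the log and enable point-in-time restore
        BACKUP LOG SalesDB TO DISK = N'D:\Backup\SalesDB_Log1.trn';
        BACKUP LOG SalesDB TO DISK = N'D:\Backup\SalesDB_Log2.trn';

    To restore, you would apply the Full backup first (WITH NORECOVERY), then the last Differential, then every remaining log backup in sequence.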
    The Simplest Way to Schedule SQL Backups
    I hate to repeat myself, but backup scheduling in SQL Agent leaves a lot to be desired. I do not know a simpler way to schedule your SQL Server backups than in SQLBackupAndFTP – see the image below. The whole backup scheduling with compression, encryption and upload to a Network Folder / HDD / NAS Drive / FTP / Dropbox / Google Drive / Amazon S3 takes just a few minutes – see my previous post for the review.

    Final Words
    This post offered an explanation of the major backup types only. For more complicated scenarios, or to research other options, go to MSDN as usual. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Backup and Restore, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

  • Upgrades in 5 Easy Pieces

    - by Anne R.
    Even though there are a few select tasks that I have to do once or twice a year, I can’t remember how to do them! Or where to find the bits and pieces to complete the task. So I love it when someone consolidates everything under one spot. That’s what the CRM On Demand team has done with the upgrade information. Specifically, they have:

    - Provided a “one-stop” area for managing upgrades at your company.
    - Broken down the upgrade process into 5 (yes, 5) steps.
    - Explained when and how to perform each step, with dates specific to your pod.
    - Included details about each step, visible by expanding the step.
    - Translated the steps into 11 languages.
    - Added a list of release-specific resources with links from the page.

    Now, just head for the Training and Support portal, click the Release Info tab, and walk through the “5 Essential Steps to a Successful Upgrade.” Before you continue, though, select your language from the drop-down list on the Release Info page. CRM On Demand now has the upgrade steps translated into 11 languages. On the Step page, you can expand each section in sequence and follow the more detailed instructions that appear. This will ensure that you’ve covered all your bases for each upgrade. Here’s a shortened version of the information that you’ll find:

    1. Verify your Primary Contact Information. Have you checked your primary contact information to make sure you’re being notified of all upgrade information? Or do you want more users to receive upgrade announcements? This section provides you with the navigation path to do that in CRM On Demand.
    2. Review your Key Upgrade Dates. If you expand this step, a nice table appears with your critical dates for the various milestones. IMPORTANT: When your CRM On Demand pod has been officially added to the upgrade schedule, closer to the release date itself, this table will display your specific timetable.
    3. Migrate your Customizations from the Staging Environment before the Snapshot Date. Oracle refreshes the Staging data with a copy of your Production data made on the Production Snapshot Date. So this section lists considerations relevant to this step. It also reminds you of the 2-week period when you should not be making any changes in your Staging environment.
    4. Conduct your Upgrade Validation on the Staging Environment. When the Customer Validation Testing period begins, you need to log in to your Staging Environment to validate that your key business processes and customizations continue to behave as expected. If your company utilizes Web Services, Web Links, Web Applets or Workflow, focus on testing these first. You generally have about two weeks for testing. If you run into problems during this time, follow the instructions shown in this section for logging a service request. It describes exactly how to fill out the fields in the SR for the fastest resolution.
    5. Conduct "White Glove" Testing in your Upgraded Production Environment. Before users start using the upgrade, you should access a few tabs and reports. Doing this actually warms up the cache so that frequently used pages and reports will come up at normal speed on Monday morning, when users log in to the upgraded system. Resources listed under this step help you in further preparing for the upgrade. Now there’s also a new Documentation section on the right with links to these release-specific resources.
    “Very nice,” I commented when discussing these improvements with the “responsible party.” She confirmed that, yes, they tried to consolidate the upgrade information, translate it for better communication, simplify it into 5 easy pieces, and drive admins responsible for handling upgrades to this one site instead of sending out elaborate emails. Yes, I just love it when someone practically reaches out and holds my hand through a process. Next best thing to a wizard!

  • Windows 8 Camp – Ways to Prepare

    - by Lori Lalonde
    When Windows 8 was announced at the BUILD conference back in September, it created quite a buzz among the developer community. By the spring of 2012, Windows 8 Developer Camps started popping up everywhere imaginable. I received a lot of questions from CTTDNUG members about whether or not we would be hosting one locally. If you recall my post about the Windows Phone/Azure Developer Workshop that CTTDNUG hosted back in March, you’ll remember that the biggest hurdle to overcome when planning this type of event was finding the right venue. It took some time, but I finally found a venue that was available and provided the prerequisites needed to ensure this camp is a success. I am very excited that CTTDNUG will be hosting a Windows 8 Camp this summer in the Kitchener/Waterloo area. In fact, it’s coming up in less than 2 weeks. Clearly other developers are excited as well, because our registration numbers show that the event is already 70% full! On top of that, I was fortunate enough to also book two well-known evangelists to present and teach at this full-day developer camp: Andrei Marukovich and Atley Hunter. This was the icing on the cake. With the content provided by Microsoft, and two local experts that live and breathe Windows 8 development, I know that I, along with the other developers attending this event, will have the opportunity to maximize our learning potential and hit the ground running. If you plan on attending a Windows 8 Developer Camp soon, and want to ensure you get the most “bang for your buck” (figuratively speaking, since these camps are free), there are some things you can do to prepare before the big day:

    1) Install the prerequisites on your own device before the big day
    I can’t stress this enough. Otherwise, you will be spending valuable time during the hands-on period downloading and installing what is needed, rather than digging into the development and using that time to ask the experts on hand about programming challenges, issues, and questions you may have with respect to your development. Prerequisites:
    - Windows 8 Release Preview
    - Visual Studio 2012 RC
    - Windows 8 SDK Samples

    2) Purchase, download, and read Charles Petzold’s newest book: Programming Windows 6th Edition
    This is a great introduction to the type of content you will be learning about during the camp. Doing some light reading beforehand might raise some questions about the concepts discussed in the book, which will give you the opportunity to write them down and bring them with you to the camp. The experts on hand will be able to answer them for you.

    3) Make use of the freebies that are available
    Telerik has recently released a preview of their RadControls for Metro. You can sign up to receive a license code that gives you access to install the preview for free and start playing around with it. Syncfusion also offers a free download of their Metro Studio package, which is a collection of Metro style icons that you can customize and use in your own applications. Last but not least, once you’ve installed the Windows 8 Release Preview on your own device, go to the Windows 8 Store and download a handful of the free apps that are available. Testing out other Metro apps may give you ideas of what you can do in your own apps and let you analyze what features you like: application flow, type of animations used, concepts that were leveraged, how live tiles were used, etc. I hope you found these tips to be useful as you embark on a new development journey!
    Although this post focused on how to prepare for a Windows 8 camp, the same ideas apply to whichever developer camp/workshop/event you attend. Learning does not begin and end on the day of the event. Attending a developer camp is just one step of many to master whatever technology you are interested in. It is a continuous process, which is fully maximized when you do your homework beforehand, actively participate during, and follow up by putting what you learned into practice afterwards. Happy coding!

  • SEO non-English domain name advice

    - by Dominykas Mostauskis
    I'm starting a website that is meant for a non-English region, using an alphabet that is a bit different from that of English. The current plan is as follows. The website name, and the domain name, will be in the local language (not English); however, the domain name will be spelled in the English alphabet, while the website's title will be the same word(s), but spelled properly with accents. E.g.: 'www.litterat.fr' and 'Littérat'. Does the difference in character use between the domain name and the website name influence the site's SEO? Is it better, SEO-wise, to choose a name that can be spelled the same way in the English alphabet? From my experience, when searching online, the English alphabet is invariably used, no matter the language, so people will still be searching for 'litterat' (without accents and such).

    Edit: To sum up: Things have been said about IDN (Internationalized domain names). To make it simple, they are second-level domain names that contain language specific characters (LSP) (e.g. www.café.fr). Here you can check what top-level domains support what LSPs. Check initall's answer for more info on using LSPs in paths and queries. To answer my question about how and if search engines relate keywords spelled with and without language specific characters: Google can potentially tell that series and séries are the same keyword. However (most relevant for words that are spelled differently across languages and have different meanings, like séries), for Google to make the connection (or lack thereof) between e and é, it has to deduce two things:

    1. The language that you are searching in, i.e. the language of your query. You can specify it manually through Advanced search, or Google guesses it, sometimes. I presume it can guess it wrong too. The more keywords specific to this language you use, the higher Google's chance to guess the language.
    2. The language of the crawled document, against which the ASCII version of the word will be compared (in this example – series). Again, check initall's answer for how to help Google understand what language your document is in.

    Once it has that, it can tell whether or not these two spellings should be treated as the same keyword. Google has to understand that even though you're not using French (in this example) specific characters, you're searching in French. The reason why I used the French word séries in this example is that it demonstrates this very well. You have it in French and you have it in English without the accent. So if your search query is ambiguous, like our series, unless Google has something more to go on, it will presume that there's no correlation between your search and séries in French documents. If you augment your query to series romantiques (try it), Google will understand that you're searching in French, and among your results you'll see séries as well. But this does not always work; you should test it out with your keywords first. For example, if you search series francaises, it will associate francaises with françaises, but it will not associate series with séries. It depends on the words. Note: it is worth stressing that this problem is very relevant to words that, written in plain ASCII, might have some other meaning in other languages; it is less relevant to words that can, by a distinct margin, be attributed to just one language. Tip: I've noticed that sometimes even if my non-accented search query doesn't get associated with the properly spelled word in a document (especially if it's the title or an important keyword in the doc), it still comes up.
    I followed the link, did a Ctrl-F search for my non-accented search query and found nothing, then checked the meta-tags in the source and found the page's title in both accented and non-accented forms. So if you have meta-tags that can be spelled both with and without language specific characters, put in both (see the sketch below). Footnote: I hope this helps. If you have anything to add or correct, go ahead.
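    A minimal illustration of that meta-tag tip, assuming a hypothetical French-language page (the title and keyword values here are invented for the example):

        <head>
          <title>Littérat – séries littéraires</title>
          <!-- non-accented variants so plain-ASCII queries can still match -->
          <meta name="keywords" content="Littérat, Litterat, séries, series" />
        </head>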

  • Partner Blog: Hub City Media Introduces iPad Application for Oracle Identity Analytics

    - by Tanu Sood
    About the Writer: Steve Giovannetti is CTO of Hub City Media, Inc., a company that specializes in implementation and product development on the Oracle Identity Management platform. Recently, Hub City Media announced the introduction of the iPad application IdentityCert for Oracle Identity Analytics. This post explores the business use cases and application of IdentityCert.

    Hub City Media (HCM) has been deploying certification solutions based on Oracle Identity Analytics since it first appeared on the market as Vaau RBACx. With each deployment we've seen the same pattern repeat time and time again:

    1. Customers suffering under the weight of manual access certification regimens deploy Oracle Identity Analytics (OIA) for automated certification.
    2. OIA improves the frequency, speed, accuracy, and participation of certifications across the organization.
    3. Then the certifiers, typically managers and supervisors, ask, “Is there any easier way to do these certifications offline?”

    The current version of OIA has a way to export certification data to a spreadsheet. For some customers, we've leveraged this feature and combined it with some of our own custom code to provide a solution based on spreadsheet exports and imports. Customers export the certification to Microsoft Excel, complete it, and then import the spreadsheet to OIA. It worked well for offline certification, but if the user accidentally altered the format of the spreadsheet, the import of the data could fail. We were close to a solution but it wasn’t reliable.

    Over the past few years, we've seen the proliferation of Apple iOS devices, specifically the iPhone and iPad, in the enterprise. As our customers were asking for offline certification, we noticed that the same population of users traditionally responsible for access certification were early adopters of the iPad. The environment seemed ideal for us to create an iPad application to support offline certifications using Oracle Identity Analytics. That’s why we created IdentityCert™.

    IdentityCert allows users to view their analytics dashboard, complete user certifications, and resolve policy violations with OIA, from their iPads. The current IdentityCert analytics dashboard displays the same charts that are available in the Oracle Identity Analytics product. However, we plan to expand the number of available analytics in future releases. The main function of IdentityCert is user certification, which can be performed quickly and efficiently using a simple touch interface. Managers tap into a certification and use simple gestures to claim users and certify their access. Certifications can be securely downloaded to IdentityCert and can be completed with or without a network connection. The user can upload the completed certifications once they are connected to a cellular or wi-fi network.

    Oracle Identity Analytics can generate policy violation notifications based on detective scans of the identity warehouse or via preventative analysis of identity access requests. IdentityCert allows users to view all policy violations and resolve or delegate them to appropriate users. IdentityCert also analyzes the policy violation expression and produces more human-friendly descriptions of the policy violation, which improves the ability of users to resolve the violation. IdentityCert can be deployed quickly into a customer's environment.
    It is deployed with Hub City Media's ID Services to connect Oracle Identity Analytics securely with the iPad application.

    Oracle Identity Management 11g R2 is an important evolutionary release. Oracle's Identity Management suite has more characteristics of a cohesive platform. This platform provides an integrated set of identity services that can be used to protect, manage, and audit security within the enterprise. At HCM we take the platform concept a step further and see it as an opportunity to create unique solutions for Oracle Identity Management customers. IdentityCert is our commitment to this platform. You can download IdentityCert from the Apple iOS App Store today. It includes a demo dataset that you can use to explore the functions of the product without any server infrastructure. Download it. Give it a try. We would appreciate your interest and welcome any feedback.

    Resources:
    - Press Release: Hub City Media Introduces iPad Application IdentityCert™ for Oracle Identity Analytics
    - App Store Download: http://bit.ly/IdentityCert
    - Oracle Identity Governance Suite

  • WebLogic Server–Use the Execution Context ID in Applications–Lessons From Hansel and Gretel

    - by james.bayer
    I learned a neat trick this week. Don’t let your breadcrumbs go to waste like Hansel and Gretel did! Keep track of the code path, logs and errors for each request as they flow through the system. Earlier this week an OTN forum post in the WLS – General category by Oracle Ace John Stegeman asked how to retrieve the Execution Context ID so that it could be used on an error page that a user could provide to a help desk, or use to check with application administrators so they could look up what went wrong.

    What is the Execution Context ID (ECID)? Fusion Middleware injects an ECID as a request enters the system, and it stays with the request as it flows from Oracle HTTP Server to Oracle Web Cache to multiple WebLogic Servers to the Oracle Database. It’s a way to uniquely identify a request across tiers. According to the documentation: The value of the ECID is a unique identifier that can be used to correlate individual events as being part of the same request execution flow. For example, events that are identified as being related to a particular request typically have the same ECID value. The format of the ECID string itself is determined by an internal mechanism that is subject to change; therefore, you should not have or place any dependencies on that format.

    The novel idea I see in John's post is to extend this concept beyond the diagnostic information that is captured by Fusion Middleware. Why not also use this identifier in your logs and errors so you can correlate even more information together! Your logging might already identify the user, so why not identify the request so you can filter down even more. All you need to do inside of WebLogic Server to get hold of this information is invoke DiagnosticContextHelper:

        weblogic.diagnostics.context.DiagnosticContextHelper.getContextId()

    This class has other helpful methods to see other values tracked by the diagnostics framework too. This way I can see even more detail and get information across tiers. In performance profiling, this can be very handy to track down where time is being spent in code. I’ve blogged and made videos about this before. JRockit Flight Recorder can use the WLDF Diagnostic Volume in WLS 10.3.3+ to automatically capture and correlate lots of helpful information for each request, without installing any special agents and with the out-of-the-box JRockit and WLS settings! You can see here how information is displayed in JRockit Flight Recorder about a single request as it calls a Servlet, which calls an EJB, which gets a DB connection, which starts a transaction, etc. You can get timings around everything and even see the SQL that is used. http://download.oracle.com/docs/cd/E21764_01/web.1111/e13714/using_flightrecorder.htm#WLDFC480 Recent versions of the WLS console are also able to visualize this data, so it works with other JVMs besides JRockit when you turn on WLDF instrumentation.

    I wrote a little sample application that verified to myself that the ECID did actually cross JVM boundaries. I invoked a Servlet in one JVM, which acted as an EJB client to a Stateless Session Bean running in another JVM. Each call returned the same ECID. You need to turn on WLDF Instrumentation for this to work, otherwise the framework returns null. I’m glad John put me on to this API, as I have some interesting ideas on how to correlate some information together.
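    Here is a minimal sketch of the error-page idea, assuming a plain servlet; the servlet class, logger name, and error wiring are invented for illustration, but the DiagnosticContextHelper call is the one quoted above:

        import java.io.IOException;
        import java.util.logging.Logger;
        import javax.servlet.http.HttpServlet;
        import javax.servlet.http.HttpServletRequest;
        import javax.servlet.http.HttpServletResponse;
        import weblogic.diagnostics.context.DiagnosticContextHelper;

        public class OrderServlet extends HttpServlet {
            private static final Logger LOG = Logger.getLogger("app.orders");

            @Override
            protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                    throws IOException {
                // ECID injected by Fusion Middleware as the request entered the system;
                // returns null unless WLDF instrumentation is turned on
                String ecid = DiagnosticContextHelper.getContextId();
                try {
                    // ... application work ...
                    LOG.info("processed order, ecid=" + ecid);
                } catch (RuntimeException e) {
                    // the same ECID lands in the log and on the user-facing error page,
                    // so the help desk can correlate the two across tiers
                    LOG.severe("order failed, ecid=" + ecid + " - " + e);
                    resp.sendError(500, "Error reference: " + ecid);
                }
            }
        }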

  • MaxTotalSizeInBytes - Blind spots in Usage file and Web Analytics Reports

    - by Gino Abraham
    Originally posted on: http://geekswithblogs.net/GinoAbraham/archive/2013/10/28/maxtotalsizeinbytes---blind-spots-in-usage-file-and-web-analytics.aspx
    http://blogs.msdn.com/b/sharepoint_strategery/archive/2012/04/16/usage-file-and-web-analytics-reports-with-blind-spots.aspx

    In my previous post (Troubleshooting SharePoint 2010 Web Analytics), I referenced a problem that can occur when exceeding the daily partition size for the LoggingDB, which generates the ULS message “[Partition] has exceeded the max bytes”. Below, I wanted to provide some additional info on this particular issue and help identify some options if this occurs. As an aside, this post only applies if you are missing portions of Usage data – think blind spots on intermittent days, or user activity regularly sparse for the afternoon/evening. If this fits your scenario – read on. But if Usage logs are outright missing, go check out my Troubleshooting post first.

    Background on the problem:
    The LoggingDB database has a default maximum size of ~6GB. However, SharePoint evenly splits this total size into fixed-size logical partitions – and the number of partitions is defined by the number of days to retain Usage data (by default 14 days). In this case, 14 partitions would be created to account for the 14 days of retention. If the retention were halved to 7 days, the LoggingDB would be split into 7 corresponding partitions at twice the size. In other words, the partition size is generally defined as [max size for DB] / [number of retention days]. Going back to the default scenario, the “max size” for the LoggingDB is 6200000000 bytes (~6GB) and the retention period is 14 days. Using our formula, this would be [~6GB] / [14 days], which equates to 444858368 bytes (~425MB) per partition per day. Again, if the retention were halved to 7 days (which halves the number of partitions), the resulting partition size becomes [~6GB] / [7 days], or ~850MB per partition.

    From my experience, when the partition size for any given day is exceeded, the usage logging for the remainder of the day is essentially thrown away, because SharePoint won’t allow any more to be written to that day’s partition. The only clue that this is occurring (beyond truncated usage data) is an error such as the following that gets reported in the ULS:

        04/08/2012 09:30:04.78    OWSTIMER.EXE (0x1E24)    0x2C98    SharePoint Foundation    Health    i0m6    High    Table RequestUsage_Partition12 has 444858368 bytes that has exceeded the max bytes 444858368

    It’s also worth noting that the exact bytes reported (e.g. ‘444858368’ above) may vary slightly among farms. For example, you may instead see 445226812, 439123456, or something else in the ballpark. The exact number itself doesn't matter; this error message indicates that the reported usage has exceeded the partition size for the given day.

    What it means:
    The error itself is easy to miss, which can lead to substantial gaps in the reporting data (your mileage may vary) if not identified. At this point, I can only advise to periodically check the ULS logs for this message. Down the road, I plan to explore if [Developing a Custom Health Rule] could be leveraged to identify the issue (if you've ever built Custom Health Rules, I'd be interested to hear about your experiences).
    Overcoming this issue also poses a challenge, with workaround options including:

    Lower the retention
    Because the partition size is generally defined as [max size] / [number of retention days], the first option is to lower the number of days to retain the data – the lower the retention, the lower the divisor and thus a bigger partition. For example, halving the retention from 14 to 7 days would halve the number of partitions, but double the partition size to ~850MB (e.g. [6200000000 bytes] / [7 days] = ~850MB partitions). Lowering it to 2 days would result in two ~3GB partitions… and so on.

    Recreate the LoggingDB with an increased size
    The property MaxTotalSizeInBytes is exposed by OM code for the SPUsageDefinition object and can be updated with the example PowerShell snippet below. However, updating this value has no immediate impact because this size only applies when creating a LoggingDB. Therefore, you must create a new LoggingDB for the Usage Service Application. The gotcha: this effectively deletes all prior Usage data, because the Usage Service Application can only have a single LoggingDB.

    Here is an example snippet to update the "Page Requests" Usage Definition:

        $def = Get-SPUsageDefinition -Identity "page requests"
        $def.MaxTotalSizeInBytes = 12400000000
        $def.Update()

    Create a new Logging database and attach it to the Usage Service Application using the following command:

        Get-SPUsageApplication | Set-SPUsageApplication -DatabaseServer <dbServer> -DatabaseName <newDBname>

    Updated (5/10/2012): Once the new database has been created, you can confirm the setting has truly taken by running the following SQL query (be sure to replace the database name in the following query with the name provided in the PowerShell above):

        SELECT * FROM [WSS_UsageApplication].[dbo].[Configuration] WITH (NOLOCK)
        WHERE ConfigName LIKE 'Max Total Bytes - RequestUsage'

  • Oracle Social Network Developer Challenge: Fishbowl Solutions

    - by Kellsey Ruppel
    Originally posted by Jake Kuramoto on The Apps Lab blog. Today, I give you the final entry in the Oracle Social Network Developer Challenge, held last week during OpenWorld. This one comes from Friend of the ‘Lab and Fishbowl Solutions (@fishbowle20) hacker John Sim (@jrsim_uix), whom you might remember from his XBox Kinect demo at COLLABORATE 12 (presentation slides and abstract) and other hacks and exploits with WebCenter. We put this challenge together specifically for developers like John, who like to experiment with new tools, push the envelope of what’s possible, and build cool things, and as you can see from his entry, John did just that, mashing together Google Maps and Oracle Social Network into a mobile app built with PhoneGap that uses the device’s camera and GPS to keep teams on the move in touch. He calls it a Mobile GeoTagging Solution, but I think Avengers Assemble! would have been equally descriptive, given that was obviously his inspiration. Here’s his description of the mobile app:

    My proposed solution was to design and simplify GeoLocation mapping, and automate updates for users and teams on the move, who don’t have access to a laptop or want to take their iPads out – but allow them to make quick updates to OSN and upload photos taken from their mobile device – there and then. As part of this, the plan was to include a rules engine that could be configured by the user to allow the device to automatically update and post messages when they arrived at a set location(s). Inspiration for this came from on{x} – automate your life.

    Unfortunately, John didn’t make it to the conference to show off his hard work in person, but luckily, he had a colleague from Fishbowl and a video to showcase his work. Here are some shots of John’s mobile app for your viewing pleasure:

    John’s thinking is sound. Geolocation is usually relegated to consumer use cases, thanks to services like foursquare, but distributed teams working on projects out in the world definitely need a way to stay in contact. Consider a construction job. Different contractors all converge on a single location, and time is money. Rather than calling or texting each other and risking a distracted-driving accident, an app like John’s allows everyone on the job to see exactly where the other contractors are. Using his GPS rules, they could easily be notified about how close each is to the site – definitely useful when you have a flooring contractor sitting idle, waiting for an electrician to finish the wiring. The best part is that the project manager or general contractor could stay updated on all the action (or inaction) using Oracle Social Network, either sitting at a desk using the browser app or desktop client, or on the go, using one of the native mobile apps built for Oracle Social Network. I can see this being used by insurance adjusters too, and really any team that, erm, assembles at a given spot. Of course, it’s also useful for meeting at the pub after the day’s work is done. Beyond people, this solution could also be implemented for physical objects that are in route to a destination. Say you’re a customer waiting on a rail shipment or a package delivery. You could track your valuables’ whereabouts easily as they report their progress via checkins. If they deviated from the GPS rules, you’d be notified. You might even be able to get a picture into Oracle Social Network with some light hacking. Thanks to John and his colleagues at Fishbowl for participating in our challenge. We hope everyone had a good experience.

    Make sure to check out John’s blog post on his work and the experience using Oracle Social Network. Although this is the final official entry, tomorrow I’ll show you the work of someone who finished code but wasn’t able to make the judging event. Stay tuned.

  • Using a parser to locate faulty code

    - by ryan.riverside
    Lately I've been working a lot in PHP and have run into an abnormally large number of parsing errors. I realize these are my own fault and a result of sloppy initial coding on my part, but it's getting to the point that I'm spending more time resolving tags than developing. In the interest of not slamming my productivity, are there any tricks to locating the problem in the code? What I'd really be looking for would be a line to put in the code which would output the entire faulty tag in the parsing error, or something similar. Purely for reference's sake, my current error is:

        Parse error: syntax error, unexpected '}' in /home/content/80/9480880/html/cache/tpl_prosilver_viewtopic_body.html.php on line 50

    (which refers to this):

        </dd><dd><?php if ($_poll_option_val['POLL_OPTION_RESULT'] == 0) { echo ((isset($this->_rootref['L_NO_VOTES'])) ? $this->_rootref['L_NO_VOTES'] : ((isset($user->lang['NO_VOTES'])) ? $user->lang['NO_VOTES'] : '{ NO_VOTES }')); } else { echo $_poll_option_val['POLL_OPTION_PERCENT']; } ?></dd>
        </dl>
        <?php }} if ($this->_rootref['S_DISPLAY_RESULTS']) { ?>
        <dl>
          <dt>&nbsp;</dt>
          <dd class="resultbar"><?php echo ((isset($this->_rootref['L_TOTAL_VOTES'])) ? $this->_rootref['L_TOTAL_VOTES'] : ((isset($user->lang['TOTAL_VOTES'])) ? $user->lang['TOTAL_VOTES'] : '{ TOTAL_VOTES }')); ?> : <?php echo (isset($this->_rootref['TOTAL_VOTES'])) ? $this->_rootref['TOTAL_VOTES'] : ''; ?></dd>
        </dl>
        <?php } if ($this->_rootref['S_CAN_VOTE']) { ?>
        <dl style="border-top: none;">
          <dt>&nbsp;</dt>
          <dd class="resultbar"><input type="submit" name="update" value="<?php echo ((isset($this->_rootref['L_SUBMIT_VOTE'])) ? $this->_rootref['L_SUBMIT_VOTE'] : ((isset($user->lang['SUBMIT_VOTE'])) ? $user->lang['SUBMIT_VOTE'] : '{ SUBMIT_VOTE }')); ?>" class="button1" /></dd>
        </dl>
        <?php } if (! $this->_rootref['S_DISPLAY_RESULTS']) { ?>
        <dl style="border-top: none;">
          <dt>&nbsp;</dt>
          <dd class="resultbar"><a href="<?php echo (isset($this->_rootref['U_VIEW_RESULTS'])) ? $this->_rootref['U_VIEW_RESULTS'] : ''; ?>"><?php echo ((isset($this->_rootref['L_VIEW_RESULTS'])) ? $this->_rootref['L_VIEW_RESULTS'] : ((isset($user->lang['VIEW_RESULTS'])) ? $user->lang['VIEW_RESULTS'] : '{ VIEW_RESULTS }')); ?></a></dd>
        </dl>
        <?php } ?>
        </fieldset></div>
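    One trick that applies here (an aside, not part of the original question): PHP's built-in lint mode reports parse errors without executing the file, so the faulty spot can be narrowed down from the command line before opening the template at all:

        php -l cache/tpl_prosilver_viewtopic_body.html.php

    Run against the file named in the error, it prints the same "unexpected '}'" message with its line number; paired with an editor that highlights matching braces, that is usually enough to spot the stray one.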

  • Big Data – How to become a Data Scientist and Learn Data Science? – Day 19 of 21

    - by Pinal Dave
    In yesterday’s blog post we learned the importance of analytics in the Big Data story. In this article we will understand how to become a Data Scientist for the Big Data story. Data Scientist is a new buzzword; everyone seems to want to become a Data Scientist. Let us go over a few key topics related to Data Scientists in this blog post. First of all we will understand what a Data Scientist is. In the new world of Big Data, I see that pretty much everyone wants to become a Data Scientist, and there are lots of people I have already met who claim that they are Data Scientists. When I ask what their role is, I get a wide variety of answers.

    What is a Data Scientist?
    Data scientists are the experts who understand various aspects of the business and know how to use data strategically to achieve business goals. They should have a solid foundation in various data algorithms, modeling and statistics methodology.

    What do Data Scientists do?
    Data scientists understand the data very well. They go beyond the regular data algorithms and build interesting trends from available data. They innovate and draw entirely new meaning from the existing data. They are artists in the guise of computer analysts. They look at the data traditionally as well as explore various new ways to look at the data. Data Scientists do not wait to build their solutions from existing data. They think creatively; they think before the data has entered the system. Data Scientists are visionary experts who understand the business needs and plan ahead of time; this tremendously helps to build solutions at rapid speed. Besides being data experts, the major quality of Data Scientists is “curiosity”. They always wonder about what more they can get from their existing data and how to get the maximum out of future incoming data. Data Scientists do wonders with the data, which goes beyond the job descriptions of Data Analysts or Business Analysts.

    Skills Required for Data Scientists
    Here are a few of the skills a Data Scientist must have:
    - Expert-level skills with statistical tools like SAS, Excel, R, etc.
    - Understanding of mathematical models
    - Hands-on experience with visualization tools like Tableau, PowerPivot, D3.js, etc.
    - Analytical skills to understand business needs
    - Communication skills

    On the technology front, any Data Scientist should know underlying technologies like Hadoop and Cloudera, as well as their entire ecosystem (programming languages, analysis and visualization tools, etc.). Remember that becoming a successful Data Scientist requires excellent skills; just having a degree in a relevant education field will not suffice.

    Final Note
    Data Scientist is indeed a very exciting job profile. As per research, there are not enough Data Scientists in the world to handle the current data explosion. In the near future data is going to expand exponentially, and the need for Data Scientists will increase along with it. It is indeed the job to focus on if you like data and the science of statistics. Courtesy: emc

    Tomorrow
    In tomorrow’s blog post we will discuss various Big Data learning resources. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

  • Nec USB 3.0 controller drops connection - Ubuntu 12.04.1

    - by Tom
    I have some serious problems with the Technaxx PCI-e 302p card. It has a uPD720200a NEC chip with 4020 firmware. The BIOS recognises it. Sometimes it recognises devices, the system mounts them, and they are functional for a few minutes; other times they can't be mounted and an error occurs. After a fresh install the card worked fine, but after a kernel and firmware update it behaves as mentioned. Outputs:

    uname -a
        Linux asd-GA-MA770-UD3 3.2.0-30-generic #48-Ubuntu SMP Fri Aug 24 16:52:48 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux

    lspci -vvv
        USB controller: NEC Corporation uPD720200 USB 3.0 Host Controller (rev 04) (prog-if 30 [XHCI])
        Control: I/O- Mem+ BusMaster+ SpecCycle- MemWINV- VGASnoop- ParErr- Stepping- SERR- FastB2B- DisINTx+
        Status: Cap+ 66MHz- UDF- FastB2B- ParErr- DEVSEL=fast >TAbort- <TAbort- <MAbort- >SERR- <PERR- INTx-
        Latency: 0, Cache Line Size: 64 bytes
        Interrupt: pin A routed to IRQ 17
        Region 0: Memory at fd8fe000 (64-bit, non-prefetchable) [size=8K]
        Capabilities: <access denied>
        Kernel driver in use: xhci_hcd

    lsusb
        Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 006 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 007 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub
        Bus 008 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
        Bus 009 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
        Bus 001 Device 003: ID 07d1:3c0a D-Link System DWA-140 RangeBooster N Adapter(rev.B2) [Ralink RT3072]
        Bus 005 Device 004: ID 1997:1221
        Bus 005 Device 003: ID 15c2:003c SoundGraph Inc.

    lsmod
        Module                  Size  Used by
        nls_iso8859_1          12713  0
        nls_cp437              16991  0
        vfat                   17585  0
        fat                    61512  1 vfat
        vesafb                 13844  1
        saa7134_alsa           18602  1
        rfcomm                 47604  0
        bnep                   18281  2
        bluetooth             180104  10 rfcomm,bnep
        tda827x                18182  1
        snd_hda_codec_hdmi     32474  1
        ir_lirc_codec          12859  0
        lirc_dev               19204  1 ir_lirc_codec
        tda8290                22616  1
        arc4                   12529  2
        snd_hda_codec_realtek 224173  1
        ir_mce_kbd_decoder     12777  0
        ir_sony_decoder        12510  0
        ir_jvc_decoder         12507  0
        tuner                  27428  1
        ir_rc6_decoder         12507  0
        snd_hda_intel          33773  5
        rt2800usb              22684  0
        rt2800lib              58925  1 rt2800usb
        crc_ccitt              12667  1 rt2800lib
        rt2x00usb              20762  1 rt2800usb
        rt2x00lib              51144  3 rt2800usb,rt2800lib,rt2x00usb
        mac80211              506816  3 rt2800lib,rt2x00usb,rt2x00lib
        ir_rc5_decoder         12507  0
        rc_avermedia_m135a     12526  0
        rc_imon_pad            12505  0
        ir_nec_decoder         12507  0
        cfg80211              205544  2 rt2x00lib,mac80211
        snd_hda_codec         127706  3 snd_hda_codec_hdmi,snd_hda_codec_realtek,snd_hda_intel
        snd_ctxfi             111202  2
        snd_hwdep              13668  1 snd_hda_codec
        imon                   32839  0
        snd_pcm                97188  5 saa7134_alsa,snd_hda_codec_hdmi,snd_hda_intel,snd_hda_codec,snd_ctxfi
        snd_seq_midi           13324  0
        saa7134               181851  1 saa7134_alsa
        videobuf_dma_sg        19354  2 saa7134_alsa,saa7134
        snd_rawmidi            30748  1 snd_seq_midi
        snd_seq_midi_event     14899  1 snd_seq_midi
        joydev                 17693  0
        rc_core                26412  13 ir_lirc_codec,ir_mce_kbd_decoder,ir_sony_decoder,ir_jvc_decoder,ir_rc6_decoder,ir_rc5_decoder,rc_avermedia_m135a,rc_imon_pad,ir_nec_decoder,imon,saa7134
        snd_seq                61896  2 snd_seq_midi,snd_seq_midi_event
        fglrx                3263886  101
        videobuf_core          26390  2 saa7134,videobuf_dma_sg
        v4l2_common            16454  2 tuner,saa7134
        videodev               98259  3 tuner,saa7134,v4l2_common
        sp5100_tco             13791  0
        snd_timer              29990  2 snd_pcm,snd_seq
        snd_seq_device         14540  3 snd_seq_midi,snd_rawmidi,snd_seq
        snd                    78855  28 saa7134_alsa,snd_hda_codec_hdmi,snd_hda_codec_realtek,snd_hda_intel,snd_hda_codec,snd_ctxfi,snd_hwdep,snd_pcm,snd_rawmidi,snd_seq,snd_timer,snd_seq_device
        v4l2_compat_ioctl32    17128  1 videodev
        tveeprom               21249  1 saa7134
        i2c_piix4              13301  0
        soundcore              15091  1 snd
        edac_core              53746  0
        serio_raw              13211  0
        snd_page_alloc         18529  3 snd_hda_intel,snd_ctxfi,snd_pcm
        edac_mce_amd           23709  0
        wmi                    19256  0
        mac_hid                13253  0
        ppdev                  17113  0
        parport_pc             32866  1
        k10temp                13166  0
        lp                     17799  0
        parport                46562  3 ppdev,parport_pc,lp
        usb_storage            49198  0
        uas                    18180  0
        usbhid                 47199  0
        hid                    99559  1 usbhid
        firewire_ohci          41000  0
        firewire_core          63558  1 firewire_ohci
        crc_itu_t              12707  1 firewire_core
        floppy                 70365  0
        pata_atiixp            13204  2
        r8169                  62099  0

    dmesg | tail (after plugging into the USB 3.0 port)
        [  834.871296] sd 9:0:0:0: rejecting I/O to offline device
        [  834.871308] sd 9:0:0:0: rejecting I/O to offline device
        [  834.871319] sd 9:0:0:0: rejecting I/O to offline device
        [  834.871330] sd 9:0:0:0: rejecting I/O to offline device
        [  834.871530] sd 9:0:0:0: [sdd] Unhandled error code
        [  834.871536] sd 9:0:0:0: [sdd] Result: hostbyte=DID_NO_CONNECT driverbyte=DRIVER_OK
        [  834.871545] sd 9:0:0:0: [sdd] CDB: Read(10): 28 00 0e 8e 48 0a 00 00 3e 00
        [  834.871564] end_request: I/O error, dev sdd, sector 244205578
        [  834.875497] sd 8:0:0:1: Device offlined - not ready after error recovery
        [  834.885339] usb 9-2: USB disconnect, device number 2

    Are there any other outputs needed for answering? I'll post them ASAP. I could of course refuse updating the system, but I think that's a halfway solution. Any help appreciated. BTW USB 2.0 and 1.1 ports run well, and the card itself runs under Win7 like a charm. Tom

  • SQL SERVER – A Puzzle Part 4 – Fun with SEQUENCE in SQL Server 2012 – Guess the Next Value

    - by pinaldave
    It seems like every weekend I get a new puzzle in my mind. Before continuing, I suggest you read my previous posts here, where I have shared earlier puzzles: A Puzzle – Fun with SEQUENCE in SQL Server 2012 – Guess the Next Value; A Puzzle Part 2 – Fun with SEQUENCE in SQL Server 2012 – Guess the Next Value; A Puzzle Part 3 – Fun with SEQUENCE in SQL Server 2012 – Guess the Next Value. After reading the above three posts, I am very confident that you will all be ready for the next set of puzzles. First execute the script which I have written here, then guess what the next value will be, as requested in the query.

        USE TempDB
        GO
        -- Create sequence
        CREATE SEQUENCE dbo.SequenceID AS DECIMAL(3,0)
            START WITH 1
            INCREMENT BY -1
            MINVALUE 1
            MAXVALUE 3
            CYCLE
            NO CACHE;
        GO
        SELECT next value FOR dbo.SequenceID;
        -- Guess the number
        SELECT next value FOR dbo.SequenceID;
        -- Clean up
        DROP SEQUENCE dbo.SequenceID;
        GO

    Please note that the starting value is 1, the increment is the negative value -1, the minimum value is 1, and the maximum value is 3. Now let us first think through how this will work. In our example the sequence's starting value is 1 and the decrement is -1; this means the value should decrement from 1 to 0. However, the minimum value is 1. This means the value cannot decrement any further. What will happen here? The natural assumption is that it should throw an error. How many of you assumed the query would throw an ERROR? Well, you are WRONG! Do not blame yourself; it is my fault, as I have told you only half of the story. Now, if you voted for an error, let us continue running the above code in SQL Server Management Studio. The above script gives the following output:

    Isn't it interesting that instead of erroring out, it gives us the result value 3? To understand the answer, carefully observe the original syntax for creating the SEQUENCE – there is a keyword CYCLE. This keyword cycles the values between the minimum and maximum value, and when one end of the range is exhausted it cycles the values from the other end. As we have a negative increment, when the sequence reaches the minimum value (the lower end), it cycles from the maximum value. Here the maximum value is 3, so the next logical value is 3. If your business requirement is such that the sequence should throw an error when it reaches the maximum or minimum value, you should not use the keyword CYCLE, and it will behave as discussed. I hope you are enjoying the puzzles as much as I am. If you have any interesting puzzle to share, please do share it with me and I will share it on the blog with due credit to you. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Puzzle, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology

  • Windows Azure Recipe: Social Web / Big Media

    - by Clint Edmonson
    With the rise of social media there’s been an explosion of special-interest media web sites on the web. From athletics to board games to funny animal behaviors, you can bet there’s a group of people somewhere on the web talking about it. Social media sites allow us to interact, share experiences, and bond with like-minded enthusiasts around the globe. And through the power of software, we can follow trends in these unique domains in real time.

    Drivers
    - Reach
    - Scalability
    - Media hosting
    - Global distribution

    Solution
    Here’s a sketch of how a social media application might be built out on Windows Azure:

    Ingredients
    - Traffic Manager (optional) – can be used to provide hosting and load balancing across different instances and/or data centers. Perfect if the solution needs to be delivered to different cultures or regions around the world.
    - Access Control – this service is essential to managing user identity. It’s backed by a full-blown implementation of Active Directory and allows the definition and management of users, groups, and roles. A pre-built ASP.NET membership provider is included in the training kit to leverage this capability, but it’s also flexible enough to be combined with external identity providers including Windows LiveID, Google, Yahoo!, and Facebook. The provider model has extensibility points to hook into other identity providers as well.
    - Web Role – hosts the core of the web application and presents a central social hub to users.
    - Database – used to store core operational, functional, and workflow data for the solution’s web services.
    - Caching (optional) – as a web site's traffic grows, caching can be leveraged to keep frequently used read-only, user-specific, and application resource data in a high-speed distributed in-memory cache, for faster response times and ultimately higher scalability without spinning up more web and worker roles. It includes a token-based security model that works alongside the Access Control service.
    - Tables (optional) – for semi-structured data streams that don’t need relational integrity, such as conversations, comments, or activity streams, tables provide a faster and more flexible way to store this kind of historical data.
    - Blobs (optional) – users may be creating or uploading large volumes of heterogeneous data such as documents or rich media. Blob storage provides a scalable, resilient way to store terabytes of user data. The storage facilities can also integrate with the Access Control service to ensure users’ data is delivered securely.
    - Content Delivery Network (CDN) (optional) – for sites that serve users around the globe, the CDN is an extension to blob storage that, when enabled, will automatically cache frequently accessed blobs and static site content at edge data centers around the world. The data can be delivered statically or streamed in the case of rich media content.

    Training
    These links point to online Windows Azure training labs and resources where you can learn more about the individual ingredients described above. (Note: The entire Windows Azure Training Kit can also be downloaded for offline use.)

    Windows Azure (16 labs)
    Windows Azure is an internet-scale cloud computing and services platform hosted in Microsoft data centers, which provides an operating system and a set of developer services that can be used individually or together. It gives developers the choice to build web applications; applications running on connected devices, PCs, or servers; or hybrid solutions offering the best of both worlds.
    New or enhanced applications can be built using existing skills with the Visual Studio development environment and the .NET Framework. With its standards-based and interoperable approach, the services platform supports multiple internet protocols, including HTTP, REST, SOAP, and plain XML.

    SQL Azure (7 labs)
    Microsoft SQL Azure delivers on the Microsoft Data Platform vision of extending the SQL Server capabilities to the cloud as web-based services, enabling you to store structured, semi-structured, and unstructured data.

    Windows Azure Services (9 labs)
    As applications collaborate across organizational boundaries, ensuring secure transactions across disparate security domains is crucial but difficult to implement. Windows Azure Services provides hosted authentication and access control using powerful, secure, standards-based infrastructure.

    See my Windows Azure Resource Guide for more guidance on how to get started, including links to web portals, training kits, samples, and blogs related to Windows Azure.

  • 2014 Conferences - JFokus, JavaLand & GeeCon!

    - by Heather VanCura
    There has been a delay in publishing these past event summaries from early 2014 – JFokus in February, JavaLand in March, and GeeCon in May. As we plan for Devoxx UK next week, I found these summaries that had not made it past the 'draft' stage. We had some great successes with the first three events of 2014 – a Java developer conference trifecta! Participation topics included Java, the JCP program overall, and the Adopt-a-JSR programs.

    First up in February was JFokus in Stockholm. The energy and talent in Stockholm are amazing, and the conference organizers do a stellar job running the event and welcoming the speakers. I enjoyed the city walk and speaker dinner, as well as many opportunities to interact with conference speakers and attendees, both during and after the conference hours. Reza Rahman invited me to speak during his Java EE 7 lab session about the Adopt-a-JSR program, and I gave a quickie session on the JCP and Adopt-a-JSR. There was also a late-night Birds of a Feather (BoF) session held jointly with Cecelia Borg, Martijn Verburg and Reza Rahman. This was an interactive conversation with a focus on the Java EE community survey results and encouraging more community participation and collaboration in Java development. The Java 8 keynote by Georges Saab and Mark Reinhold was also very entertaining. I was sorry to miss FOSDEM, which happened the previous weekend this year in Brussels, but I hope to attend in 2015. Favorite take-home gift – Lambdas cap!

    In March, the inaugural version of the JavaLand conference happened inside Phantasialand, an amusement park in Germany. Markus Eisele suggested having an Early Adopters area at the conference, which I was keen to implement. In 2013 at Devoxx Belgium we held some activities in the Hackergarten area around Lambdas and Java EE 7, so this was a great opportunity to expand on a more interactive conference format, and Andreas Badelt from the program committee helped in the planning for this area. Daniel Bryant and Mani Sarkar from the London Java Community led some general Adopt-a-JSR discussions and Adopt OpenJDK activities. JCP Spec Leads Anatole Tresch from Credit Suisse, leading JSR 354, Money & Currency API, and Ed Burns from Oracle, leading JSR 344, JavaServer Faces 2.2, attended to engage with conference attendees on their JSRs. Favorite – Stephen Chin's roller coaster video.

    In May, GeeCon in Krakow was another awesome conference! The conference organizers were warm and welcoming, and I enjoyed spending time getting to know the other speakers at the event. There was a JCP and Adopt-a-JSR participation session, as well as a moderated panel session on Early Adopters. We had an amazing panel – Daniel Bryant, Arun Gupta, Tomasz Borek, and Peter Lawrey. The panel discussed the Adopt-a-JSR and Adopt OpenJDK programs, and how the participants work together to get involved and contribute to both the Java SE and Java EE platforms. It was an interesting discussion and sparked some new ideas on how Java User Groups in Poland and around the world can contribute in a significant and meaningful way to create better and more practical Java standards today and in the future. Favorite take-home gift – GeeCon mug!

    These were some of the highlights of the events – looking forward to Devoxx UK next week. I will publish those details tomorrow!

    Read the article

  • 45 minutes to talk about C# [closed]

    - by Philip
I have the opportunity to give a 45-minute talk on C# in the theory of programming languages class I'm taking. The college teaches Java almost exclusively, so that's what all the students are most familiar with. (There's a little C, assembly, Prolog and LISP as well.) I get to decide what to talk about. It seems to me the best approach is to focus on a few of the big, obvious differences between C# and Java. I don't intend it to be a recommendation to use C# -- there are reasons to use each, mostly because of their ecosystems. So I want to focus on C# as a language. I don't want to go too fast and end up listing a whole bunch of features without showing their usefulness. My current plan is this: Functions as first-class objects. This is, in my opinion, one of the biggest differences between C# and Java. The professor briefly mentioned this notion and showed a LISP example, but many of the students have probably never used it. I can show real-world examples where it's made my code more readable. Lambda expressions as concise syntax for anonymous functions. Obviously with examples to show how this is useful. The real hit-home examples will be at the end when it's combined with the rest. I don't see an advantage to first showing the old delegate syntax and then replacing it with lambdas -- most of us won't have ever seen delegates anyway, so it would just be confusing. The yield keyword and how it's different from returning an array. I have the impression that a lot of C# developers aren't familiar with how to use this. It will likely be very foreign to Java developers. I have some examples from my own work where it was really useful, such as iterating over a tree traversal, or iterating over neighbors in a graph where the neighbors aren't stored in memory. In both cases, doing it in Java would likely mean returning a complete list -- with yield I can stop iterating if I find what I want early on, without using memory for superfluous lists or arrays. Extension methods as a way to write implementation on interfaces. We'll all be familiar with how interfaces don't allow method implementation, and how this leads to code duplication. I'll show a specific example of this and how an extension method can solve the problem. Demonstrate how the above can be combined by implementing some simple LINQ methods and using them. Where, Select, First, maybe more depending on how much time is left. Ideas on which ones might 'hit home' the best? There are other things I could talk about, such as generics, value types, properties and more. I haven't yet thought of good ways to incorporate these. In the case of generics and value types, the advantages might not be obvious or as relevant. Properties are obviously useful, particularly since we're taught strict JavaBeans here, but I don't know if I could integrate it with the "path to LINQ" discussion above without it feeling tacked on. So I'm looking for thoughts on how to talk about C#, and what to talk about. Even minor details. I'm sure there are more experienced C# developers than me here who have good insight about what's really important in the language, and what would miss the point.
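A rough sketch of how that final demo could look (my own illustration, not from the original post; the names and simplified method bodies are mine). It ties the plan's points together: an extension method on IEnumerable<T> that accepts a function as a first-class value, streams results lazily with yield, and is invoked with lambdas at the call site:

    using System;
    using System.Collections.Generic;

    // Extension methods must live in a non-nested, non-generic static class.
    static class MiniLinq
    {
        // A hand-rolled Where: the predicate is a function passed in as a value.
        public static IEnumerable<T> Where<T>(this IEnumerable<T> source, Func<T, bool> predicate)
        {
            foreach (var item in source)
                if (predicate(item))
                    yield return item; // lazy: nothing executes until enumeration starts
        }

        // A hand-rolled Select: projects each element through the given function.
        public static IEnumerable<TResult> Select<T, TResult>(this IEnumerable<T> source, Func<T, TResult> selector)
        {
            foreach (var item in source)
                yield return selector(item);
        }
    }

    class Demo
    {
        static void Main()
        {
            var numbers = new[] { 1, 2, 3, 4, 5, 6 };
            // Lambdas at the call site; no intermediate arrays are built,
            // and iteration can stop early without wasted work.
            foreach (var square in numbers.Where(n => n % 2 == 0).Select(n => n * n))
                Console.WriteLine(square); // prints 4, 16, 36
        }
    }

Because both methods use yield, the pipeline is pull-based: a consumer that stops after the first element never touches the rest of the sequence, which is exactly the tree-traversal and graph-neighbor argument above.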

    Read the article

  • Abstraction, Politics, and Software Architecture

Abstraction can be defined as a general concept and/or idea that lacks concrete details. Throughout history this type of thinking has led to an array of new ideas and innovations, as well as increased confusion and conspiracy. If one were to look back at our history, they would see that abstraction has been used in various forms throughout our past. When I was growing up, I do not know how many times I heard politicians say "Leave no child left behind" or "No child left behind" as a major part of their campaign rhetoric regarding their stance on education. As you can see, their slogan is a perfect example of abstraction because it offers only a very general concept about improving our education system; they do not mention how they would do it. If they did, they would be adding concrete details to their abstraction, turning it into an actual working plan for how we as a society can help children succeed in school and in life, but then they would not be using abstraction. By now I am sure you are thinking: what does abstraction have to do with software architecture? It is a fair question, but abstraction is a wonderful tool used in information technology, especially in the world of software architecture. Abstraction is one method of extracting the concepts of an idea so that it can be understood and discussed by others of varying technical abilities and backgrounds. One way in which I tend to extract my architectural design thoughts is through the use of basic diagrams to convey an idea for a system or a new feature for an existing application. This allows me to generically model an architectural design through the use of views and the Unified Modeling Language (UML). UML is a standard method for creating 4+1 Architectural View Models. The 4+1 Architectural View Model consists of 4 views, typically created with UML, as well as a general description of the concept being expressed by a model. The 4+1 Architectural View Model: Logical View: Models a system's end-user functionality. Development View: Models a system as a collection of components and connectors to illustrate how it is intended to be developed. Process View: Models the interaction between system components and connectors to indicate the activities of a system. Physical View: Models the placement of the collection of components and connectors of a system within a physical environment. Recently I had to use the concept of abstraction to express an idea for implementing a new security framework on an existing website. My concept would add session-based management in order to properly secure and allow page access based on valid user credentials and last user activity. I created a basic Process View using UML diagrams to communicate the basic process flow of my changes in the application so that all of the project's stakeholders would be able to understand my idea. Additionally, I created a Logical View on a whiteboard while conveying the process workflow with a few stakeholders, to show how end-users would be affected by the new framework and to gain additional input about the design. After my Logical and Process Views were accepted, I started creating a more detailed Development View in order to map how the system would be built based on components and connections derived from the previously defined interactions. I really did not need to create a Physical View for this idea because we were updating an existing system that was already deployed based on an existing Physical View.
What do you think about the use of abstraction in the development of software architecture? Please let me know.

    Read the article

  • ArchBeat Link-o-Rama for 2012-08-30

    - by Bob Rhubart
Next Generation Mobile Clients for Oracle Applications & the role of Oracle Fusion Middleware | Manish Palaparthy Manish Palaparthy examines some of Oracle's mobile applications, and takes a look at the underlying technology. Master Data Management: A Foundation for Big Data Analysis | Manouj Tahiliani "Businesses that have embraced MDM to get a single, enriched and unified view of Master data by resolving semantic discrepancies and augmenting the explicit master data information from within the enterprise with implicit data from outside the enterprise like social profiles will have a leg up in embracing Big Data solutions. This is especially true for large and medium-sized businesses in industries like Retail, Communications, Financial Services, etc. that would find it very challenging to get comprehensive analytical coverage and derive long-term success without resolving the limitations of the heterogeneous topology that leads to disparate, fragmented and incomplete master data." — Manouj Tahiliani Architect Day: Boston - Agenda Update Here's the latest updated information on the session schedule and content for Oracle Technology Network Architect Day in Boston, MA on September 12, 2012. Registration is open, but seating is limited. OTN Architect Day: Boston is being held on Wednesday September 12, 2012, 8:00 a.m. – 5:00 p.m., at the Boston Marriott Burlington, One Burlington Mall Road, Burlington, MA 01803. Integrating Coherence & Java EE 6 Applications using ActiveCache | Ricardo Ferreira The seamless integration between Oracle Coherence and Oracle WebLogic Server "provides a comprehensive environment to develop applications without the complexity of extra Java code to manage cache as a dependency," explains Ricardo Ferreira, "since Oracle provides a DI (Dependency Injection) mechanism for Coherence, the same DI mechanism available in standard Java EE applications. This feature is called ActiveCache." Ricardo shows you how to configure ActiveCache in WebLogic and your Java EE application. Cloud Infrastructure has a new standard from the DMTF "Unlike a de facto standard where typically one vendor has change control over the interface, and everyone else has to reverse engineer the inner workings of it, [Cloud Infrastructure Management Interface (CIMI)] is a de jure standard that is under change control of a standards body. One reason the standard took two years to create is that we factored in use cases, requirements and contributed APIs from multiple vendors. These vendors have products shipping today and as a result CIMI has a strong foundation in real world experience." Oracle GoldenGate 11g Release Launch Webcast - September 12 The new release of Oracle GoldenGate 11g is now available for major databases and platforms. Register for this webcast and live Q&A with product experts to learn about the solution's new features. September 12, 2012, 8:00 AM and 10:00 AM PT. Speakers: Doug Reid (Director, Product Management, Oracle GoldenGate), Irem Radzik (Director, Product Marketing, Oracle Data Integration Products) Thought for the Day "[When] asking skilled architects…what they do when confronted with highly complex problems… [they] would most likely answer, 'Just use Common Sense.' [A] better expression than 'common sense' is 'contextual sense'—a knowledge of what is reasonable within a given context.
Practicing architects, through education, experience and examples, accumulate a considerable body of contextual sense by the time they're entrusted with solving a system-level problem…" — Eberhardt Rechtin (January 16, 1926 – April 14, 2006) Source: SoftwareQuotes.com

    Read the article

  • ArchBeat Link-o-Rama Top 10 for September 2-8, 2012

    - by Bob Rhubart
The Top 10 items shared on the OTN Facebook Page for the week of September 2-8, 2012. Adding a runtime LOV for a taskflow parameter in WebCenter | Yannick Ongena Oracle ACE Yannick Ongena illustrates how to customize the parameters tab for a taskflow in WebCenter. Tips on Migrating from AquaLogic .NET Accelerator to WebCenter WSRP Producer for .NET | Scott Nelson "It has been a very winding path and this blog entry is intended to share both the lessons learned and relevant approaches that led to those learnings," says Scott Nelson. "Like most journeys of discovery, it was not a direct path, and there are notes to let you know when it is practical to skip a section if you are in a hurry to get from here to there." Free Event: Oracle Technology Network Architect Day – Boston, MA – 9/12/2012 Sure, you could ask a voodoo priestess for help in improving your solution architecture skills. But there's the whole snake thing, and the zombie thing, and other complications. So why not keep it simple and register for Oracle Technology Network Architect Day in Boston, MA. There's no magic, just a full day of technical sessions covering Cloud, SOA, Engineered Systems, and more. Registration is free, but seating is limited. You'll curse yourself if you miss this one. Starting and Stopping Fusion Applications the Right Way | Ronaldo Viscuso While the fastartstop tool that ships with Oracle Fusion Applications does most of the work to start/stop/bounce the Fusion Apps environment, it does not do it all. Oracle Fusion Applications A-Team blogger Ronaldo Viscuso's post "aims to explain all tasks involved in starting and stopping a Fusion Apps environment completely." Article Index: Architect Community Column in Oracle Magazine Did you know that Oracle Magazine features a regular column devoted specifically to the architect community? Every issue includes insight and expertise from architects who regularly work with Oracle Technologies. Click here to see a complete list of these articles. Using FMAP and AnalyticsRes in an Oracle BI High Availability Implementation | Art of Business Intelligence "The fmap syntax has been used for a long time in Oracle BI / Siebel Analytics when referencing images inherent in the application as well as custom images," says Oracle ACE Christian Screen. "This syntax is used on Analysis requests and dashboards." Dodeca Customer Feedback - The Rosewood Company | Tim Tow Oracle ACE Director Tim Tow shares anecdotal comments from one of his clients, a company that is deploying Dodeca to replace an aging VBA/Essbase application. Configuring UCM cache to check for external Content Server changes | Martin Deh Oracle WebCenter and ADF A-Team blogger Martin Deh shares the background information and the solution to a recently encountered customer scenario. Attend OTN Architect Day in Los Angeles – by Architects, for Architects – October 25 The OTN Architect Day roadshow stops in Boston next week, then it's on to Los Angeles for another all architecture, all day event on Thursday October 25, 2012 at the Sofitel Los Angeles, 555 Beverly Boulevard, Los Angeles, CA 90048. Like all Architect Day events, this one is absolutely free, so register now. The Role of Oracle VM Server for SPARC In a Virtualization Strategy New OTN article from Matthias Pfutzner.
Thought for the Day "Practicing architects, through education, experience and examples, accumulate a considerable body of contextual sense by the time they're entrusted with solving a system-level problem…" — Eberhardt Rechtin (January 16, 1926 – April 14, 2006) Source: SoftwareQuotes.com

    Read the article

  • Functional/nonfunctional requirements VS design ideas

    - by Nicholas Chow
Problem domain: Functional requirements define what a system does. Non-functional requirements define quality attributes of what the system does as a whole (performance, security, reliability, volume, usability, etc.). Constraints limit the design space; they restrict designers to certain types of solutions. Solution domain: Design ideas define how the system does it. For example, a stakeholder need might be: we want to increase our sales, therefore we must improve the usability of our webshop so more customers will purchase; a requirement can be written for this (problem domain). Design takes this further into the solution domain by saying "therefore we want to offer credit card payments in addition to the current prepayment option". My problem is that the transition from requirement to design seems really vague, so when writing requirements I am often unsure whether I have incorporated design ideas in my requirements, which would make my requirements wrong. Another problem is that I often write functional requirements as what a system does, and then I also specify in what timeframe it must be done. But is this correct? Is it then still a functional requirement, or a non-functional one? Is it better to separate it into two distinct requirements? Here are a few requirements I wrote:
FR1 Registration of Organizer. FR1 describes the registration of an Organizer on CrowdFundum.
FR1.1 The system shall display a registration form on the website.
FR1.2 The system shall require a Name, Username, Document number of passport/ID card, Address, Zip code, City, Email address, Telephone number, Bank account, and Captcha code on the registration form when a user registers.
FR1.4 The system shall display an error message containing "Registration could not be completed" to the subscriber within 1 second after the system check of the registration form was unsuccessful.
FR1.5 The system shall send a verification email containing a verification link to the subscriber within 30 seconds after the system check of the registration form was successful.
FR1.6 The system shall add the newly registered Organizer to the user base within 5 seconds after the verification link was accessed.
FR2 Organizer submits a Project. FR2 describes the submission of a Project by an Organizer on CrowdFundum.
- FR2 The system shall display a submit Project form to the Organizer accounts on the website.
- FR2.3 The system shall check for completeness the Name of the Project, 1-3 Photos, Keywords of the Project, Punch line, Minimum and maximum amount of people, Funding threshold, One or more reward tiers, Schedule of when what will be organized, Budget plan, 300-800 words of additional information about the Project, and Contact details within 1 second after an Organizer submits the submit Project form.
- FR2.8 The system shall add the Project link to the new Projects category on the homepage within 30 seconds after the system made a Project webpage.
- FR2.9 The system shall include in the Project link for the homepage: Name of the Project, 1 Photo, and Punch line within 30 seconds after the system made a Project webpage.
Questions:
FR1.1: Have I incorporated a design idea here? Would "the system shall have a registration form" be a better functional requirement?
FR1.2, FR2.3: Are these singular? Would the conditions be better written each as its own separate requirement?
FR1.4: Is this a design idea? Is this a correct functional requirement, or have I incorporated something non-functional (performance) in it? Would it be better if I wrote it like this: FR1 The system shall display an error message when the check is unsuccessful. NFR: The system will respond to unsuccessful registration form checks within 1 second. Same question with FR2.8 and FR2.9.
FR2.3: The system shall check for "completeness": is completeness used ambiguously here? Should I rephrase it?
FR1.2: I added that the system shall require a "Captcha code"; is this a functional requirement, or does it belong to the "security aspect" of a non-functional requirement?
I am eagerly waiting for your response. Thanks!

    Read the article

  • From DBA to Data Analyst

    - by Denise McInerney
    Cross posted from the PASS Blog There is a lot changing in the data professional’s world these days. More data is being produced and stored. More enterprises are trying to use that data to improve their products and services and understand their customers better. More data platforms and tools seem to be crowding the market. For a traditional DBA this can be a confusing and perhaps unsettling time. It’s also a time that offers great opportunity for career growth. I speak from personal experience. We sometimes refer to the “accidental DBA”, the person who finds herself suddenly responsible for managing the database because she has some other technical skills. While it was not accidental, six months ago I was unexpectedly offered a chance to transition out of my DBA role and become a data analyst. I have since come to view this offer as a gift, though at the time I wasn’t quite sure what to do with it. Throughout my DBA career I’ve gotten support from my PASS friends and colleagues and they were the first ones I turned to for counsel about this new situation. Everyone was encouraging and I received two pieces of valuable advice: first, leverage what I already know about data and second, work to understand the business’ needs. Bringing the power of data to bear to solve business problems is really the heart of the job. The challenge is figuring out how to do that. PASS had been the source of much of my technical training as a DBA, so I naturally started there to begin my Business Intelligence education. Once again the Virtual Chapter webinars, local chapter meetings and SQL Saturdays have been invaluable. I work in a large company where we are fortunate to have some very talented data scientists and analysts. These colleagues have been generous with their time and advice. I also took a statistics class through Coursera where I got a refresher in statistics and an introduction to the R programming language. And that’s not the end of the free resources available to someone wanting to acquire new skills. There are many knowledgeable Business Intelligence and Analytics professionals who teach through their blogs. Every day I can learn something new from one of these experts. Sometimes we plan our next career move and sometimes it just happens. Either way a database professional who follows industry developments and acquires new skills will be better prepared when change comes. Take the opportunity to learn something about the changing data landscape and attend a Business Intelligence, Business Analytics or Big Data Virtual Chapter meeting. And if you are moving into this new world of data consider attending the PASS Business Analytics Conference in April where you can meet and learn from those who are already on that road. It’s been said that “the only thing constant is change.” That’s never been more true for the data professional than it is today. But if you are someone who loves data and grasps its potential you are in the right place at the right time.

    Read the article

  • Hey Retailers, Are You Ready For The Holiday Season?

    - by Jeri Kelley
With online holiday spending reaching $35.3 billion in 2011 and American shoppers spending just under $750 on average on their holiday purchases this year, how ready is your business for the 2012 holiday season? Today's shoppers do not take their purchases lightly. They are more connected, interact with more resources to make decisions, diligently compare products and services, seek out the best deals, and ask for input from friends and family. This holiday season, as consumers browse for apparel, tablets, toys, and much more, they will be bombarded with retailer communication - from emails and commercials to countless search engine results and social recommendations. With a flurry of activity coming at consumers from every channel and competitor, your success this year will rely on communicating a consistent, personalized message no matter where your customers are shopping. Here are a few ideas to help with your commerce strategy this holiday season:
CONSISTENCY COUNTS FOR MULTICHANNEL SHOPPERS. According to a November 2011 study commissioned by Oracle, "Channel Commerce 2011: The Consumer View," 54% of consumers in the U.S. and Canada regularly employ two or more channels before they make a purchase. While each channel has its own unique benefit, user profile, and purpose, it's critical that your shoppers have a consistent core experience wherever they're looking for information or making a purchase. Be sure consumers can consistently search and browse the same product information and receive the same promotions online, on their mobile devices, and in-store.
USE YOUR CUSTOMER'S CONTEXT TO SURFACE RELEVANT CONTENT. Your Web site is likely the hub of your holiday activity. According to a Monetate infographic, 39% of shoppers will visit your Web site directly to find out about the best holiday deals. Use everything you know about your customers, from past purchase data to browsing history, to provide a relevant experience at every click, and assemble content in a context that entices shoppers to buy online, or influences an offline purchase.
TAKE ADVANTAGE OF MOBILE BEHAVIOR. Having a mobile program is no longer a choice. Armed with smartphones and tablets, consumers now have access to more and more product information and can compare products and prices from anywhere. In fact, approximately 52% of smartphone users will use their device to research products, redeem coupons and use apps to assist in their holiday gift purchase. At a minimum, be sure your mobile environment has store information, consistent pricing and promotions, and simple checkout capabilities.
ARM IN-STORE ASSOCIATES WITH TABLETS. According to RISNews.com, 31% of retailers plan to begin testing tablets in stores in 2012, 22% have already begun such testing and 6% had fully deployed tablets within stores. Take advantage of this compelling sales tool to get shoppers interacting with videos, user reviews, how-to guides, side-by-side product comparisons, and specs. Automatically trigger upsell and cross-sell suggestions for store associates to recommend for each product or category, build in alerts for promotions, and allow associates to place orders and check inventory from their tablet.
WISDOM OF THE CROWDS IS GOOD, BUT WISDOM FROM FRIENDS IS BETTER. Shoppers who grapple with options are looking for recommendations; they'd rather get advice from friends, and they're more likely to spend more while doing so. In fact, according to an infographic by Mr. Youth, 66% of social media users made a purchase on Black Friday or Cyber Monday as a direct result of social media interactions with brands or family. This holiday season, be sure you are leveraging your social channels, from Facebook to Pinterest, to drive consistent promotions and help your brand become part of the conversation. So, are you ready for the holidays this year?

    Read the article

  • Oracle MDM Maturity Model

    - by David Butler
A few weeks ago, I discussed the results of a survey conducted by Oracle's Insight team. The survey was based on the data management maturity model that the Oracle Insight team has developed over the years as they analyzed customer IT organizations to help them get more out of everything they already have. I thought you might like to learn more about the maturity model itself. It can help you figure out where you stand when it comes to getting your organization's data management act together. The model covers maturity levels around five key areas: profiling data sources; defining a data strategy; defining a data consolidation plan; data maintenance; and data utilization. Profile data sources: Profiling data sources involves taking an inventory of all data sources from across your IT landscape, then evaluating the quality of the data in each source system. This enables the scoping of what data to collect into an MDM hub and what rules are needed to ensure data harmonization across systems. Define data strategy: A data strategy requires an understanding of the data usage. Given data usage, various data governance requirements need to be developed. This includes data controls and security rules as well as data structure and usage policies. Define data consolidation strategy: Consolidation requires defining your operational data model and how integration is to be accomplished. Cross-referencing common data attributes from multiple systems is needed. Synchronization policies also need to be developed. Data maintenance: The desired standardization needs to be defined, including what constitutes a 'match' once the data has been standardized. Cleansing rules are a part of this methodology. Data quality monitoring requirements also need to be defined. Utilize the data: What data gets published, and who consumes the data, must be determined. How to get the right data to the right place in the right format, given its intended use, must be understood. Validating the data and ensuring security rules are in place and enforced are crucial aspects of full no-risk data utilization. For each of the above data management areas, a maturity level needs to be assessed. Where your organization wants to be should also be identified using the same maturity levels. This results in a sound gap analysis your organization can use to create action plans to achieve the ultimate goals. Marginal is the lowest level. It is characterized by manually maintained trusted sources; lacking or inconsistent, siloed structures with limited integration; and gaps in automation. Stable is the next leg up the MDM maturity staircase. It is characterized by tactical MDM implementations that are limited in scope and target a specific division. It includes limited data stewardship capabilities as well. Best Practice is a serious MDM maturity level characterized by process automation improvements. The scope is enterprise wide. It is a business solution that provides a single version of the truth, with closed-loop data quality capabilities. It is typically driven by an enterprise architecture group with both business and IT representation. Transformational is the highest MDM maturity level. At this level, MDM is quantitatively managed. It is integrated with Business Intelligence, SOA, and BPM. MDM is leveraged in business process orchestration.
Take an inventory using this MDM Maturity Model and see where you are in your journey to full MDM maturity with all the business benefits that accrue to organizations who have mastered their data for the benefit of all operational applications, business processes, and analytical systems. To learn more, Trevor Naidoo and I have written the Oracle MDM Maturity Model whitepaper. It’s free, so go ahead and download it and use it as you see fit.

    Read the article

  • Do you want to be an ALM Consultant?

    - by Martin Hinshelwood
Northwest Cadence is looking for our next great consultant! At Northwest Cadence, we have created a work environment that emphasizes excellence, integrity, and out-of-the-box thinking. Our customers have high expectations (rightfully so) and we wouldn't have it any other way! Northwest Cadence has some of the most exciting customers I have ever worked with, and even though I have only been here just over a month I have already: Provided training/consulting for 3 government departments; Created and taught courseware for delivering Scrum to teams within a high profile multinational company; Started presenting Microsoft's ALM Engagement Program. So if you are interested in helping companies build better software more efficiently, then enquire at [email protected]. Application Lifecycle Management (ALM) Consultant: An ALM Consultant with a minimum of 8 years of relevant experience with Application Lifecycle Management, Visual Studio (including Visual Studio Team System) and software design is needed. Must provide thought leadership on best practices for enterprise architecture, understand the Microsoft technology solution stack, and have a thorough understanding of enterprise application integration. The ALM Practice Lead will play a central role in designing and implementing the overall ALM Practice strategy, including creating, updating, and delivering ALM courseware and consultancy engagements. This person will also provide project support, deliverables, and quality solutions on Visual Studio Team System that exceed client expectations. Engagements will vary and will involve providing expert training, consulting, mentoring, formulating technical strategies and policies, and acting as a "trusted advisor" to customers and internal teams. A sound sense of business and technical strategy is required. Strong interpersonal skills as well as solid strategic thinking are key. The ideal candidate will be capable of envisioning the solution based on the early client requirements, communicating the vision to both technical and business stakeholders, leading teams through implementation, as well as training, mentoring, and hands-on software development. The ideal candidate will demonstrate successful use of both agile and formal software development methods, enterprise application patterns, and effective leadership on prior projects. Job Requirements. Minimum Education: Bachelor's Degree (computer science, engineering, or math preferred). Locale / Travel: The Practice Lead position requires an estimated 50% travel, most of which will be in the Continental US (a valid national passport must be maintained). This is a full time position and will be based in the Kirkland office. Preferred Education: Master's Degree in Information Technology or Software Engineering; Premium Microsoft Certifications on .NET (MCSD) or MCPD or relevant experience; Microsoft Certified Trainer (MCT) or relevant experience. Minimum Experience and Skills: 7+ years experience with business information systems integration or custom business application design and development in a professional technology consulting, corporate MIS or software development environment. Essential Duties & Responsibilities: Provide training, consulting, and mentoring to organizations on topics that include Visual Studio Team System and ALM. Create content, including labs and demonstrations, to be delivered as training classes by Northwest Cadence employees. Lead development teams through the complete ALM and/or Visual Studio Team System solution.
Be able to communicate in detail how a solution will integrate into the larger technical problem space for large, complex enterprises. Define technical solution requirements. Provide guidance to the customer and project team with respect to technical feasibility, complexity, and level of effort required to deliver a custom solution. Ensure that the solution is designed, developed and deployed in accordance with the agreed upon development work plan. Create and deliver weekly status reports of training and/or consulting progress. Engagement Responsibilities: · Have a strong desire to provide thought leadership related to technology and to help grow the business. · Work effectively and professionally with employees at all levels of a customer's organization. · Have strong verbal and written communication skills. · Have effective presentation, organizational and planning skills. · Have effective interpersonal skills and the ability to work in a team environment. Enquire at [email protected]

    Read the article

  • Ajax Autocomplete Extender

    - by Jason Ulloa
The goal of this post is to put together an example on a topic that comes up very frequently in the MSDN forums: how to build an autocomplete against a database. What do we need? Before we can build an autocomplete, we must keep in mind the main elements required to make it work, described as follows: 1. TextBox: our great friend the TextBox, which is where the user will type the text to search for. 2. A web service: it will contain the method that connects to the database and returns a list with the information found. 3. Ajax AutoComplete Extender: this is, so to speak, the most important element. It serves as the link between the web service that exposes the method and the textbox, retrieving the data and displaying it as a drop-down list. The implementation: Although it may look complicated, creating an autocomplete extender is quite simple. We start by creating a new ASP.NET site; to this site we add a textbox and two very important Ajax controls: the ToolkitScriptManager, which controls the rendering of the Ajax scripts, and the AutoCompleteExtender which, as mentioned earlier, serves as the link. Before showing what the code for the above will look like, I will explain some properties of the AutoCompleteExtender so it is easier to understand: 1. ServicePath: contains the relative path to the web service we will use. 2. MinimumPrefixLength: the number of characters that must be typed before the search starts. 3. ServiceMethod: the name of the web service method in charge of returning the data. 4. EnableCaching: keeps the queried data in cache, for greater speed. 5. TargetControlID: one of the most important properties; here you set the ID of the textbox the autocomplete will be attached to. 6. CompletionInterval: the time that must elapse before the data is processed. With the basic properties explained, let's see the implementation of the first part of our autocomplete: <form id="form1" runat="server"> <div> <asp:ToolkitScriptManager ID="manager" runat="server" /> <asp:TextBox ID="TextBox1" runat="server"></asp:TextBox> <asp:AutoCompleteExtender ID="AutoCompleteExtender1" runat="server" ServicePath="WebService.asmx" MinimumPrefixLength="1" ServiceMethod="PersonasInfo" EnableCaching="true" TargetControlID="TextBox1" UseContextKey="True" CompletionSetCount="10" CompletionInterval="0"> </asp:AutoCompleteExtender> </div> </form> Now that our HTML markup is complete, it is time to work directly on our web service. It must contain a method that returns a list or array of data which will, of course, be retrieved from the database.
Before implementing this method, we must make sure that our web service class has the attributes enabled that allow it to be called from script:

    [System.Web.Script.Services.ScriptService()]
    [WebService(Namespace = "http://tempuri.org/")]
    [WebServiceBinding(ConformsTo = WsiProfiles.BasicProfile1_1)]
    public class WebService : System.Web.Services.WebService {}

Now, our main method:

    [WebMethod()]
    [System.Web.Script.Services.ScriptMethod()]
    public string[] PersonasInfo(string prefixText, int count)
    {
        string connstring = ConfigurationManager.ConnectionStrings["LocalSqlServer"].ConnectionString;

        using (SqlConnection conn = new SqlConnection(connstring))
        {
            // Parameterized LIKE query: @param carries the text typed so far
            SqlCommand comando = new SqlCommand("select nombre from personas where nombre LIKE '%' + @param + '%' ", conn);
            comando.Parameters.AddWithValue("@param", prefixText);
            SqlDataReader dr = default(SqlDataReader);
            comando.Connection.Open();
            dr = comando.ExecuteReader();
            List<string> items = new List<string>();

            while (dr.Read())
            {
                items.Add(dr["nombre"].ToString());
            }
            comando.Connection.Close();
            return items.ToArray();
        }
    }

I will not explain the method above in depth, since it is quite simple: a query against the database using a data reader, returning the data in a list converted to an array. The most important part is the first two lines, [WebMethod()] and [ScriptMethod()], which enable our method to be accessed and used. Finally, the sample code in C# (VB Autocomplete):
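One detail worth pointing out about the method above: it receives a count parameter (the number of suggestions requested, which the extender derives from CompletionSetCount), but the query never uses it, so every match in the table is returned. A minimal sketch of how the query could honor that limit, keeping the sample's table and column names (treat this as an illustrative variation, not part of the original article):

    [WebMethod()]
    [System.Web.Script.Services.ScriptMethod()]
    public string[] PersonasInfo(string prefixText, int count)
    {
        string connstring = ConfigurationManager.ConnectionStrings["LocalSqlServer"].ConnectionString;

        using (SqlConnection conn = new SqlConnection(connstring))
        {
            // TOP (@count) caps the result set at the number of suggestions the extender asked for
            SqlCommand comando = new SqlCommand(
                "SELECT TOP (@count) nombre FROM personas WHERE nombre LIKE '%' + @param + '%'", conn);
            comando.Parameters.AddWithValue("@param", prefixText);
            comando.Parameters.AddWithValue("@count", count);
            conn.Open();

            List<string> items = new List<string>();
            using (SqlDataReader dr = comando.ExecuteReader())
            {
                while (dr.Read())
                {
                    items.Add(dr["nombre"].ToString());
                }
            }
            return items.ToArray();
        }
    }

An ORDER BY clause would normally accompany TOP so that the "first" matches are deterministic; which ordering makes sense depends on the data.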

    Read the article

< Previous Page | 332 333 334 335 336 337 338 339 340 341 342 343  | Next Page >