Search Results

Search found 806 results on 33 pages for 'audience'.


  • Have You Visited the New Procurement Enhancement Request Community?

    - by LuciaC
    Have you visited the new Procurement Enhancement Request Community yet?  If not, we strongly encourage you to visit this site to vote on current Enhancement Requests (ERs) available through the ‘Quick Preview of Voting List’.  You can also vote on any ER currently displayed.  Have an ER that is not listed?  Simply add it by creating a thread stating the ER and any detailed information you would like to include.  If the ER already exists in the database, we will add the ER # to the thread so that development can provide updates around the requested ERs. This community is your one-stop source for all Enhancement information.  It is being monitored regularly by development and soon we will be posting some updates around some of the top voted Enhancement Requests.  Know that your vote counts!  By voting, you will bring forward those ERs that impact the Procurement Suite's value and usability.  Is your request industry specific?  Let us know by posting this information in the body of the thread.  We have a team monitoring these ERs and will be happy to highlight industry specific ERs to ensure they also get equal visibility! Coming Soon:  A list of the Top implemented ERs!  Development has been working hard to make improvements to the Procurement Suite of Products and they want you to know about them!  Until then, check out the Best Practices Section for some key ERs and how they can help your company secure the most value from your implementation!! What you need to know: The Procurement Enhancement Requests Community is your 1-stop shop for the latest information on Enhancements! The Community allows you to vote on ERs bringing visibility to the collective audience interest in value and usability recommendations. Your place to submit any new enhancement requests. Get the latest on top Procurement Enhancement Requests (ERs) - know when an improvement is PLANNED, COMING SOON, and DELIVERED. This Community is owned and managed by the Oracle Procurement Development team! Let your voice be heard by telling us what you want to see implemented in the Procurement Suite.

  • Developing web sites that imitate desktop apps. How to fight that paradigm? [closed]

    - by user1598390
    Suppose there's a company where web sites/apps are designed to resemble desktop apps. They struggle to add:
    Splash screens
    Drop-down menus
    Tab-pages
    Pages that don't grow downward with content; content is inside a scrollable area, so the page is of a fixed size, as if resembling the one-screen limitation of desktop apps
    Modal windows, pop-ups, etc.
    Tree views
    Absolutely no access to content unless you log in first, even with non-sensitive content. After the splash screen disappears, you are presented with a login screen
    No links - just simulated buttons
    Fixed page size. Cannot open a link in another tab
    A Print button that prints directly (not showing a printable page, so the user can't print via the browser's print command)
    Progress bars for loading content even when the browser indicates it with its own animation
    Fonts and colors that emulate a desktop app made with Visual Basic, PowerBuilder, etc. Every app seems almost as if it were made in Visual Basic.
    They reject these elements:
    Breadcrumbs
    Good old underlined links
    Generated/dynamic navigation, usage-based suggestions
    Ability to open links in multiple tabs
    Pagination
    Printable pages
    Ability to produce a URL you can save or share that links to an item, like when you send someone the link to a specific StackExchange question. The only URL is the main one.
    Back button
    To achieve this, tons of JavaScript code is needed. Lots and lots of JavaScript and Ajax code for things related not to the business but to the necessity of hiding/showing that button, refreshing this listbox, greying out that label, etc. The complexity generated by forcing one paradigm into another means most lines of code are dedicated to maintaining the illusion of a desktop app. What is the best way to change this mindset, and make them embrace the web and start producing modern web apps instead of desktop imitations? EDIT: These sites are intranet sites. Users hate these apps. They constantly whine about them, but they have to use them to do their daily work. These sites are in-house solutions; the end users have no choice but to use them. They are a "captive audience". Also, substitution will not happen because of high costs. But at least if that mindset is changed, new developments would be more web-like.

  • How to replicate Lynda.com transitions in my own presentations (Keynote preferred, PowerPoint possible)

    - by jdmuys
    I recently noticed that lynda.com uses in their training videos a very interesting transition that I would like to replicate in my own presentations. They are somewhat similar to the computer screen animations in the "Minority Report" movie: they zoom out to a huge sheet, pan, and zoom back in to a new location. It's very good for navigating deep hierarchies. For example, when you are at level 1, you can see level 3, but it's tiny and unreadable. You then transition by zooming in to a deeper level that suddenly becomes legible. I can also imagine using it with a large "mind map" as a common Ariadne thread, diving into details (and back out) while still taking advantage of the visual memory of the audience. It's a bit difficult to describe; I hope I managed to convey the idea. You can see them in action for example there (iPhone programming training): http://www.lynda.com/home/DisplayCourse.aspx?lpk2=48369 Then click on "What you should know" in the introduction chapter. See for example in that movie between 22s and 24s, then between 27s and 28s. How do they do that? How could I replicate that effect? TIA

  • Keynote presentation in conjunction with other app?

    - by Sören Kuklau
    The short version: I'd like to tell Apple Keynote to switch to a specific app (never leaving full-screen mode) before a certain slide appears, then display that slide as soon as I switch back. Some more details: I'm going to show off five major improvements in an upcoming release of our app. I want one slide highlighting the feature, then one or two showing some details, perhaps with screenshots. After that, so people get a better impression of what I'm talking about, I'll show it off live — to do this, I have to switch to a VM or remote session (since this is a Windows app). Then, I'd like to switch back and go to the next feature. I.e., it would be similar to Apple's "Demo" slides in a cursive font, except not with a different screen, or different computer. It's this switching back and forth that I want to make smooth, just to wow the audience a little. Can I perhaps do a "special" slide in Keynote that tells it to run an AppleScript? Better yet, to switch to an app, and wait until I switch back, and then automatically advance to the next slide?
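
    One possible approach, sketched below: as far as I know Keynote itself won't fire a script from a slide, but you can get much the same effect by binding two tiny AppleScripts to hotkeys (via a launcher such as FastScripts) and firing them at the switch points. The VM application name is an assumption; substitute whatever you actually present from.

        -- Script 1: jump from the slideshow to the demo app
        tell application "VMware Fusion" to activate -- assumed app name

        -- Script 2: jump back to Keynote and advance to the next slide
        tell application "Keynote" to activate
        delay 0.5
        tell application "System Events" to keystroke " " -- space advances the slideshow

    This sidesteps the "special slide" idea rather than implementing it, but the on-stage effect is the same: one keystroke out, one keystroke back in and straight onto the next feature.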

  • nginx rewrite regex for API versioning

    - by MSpreij
    What I want is for the first to be turned into the second:
    /widget              => /widget/index.php
    /widget/             => /widget/index.php
    /widget?act=list     => /widget/index.php?act=list
    /widget/?act=list    => /widget/index.php?act=list
    /widget/list         => /widget/index.php?act=list
    /widget/v2?act=list  => /widget/v2.php?act=list
    /widget/v2/?act=list => /widget/v2.php?act=list
    /widget/v2/list      => /widget/v2.php?act=list
    v2 could also be v45; basically "v\d+". act, "list" in this case, can have many values and more will be added. Any additional query parameters would just be passed on with $args, I guess. Basically, URLs not specifying the version will go to index.php, which can then decide which specific version file to include. What I am afraid of happening is loops - this should sit in location /widget { right? (As for putting the version of the API in the URL, I'm not trying to be RESTful, and the target audience is small.) Resources on how to do this entirely in index.php using "routers" also welcome :-/
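
    Something like the following might work (an untested sketch; it assumes a separate location ~ \.php$ block actually executes the PHP files, and the [^/.]+ patterns deliberately exclude dots so the rewritten *.php URIs cannot match the rules again and loop):

        location /widget {
            # /widget/v2, /widget/v2/  ->  /widget/v2.php (query string is appended)
            rewrite ^/widget/(v\d+)/?$          /widget/$1.php           last;
            # /widget/v2/list          ->  /widget/v2.php?act=list
            rewrite ^/widget/(v\d+)/([^/.]+)$   /widget/$1.php?act=$2    last;
            # /widget/list             ->  /widget/index.php?act=list
            rewrite ^/widget/([^/.]+)/?$        /widget/index.php?act=$1 last;
            # /widget, /widget/        ->  /widget/index.php
            rewrite ^/widget/?$                 /widget/index.php        last;
        }

    nginx appends the original query string automatically (after any new arguments), so /widget/v2/list?page=3 would become /widget/v2.php?act=list&page=3.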

  • regsvr32 fails on Windows Server 2003 - LoadLibrary("my.dll") failed

    - by John Mo
    The DLL in question is a VB6 web class. When deploying, the old one needs to be unregistered and the new one registered. This has worked for many years but has failed now with the following error in a message box: LoadLibrary("c:...\my.dll") failed - The filename, directory name, or volume label syntax is incorrect. I don't think there's a problem with the DLL being (un)registered because it's failing to unregister the existing DLL that was at one time successfully registered. I don't think there's a filename or path problem. Regsvr32 is executed using a batch file where the full path to the DLL is specified. The path has not changed. The batch file has not changed. The DLL has not changed (at least not the old one I need to unregister). It's a programming problem to me because I need to release an updated DLL, but I'm thinking it's maybe an admin problem best suited to the talents of the serverfault audience. I'm hoping somebody knows about a Windows Update or something that might hose regsvr32. Thank you.
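
    For reference, the unregister/register pair in such a batch file typically looks like this (the path is a placeholder):

        rem unregister the old DLL, then register the new one
        regsvr32 /u /s "C:\inetpub\app\my.dll"
        regsvr32 /s "C:\inetpub\app\my.dll"

    That particular error text usually means the path string is malformed by the time regsvr32 sees it, so it is worth echoing the exact expanded command line the batch runs; a stray quote or a changed environment variable produces exactly this symptom.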

  • Amazon CloudFront and EC2: Global Load Balancing

    - by Matt Rogish
    We have an app that is going to store and serve up a decent amount of data in S3 to a global audience where latency should be minimized. So, we've been doing tests with Amazon CloudFront and have seen favorable results. However, we need a thin middleware layer (to do security etc.) and we'd like to put that in EC2. Due to security restrictions, this middleware layer will do the file streaming from S3/CloudFront: S3/CloudFront - EC2 - Clients We can geographically distribute the EC2 nodes (US East/West, and Ireland) but the problem is that a client in the EU would hit our US server and be fed data from there, thus rendering much of the performance benefit of CloudFront moot. I've been digging through the EC2 docs but I can't find a built-in way to get a geographically distributed version of EC2 a la CloudFront. Elastic Load Balancing sounds like the way to go, but I can't seem to find a way with that to direct based on routing... Preferably, we'd like to keep the amount of stuff outside of EC2/S3/etc. to a minimum (for obvious reasons). Any ideas how to do that within the EC2/S3 framework? DNS/routing tricks? Thanks!

  • Recommendations for hosting large videos

    - by Clinton Blackmore
    I recently created and put a 45-minute, 300 MB video file on my website and told a mailing list about it. Checking my site stats, I see that I've used 20% of my "unlimited" bandwidth for the month. As I want to be able to have several videos like this, clearly I need to consider other options. The appeal of hosting files on my own site (aside from the supposedly unlimited disk space and bandwidth) is being able to control the format, resolution, and quality of the video(s), as well as to make it clear that I'm the copyright holder (although the videos will be under a Creative Commons license). I find that for the screencasts I'm making, having a high resolution (say 3/4 of 1024 * 768) really makes seeing what is going on on the screen easier. It is also always a plus to not have the experience marred by advertisements. One more wrench to throw in is that while the videos are non-commercial, they do promote a club, and it seems that that falls afoul of some terms of service (especially for free services; while free is very nice, I will certainly consider putting up some money). What recommendations do you have for (fairly) long, high-resolution videos? Should I look in depth at sites like YouTube and Vimeo, should I be considering a file-sharing site [I have no qualms with someone downloading the entire video first -- I wouldn't want to watch 45 minutes in my browser!], hosting files with BitTorrent (ugh -- I think that'd reduce my audience), or should I be looking into other web hosts (and if so, who?)

  • One Active Directory, Multiple Remote Desktop Services (Server 2012 solution)

    - by Trinitrotoluene
    What I am trying to do is quite complex, so I figured I'd throw it out to a wider audience to see if anyone can find a flaw. What I am trying to do (as an MSP/VAR) is design a solution that will give multiple companies a session-based remote desktop (companies that need to be kept completely separate), using only a handful of servers. This is how I imagine it at the moment:
    CORE SERVER - Server 2012 Datacentre (all below are Hyper-V servers)
    Server1: Cloud-DC01 (Active Directory Domain Services for mycloud.local)
    Server2: Cloud-EX01 (Exchange Server 2010 running multi-tenant mode)
    Server3: Cloud-SG01 (Remote Desktop Gateway)
    CORE SERVER 2 - Server 2012 Datacentre (all below are Hyper-V servers)
    Server1: Cloud-DC02 (Active Directory Domain Services for mycloud.local)
    Server2: Cloud-TS01 (Remote Desktop Session Host for Company A)
    Server3: Cloud-TS02 (Remote Desktop Session Host for Company B)
    Server4: Cloud-TS03 (Remote Desktop Session Host for Company C)
    What I thought about doing was setting up each organisation in their own OU (perhaps creating their OU structure based on the Exchange 2010 tenant OU structure so the accounts are linked). Each company would get a Remote Desktop Session Host server that would also serve as a file server. This server would be separated from the rest on its own range. The server Cloud-SG01 would have access to all these networks and route the traffic to the appropriate network when a client connects and is authenticated, so they are pushed onto the correct server (based on session collections in 2012). I won't lie, this is something I have come up with quite quickly, so there may well be something gapingly obvious that I am missing. Any feedback would be appreciated.

  • Tridion 2011 SP1 Core Service - expose to live server within PROD env

    - by Neil
    We have a requirement to allow our users to submit information about their "projects" - a small piece of text and a single image they upload. Ultimately we'll have a listing page of user-contributed projects that others can comment on and rate. We've decided to use Tridion's UGC for rating & comments site-wide for this first phase, which has got me thinking - UGC is tied to Tridion published pages & components, so if we want UGC on our user-submitted projects, they'll have to be created within Tridion as components themselves, not sit in some custom db table? Is this where the Core Service could come in? My understanding is that the CD Web Service is for retrieval, not for interacting with the Content Manager. Is it OK (!) architecturally to expose the Core Service only to our live application servers so our backend .NET code can create "project components" that can then be published by editors, allowing them to be commented on? Everything sounds pretty neat and tidy apart from the "exposing Core Service to live servers" bit. Without this, though, I'd have to write a custom way to "transfer" it back over to the Content Manager - maybe like Audience Manager Sync works? Anyone done this before?

  • MongoDB on 128mb 32-bit VPS (plus Tornado and Redis)

    - by apito
    I am curious about how MongoDB will perform on a limited VPS. Specifically, I'll deploy this configuration on a 32-bit Ubuntu 9.04 server with 128MB memory (UPDATE: now I'm considering 360MB too):
    nginx and Redis
    three instances of Tornado apps (one is for the mobile site; a limited app, not my primary audience); has around 8 collections; a social webapp for my community
    MongoDB
    Everything besides MongoDB seems to have a small footprint. Memory-mapping-wise, I don't know how MongoDB will behave. I know it's a little bit of a stretch to use this kind of config on a tiny VPS, but that's what I can afford for now. I expect to have.. hmm.. maybe ~50 15rps. I did my homework doing a lot of frontend optimizations, and YSlow says grade A 91 (ruleset V2) :-) Anyone willing to share experiences? E.g. how big the data set gets when Mongo hits the ceiling, performance when Mongo does a lot of disk IO, etc. Thanks. UPDATE: this is my pet project. I'll get back to you when I next have spare time to run the same httperf tests in a VBox with the exact spec. Suggestions on how to do stress testing are welcome; I'm new to this kind of stuff.
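
    For the stress-testing part, httperf itself is a reasonable starting point. The flags below are standard httperf options; the host name and numbers are placeholders:

        httperf --server myapp.example.com --port 80 --uri / \
                --rate 50 --num-conns 1000 --num-calls 10 --timeout 5

    --rate is new connections per second and --num-calls is requests per connection; a climbing error/timeout count in the report is usually the first sign that the box has hit its ceiling.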

  • Best way to build / implement a corporate developer Linux distro with multiple kernels?

    - by Garen
    At work we have Linux users who understandably prefer using Ubuntu. Problem is, we also have developer tools that only work with 'officially' supported Linux distributions that use much older 2.6.18-based kernels. (And even if they worked with newer ones, the vendors could always say they won't "support" the software unless it's on one of their 'officially' supported platforms.) We could of course just tell them to use CentOS or something else 2.6.18-based, and I'm sure their response would be something like: "you can take Ubuntu from our cold, dead hands." :) Which brings me to some questions--is there any good/easy/recommended way to run something like Ubuntu as a host VM and CentOS 5.x as a guest OS (with which system--Xen, KVM, VMware, ...?), and then roll that into our own custom internal distribution that could be easily installed? KVM looks like a good high-performance option just recently included in RHEL 5.4, but if hardware support for virtualization like Intel-VT or AMD-V is necessary, then I'd guess only those folks with fairly new PCs will be able to do it. Would be very interested to hear how anyone else has addressed this kind of issue. EDIT: The target audience / users of this kind of system would be developers, each of whom needs to run locally licensed commercial software, so building out some separate beefy central machines isn't an option, unfortunately, due to license restrictions. Even if that weren't the case, a couple of developers could quickly eat up the resources with parallel builds. :) Ideally, I was hoping there was some step-by-step guide out there for building your own pre-built distribution that had e.g. CentOS 5.x and Ubuntu Desktop as a guest.
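
    One note on the hardware-virtualization question: KVM does require Intel-VT or AMD-V, and checking a given box is a one-liner on any Linux install:

        egrep -c '(vmx|svm)' /proc/cpuinfo
        # 0 means no hardware virtualization exposed (KVM is out; also check it
        # isn't just disabled in the BIOS); anything greater than 0 means you're fine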

    Read the article

  • How to safely send newsletters on VPS (SMTP) w/ non-hosted domain as "From" email?

    - by Andy M
    Greetings, I'm trying to understand the safest way to use SMTP. I'm considering purchasing a second virtual server mainly for email sending, on which I will set up PHPlist (a free open-source mailing program), so we have the freedom to send unlimited newsletters (...well, 10,000 per day at least, which requires a VPS rather than shared hosting). Here's my current setup with paid mass-mailing software: I have a website - let's call it MyHostedDomain.org. I send newsletters with the From / Reply-To address as [email protected], which isn't hosted by me, but I have access to the email account. Can I more or less safely set this up with an SMTP server on a VPS? i.e. send messages using [email protected] as the visible address, but having it all go through my VPS SMTP? I cannot authenticate it, right? Is this too risky a practice? Is my only hope to use an address with a domain on the VPS, i.e. [email protected]? I already have a Reverse DNS record for the domain hosted on my current VPS. I also see other suggestions, like SenderID and DKIM. But with all these things combined, will this still work? I don't want to get blacklisted, but the good thing is this is a somewhat private list, and users opt in to subscribe. So it's a self-made audience. (If it makes you feel better, this is related to a non-profit activity, not some marketing scam... it's for a good cause, I assure you!)
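
    On the "I cannot authenticate it, right?" point: you cannot prove ownership of the mailbox itself, but whoever controls the From domain's DNS can authorize your VPS to send for that domain by publishing an SPF record. A sketch, where the IP is a placeholder for your VPS address and the record goes in the From domain's zone file:

        @  IN TXT  "v=spf1 ip4:203.0.113.10 ~all"

    Combined with DKIM signing, that gives receivers a way to verify your VPS as a legitimate source for the domain instead of guessing.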

  • Why still use JPG compression? [closed]

    - by Torben Gundtofte-Bruun
    Back when the JPG image format was introduced, it made a lot of sense to reduce the file size, even accepting a loss in image quality, because files were being downloaded over a slow and expensive modem connection. In today's world, file size is no longer a concern, at least not regarding JPG where it seems silly to save 45kB on a photo. But my image editing apps still prompt me for the desired compression level when I save a file. Does it still make sense to go with the default 85? Why should I not crank it up to 100 for all files? Update based on comments: For web work, I might use PNG instead. But every smartphone and camera produces JPG files. The question arises when I save these edits. Audience is my own harddisk. We're talking photos, 2-5MB apiece. Chroma, subsampling, DCT - sorry, never heard of it. I'm a home user, not Photoshop guru. For the record, I use Paint Shop Pro on Win, and Gimp on Linux.

  • Allow more websocket connections

    - by Switz
    I want to load balance my node.js (DerbyJS, to be specific) application on a basic Linode (512MB RAM). It can probably take more than one process running at once. The queries/database do not concern me as I'm not doing anything intensive. The problem at the moment is that it can only handle up to ~40 websocket connections at once. I would love it if I could get that number into the few-hundred-plus range. I anticipate a lot of traffic at launch due to the fact that it's a highly niche community with an engaged audience, but afterwards it should be fine with just ~20-40 connections at once, which it handles perfectly as of now. I don't mind spending a bit of money for a week or two's worth of running, but I also don't want to switch production environments. How can I test the process to see how many instances I am able to run on the box? Will increasing the number of processes increase the number of websockets I can handle, or is that a limitation of the server's network? I have an old MacBook Pro running Linux sitting next to me that has 2GB RAM and a 2.8 dual-core processor. Could I use this to handle some of the extra load? I could probably load balance with nginx to its IP. I'm on a FiOS home network. If you have any suggestions, I'd really appreciate it. Thanks
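
    On running more than one process: Node's built-in cluster module forks multiple workers behind one listening socket. A minimal sketch (./server is a hypothetical module that starts the Derby/HTTP app; note that websockets generally need sticky sessions once connections are spread across workers or machines):

        // fork one worker per CPU core; respawn any that die
        var cluster = require('cluster');
        var numCPUs = require('os').cpus().length;

        if (cluster.isMaster) {
          for (var i = 0; i < numCPUs; i++) cluster.fork();
          cluster.on('exit', function () { cluster.fork(); });
        } else {
          require('./server');
        }

    Whether this raises the websocket ceiling depends on where the current limit actually is (file descriptors, memory, or the event loop), so it is worth measuring with a single worker first.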

  • What’s New from the Oracle Marketing Cloud at Oracle OpenWorld 2014?

    - by Richard Lefebvre
    Marketing—CX Central is your hub for all things Marketing related at OpenWorld in San Francisco, September 28-October 2, 2014. Learn how to personalize the modern marketing journey to improve customer loyalty. We’re hosting more than 60 breakout sessions, half of which will highlight customer success stories from marquee brands including Bizo, Comcast, Dell, Epson, John Deere, Lane Bryant, ReadyTalk and Shutterfly.
    Moscone West, Levels 2 and 3
    To learn more about how modern marketing works, visit Moscone West, levels 2 and 3, for exciting demos of each of the Oracle Marketing Cloud solutions (BlueKai, Compendium, Eloqua, Push I/O, and Responsys). You also can check out our stations for Vertical Marketing Best Practices, the Markie Awards, and more!
    CX Spotlight Sessions
    “Accelerating Big Profits in Big Data,” Jeff Tanner, Baylor University
    “Using Content Marketing to Impact Every Stage of the Buyer’s Journey,” Jennifer Agustin, Bizo
    “Expanding Your Marketing with Proven Testing and Optimization,” Brian Border, Shutterfly and Matthew Balthazor, Epson
    “Modern Marketing: The New Digital Dialogue,” Cory Treffiletti, Oracle
    A Special Marquee Session
    Dell’s Hayden Mugford will speak on “The Digital Ecosystem: Driving Experience Through Contact Engagement.” She will highlight how the organization built a digital ecosystem that supports a behaviorally driven, multivehicle nurturing campaign. The Dell 1:1 Global Marketing team worked with multiple partners to innovate integrations with Oracle Eloqua, Oracle Real-Time Decisions for real-time decision logic, and a content management system (CMS) that enables 100 percent customized e-mails. The program doubled average order values for nurtured contacts versus non-nurtured and tripled open and click-through rates versus push e-mail.
    Other Oracle Marketing Cloud Session Highlights
    Thought leadership by role
    Exploring the benefits of moving to the Cloud
    Product line roadmaps and innovations in Marketing
    Technical deep dives for product lines within Marketing
    Best practices and impactful business measurements
    Solutions that are integrated across CX
    Target Audience
    Session content is geared toward professionals in Marketing, Marketing Operations, Marketing Demand Generation, and Social: Chief Marketing Officers, Vice Presidents, Directors and Managers.
    Outcomes
    Customers attending Marketing—CX Central @ OpenWorld will be able to:
    Gain insight into delivering consistent cross-channel marketing
    Discover how to provide the right information to the right customer at the right time and with the right channel
    Get answers to burning questions and advice on business challenges
    Hear from other Oracle customers about recommended best practices to help their organization move forward
    Network and share ideas to help create a strategy for connecting with customers in better ways
    It Wouldn’t Be an Oracle Marketing Cloud Event Without a Party!
    We’re hosting CX Central Fest: a unique customer experience specifically designed for attendees of CX Central. It will include a chance to rock out at a private concert featuring Los Angeles indie electronic pop group, Capital Cities! Join us Tuesday, September 30 from 7-9 p.m.
    OpenWorld is a fabulous way for your customers to see all that Oracle Marketing Cloud has to offer. Pass on an invitation today.
    By Laura Vogel (Oracle)

  • Which programming idiom to choose for this open source library?

    - by Walkman
    I have an interesting question about which programming idiom is easier for beginner developers writing concrete file parsing classes. I'm developing an open source library, one of whose main functions is to parse plain text files and get structured information from them. All of the files contain the same kind of information, but can be in different formats like XML, plain text (each of them structured differently), etc. There is a common set of information pieces which is the same in all (e.g. player names, table names, some id numbers). There are formats which are very similar to each other, so it's possible to define a common Base class for them to facilitate concrete format parser implementations. So I can clearly define base classes like SplittablePlainTextFormat, XMLFormat, SeparateSummaryFormat, etc. Each of them hints at the kind of structure it aims to parse. All of the concrete classes should have the same information pieces, no matter what. To be useful at all, this library needs to define at least 30-40 of these parsers. A couple of them are more important than others (obviously the more popular formats). Now my question is, which is the best programming idiom to choose to facilitate the development of these concrete classes? Let me explain: I think imperative programming is easy to follow even for beginners, because the flow is fixed, the statements just come one after another. Right now, I have this:

        class SplittableBaseFormat:
            def parse(self):
                "Parses the body of the hand history, but first parse header if not yet parsed."
                if not self.header_parsed:
                    self.parse_header()
                self._parse_table()
                self._parse_players()
                self._parse_button()
                self._parse_hero()
                self._parse_preflop()
                self._parse_street('flop')
                self._parse_street('turn')
                self._parse_street('river')
                self._parse_showdown()
                self._parse_pot()
                self._parse_board()
                self._parse_winners()
                self._parse_extra()
                self.parsed = True

    So a concrete parser needs to define these methods, and can implement each of them any way it wants. Easy to follow, but it takes longer to implement each individual concrete parser. So what about declarative? In this case, base classes (like SplittableFormat and XMLFormat) would do the heavy lifting based on regex and line/node number declarations in the concrete class, and concrete classes would have no code at all, just line numbers and regexes, maybe other kinds of rules. Like this:

        class SplittableFormat:
            def parse_table(self):
                "Parses TABLE_REGEX and gets information"
                # set attributes here
            def parse_players(self):
                "Parses PLAYER_REGEX and gets information"
                # set attributes here

        class SpecificFormat1(SplittableFormat):
            TABLE_REGEX = re.compile('^(?P<table_name>.*) other info \d* etc')
            TABLE_LINE = 1
            PLAYER_REGEX = re.compile('^Player \d: (?P<player_name>.*) has (.*) in chips.')
            PLAYER_LINE = 16

        class SpecificFormat2(SplittableFormat):
            TABLE_REGEX = re.compile(r'^Tournament #(\d*) (?P<table_name>.*) other info2 \d* etc')
            TABLE_LINE = 2
            PLAYER_REGEX = re.compile(r'^Seat \d: (?P<player_name>.*) has a stack of (\d*)')
            PLAYER_LINE = 14

    So if I want to make it possible for non-developers to write these classes, the way to go seems to be the declarative one; however, I'm almost certain I can't eliminate the declarations of regexes, which clearly need (senior :D) programmers, so should I care about this at all? Do you think it matters to choose one over another, or does it not matter at all? Maybe if somebody wants to work on this project, they will; if not, no matter which idiom I choose.
Can I "convert" non-programmers to help developing these? What are your observations? Other considerations: Imperative will allow any kind of work; there is a simple flow, which they can follow but inside that, they can do whatever they want. It would be harder to force a common interface with imperative because of this arbitrary implementations. Declarative will be much more rigid, which is a bad thing, because formats might change over time without any notice. Declarative will be harder for me to develop and takes longer time. Imperative is already ready to release. I hope a nice discussion will happen in this thread about programming idioms regarding which to use when, which is better for open source projects with different scenarios, which is better for wide range of developer skills. TL; DR: Parsing different file formats (plain text, XML) They contains same kind of information Target audience: non-developers, beginners Regex probably cannot be avoided 30-40 concrete parser classes needed Facilitate coding these concrete classes Which idiom is better?

  • SQLAuthority News – SQL Server Technology Evangelists and Evangelism

    - by pinaldave
    This is the exact conversation that I had with three people during the recent SQL Server Public Training.
    Person 1: “Are you an SQL Server Evangelist?”
    Pinal: “No, but Vinod Kumar is.”
    Person 1: “Who are you?”
    Person 2: “He is Pinal, haha!”
    Person 1: “I know that, but don’t you evangelize SQL Server Technology?”
    Pinal: “Hmm… I do that…”
    Person 1: “In that case, why don’t you call yourself an Evangelist?”
    Pinal: “…! …”
    Person 2: “Good Question! Who are you Pinal?”
    Pinal: “I think you are asking my title, is that correct?”
    Person 1: “Maybe.”
    Pinal: “I am a Mentor, and I work for Solid Quality Mentors.”
    Person 2: “I have seen you listing yourself as the Founder of SQLAuthority.com… so…”
    Pinal: “Yeah that’s true.”
    Person 3: “Let me summarize what these people are asking. What they are asking is that you can have multiple titles, so is being an evangelist one of your titles or not?”
    Pinal: “Well, I am an SQL Server MVP and lots of people say that we are also evangelists of technology. In fact, we are all evangelists of technology, aren’t we?”
    Person 1: “So let me come back to my original topic: If you are an SQL Server Evangelist, then what is this evangelism?”
    Person 2: “And who is Vinod Kumar – I have heard about him a lot.”
    Pinal: “Oh okay. Now I got it. Let me explain …”
    The answer was quite long, but since this conversation I have been thinking about the words “evangelist” and “evangelism.” I think being an evangelist is one of the most respected jobs in the world, and to do this job one must bear lots of responsibilities. Two questions were asked of me, so let me answer both one by one.
    Who is Vinod Kumar?
    Vinod Kumar is a Technology Evangelist for Microsoft and one of the most respected persons in the SQL Server Community in India. Let me copy-paste my note from the previous TechEd India 2010 article: “I attended 2 sessions of Vinod Kumar. Vinod is a natural storyteller, so there was no doubt that his sessions would be jam-packed. People attended his sessions simply because Vinod was the best speaker at the event. He did not disappoint the audience a single time; he is truly a good speaker. He knows his stuff very well. I personally do not think that in India he can be compared to anyone for SQL.”
    Pinal Dave and Vinod Kumar
    What is Technology Evangelism?
    Here I am listing three posts written by Vinod Kumar, wherein he talks about Technology Evangelism and Technology Evangelists in an in-depth manner. They are highly regarded articles in the Community:
    Evangelism beyond boundaries with an Evangelists !!!
    Technology Evangelism Demystified
    New face of Online Technology Evangelism
    I strongly recommend reading them all. These are wonderful blog posts. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: About Me, MVP, Pinal Dave, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority News, T SQL, Technology

  • SQLAuthority News – Professional Development and Community

    - by pinaldave
    I was recently invited by Hyderabad Techies to deliver a keynote for their 16-day online session called TECH THUNDERS. This event has been running from May 15 and will continue up to the end of the month (May 30). There will be a total of 30 sessions. On every evening of those 16 days, there will be either one or two sessions from several noted industry experts. It is the same group which received the Microsoft Community Impact Award as the Best User Group in India for developers. I have never talked about Professional Development before. Even though this was my first time to do so, I accepted the wonderful challenge for the sake of the thousands in the audience who were expected to attend this online event. Time is of the essence; I had 15 minutes to deliver the keynote and open the event. The reason I was nervous was that I had to cover precisely 15 minutes - no more, no less. If I had an hour, I would have been very confident, because I knew I could do a good job for sure. However, I still needed to open the event as well as possible even though the time was short. I finally created a small six-slide presentation. In reality, there were only two slides which had the main contents of my keynote; the remaining slides were just wrappers and decoration. You can download the complete slide deck from here. The image used in the slide deck is courtesy of blog reader Roger Smith, who sent it to me. The slide on which I spent a good amount of time is the one which talks about Professional Development. The content of the slide is as follows:
    Today, Technology and You
    Keep your eyes, ears and senses open – Stay Active!
    You are not the first one who faced the problem – Search Online!
    Learn the web – Blogs, Forums and Friends!
    Trust the Technology, Not Print – Test Everything!
    Community and You!
    I had very little time to create the slide deck, as I was busy the whole day doing the Advanced SQL Server Training. I put these slides together during the tea/coffee break of my session. Though it was just six bullet points, I received quite a few emails right after the keynote requesting me to talk more about this subject and share the details of my slide deck. I have talked with the event organizer, and he will put the keynote online very soon. The subject of the talk is very simple; it revolves around the community. Times have changed, and the Internet has come a long way from where it was many years ago. Now that we are all connected, help via the Internet and useful software are easily available around us. In fact, RSS, newsletters and a few other technologies have progressed so much that helpful news is now delivered to our doorsteps, instead of us going out and seeking it. Sometimes a simple online search solves the problems of many developers. The community is now the first stop for any developer when he or she needs help or just wants to hang around and share some thoughts. I strongly suggest everybody be a part of the tech community. Be it an online community, an offline community or just a local user group, I strongly advise all of you to get involved. I am active in the community, and I must say I recommend getting drawn into it. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: MVP, Pinal Dave, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL User Group, SQLAuthority News, T SQL, Technology Tagged: Community

  • GPGPU

    What
    GPU obviously stands for Graphics Processing Unit (the silicon powering the display you are using to read this blog post). The extra GP in front of that stands for General Purpose computing. So, altogether GPGPU refers to computing we can perform on a GPU for purposes beyond just drawing on the screen. In effect, we can use a GPGPU a bit like we already use a CPU: to perform some calculation (that doesn't have to have any visual element to it). The attraction is that a GPGPU can be orders of magnitude faster than a CPU.
    Why
    When I was at the SuperComputing conference in Portland last November, GPGPUs were all the rage. A quick online search reveals many articles introducing the GPGPU topic. I'll just share 3 here: pcper (ignoring all pages except the first, it is a good consumer perspective), gizmodo (nice take using mostly layman terms) and vizworld (answering the question on "what's the big deal").
    The GPGPU programming paradigm (from a high level) is simple: in your CPU program you define functions (aka kernels) that take some input, perform the costly operation and return the output. The kernels are the things that execute on the GPGPU, leveraging its power (and hence executing faster than they could on the CPU), while the host CPU program waits for the results or asynchronously performs other tasks.
    However, GPGPUs have different characteristics to CPUs, which means they are suitable only for certain classes of problem (i.e. data parallel algorithms) and not for others (e.g. algorithms with branching or recursion or other complex flow control). You also pay a high cost for transferring the input data from the CPU to the GPU (and, vice versa, the results back to the CPU), so the computation itself has to be long enough to justify the transfer overhead. If your problem space fits the criteria then you probably want to check out this technology.
    How
    So where can you get a graphics card to start playing with all this? At the time of writing, the two main vendors, ATI (owned by AMD) and NVIDIA, are the obvious players in this industry. You can read about GPGPU on this AMD page and also on this NVIDIA page. NVIDIA's website also has a free chapter on the topic from the "GPU Gems" book: A Toolkit for Computation on GPUs.
    If you followed the links above, then you've already come across some of the choices of programming models that are available today. Essentially, AMD is offering their ATI Stream technology, accessible via a language they call Brook+; NVIDIA offers their CUDA platform, which is accessible from CUDA C. Choosing either of those locks you into the GPU vendor, and hence your code cannot run on systems with cards from the other vendor (e.g. imagine if your CPU code would run on Intel chips but not AMD chips). Having said that, both vendors plan to support a new emerging standard called OpenCL, which theoretically means your kernels can execute on any GPU that supports it. To learn more about all of these there is a website: gpgpu.org. The caveat about that site is that (currently) it completely ignores the Microsoft approach, which I touch on next.
    On Windows, there is already a cross-GPU-vendor way of programming GPUs, and that is the DirectX API. Specifically, on Windows Vista and Windows 7, the DirectX 11 API offers a dedicated subset of the API for GPGPU programming: DirectCompute. You use this API on the CPU side to set up and execute the kernels that run on the GPU. The kernels are written in a language called HLSL (High Level Shader Language).
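
    To give a flavor of what a kernel looks like, here is a minimal HLSL compute shader of my own (just a sketch, not from any official sample: it doubles every element of a buffer, and the names and thread-group size are arbitrary):

        // DirectCompute kernel: out[i] = 2 * in[i]
        StructuredBuffer<float>   inputData  : register(t0);
        RWStructuredBuffer<float> outputData : register(u0);

        [numthreads(64, 1, 1)]   // 64 threads per thread group
        void DoubleIt(uint3 id : SV_DispatchThreadID)
        {
            outputData[id.x] = 2.0f * inputData[id.x];
        }

    The CPU side compiles this, binds the two buffers, and calls Dispatch with enough 64-thread groups to cover the data; that host-side setup is exactly what the DirectCompute API provides.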
    You can use DirectCompute with HLSL to write a "compute shader", which is the term DirectX uses for what I've been referring to in this post as a "kernel". For a comprehensive collection of links about this (including tutorials, videos and samples), please see my blog post: DirectCompute. Note that there are many efforts to build even higher-level languages on top of DirectX that aim to expose GPGPU programming to a wider audience by making it as easy as today's mainstream programming models. I'll mention here just two of those efforts: Accelerator from MSR and Brahma by Ananth. Comments about this post welcome at the original blog.

  • Travelling MVP #1: Visit to SharePoint User Group Finland

    - by DigiMortal
    My first self-organized trip this autumn was a visit to the SharePoint User Group Finland community evening. The active community leaders who make things like these possible are worth mentioning, and on the spug.fi side it was Jussi Roine who invited me. Here is my short review of my trip to Helsinki.
    User group meeting
    As Helsinki is near Tallinn, I went there by ship. It was easy to get from the sea port to the venue, and I also had a few minutes to visit an academic book store. The community evening was held on the ground floor of a city center hotel, and the room was conveniently located near the hotel bar and restaurant. Here is the meeting schedule:
    Welcome (Jussi Roine)
    OpenText application archiving and governance for SharePoint (Bernd Hennicke, OpenText)
    Using advanced C# features in SharePoint development (Alexey Sadomov, NED Consulting)
    Optimizing public-facing internet sites for SharePoint (Gunnar Peipman)
    After the meeting, of course, the local dudes didn't walk away but continued with some beers and discussion.
    Sessions
    After welcome words by Jussi, there was a session by Bernd Hennicke, who spoke about OpenText. His session covered OpenText's history and current state. After this introduction he spoke about OpenText's products for SharePoint and gave the audience a good overview of where their SharePoint extensions fit in the big picture. I usually don't like these vendor sessions, but this one was good. I mean, the vendor dudes were not aggressively selling something. They were way different – kind people who introduced their stuff and later answered questions. They acted like good guests.
    The second speaker was Alexey Sadomov, who works on SharePoint development projects. He introduced some ways to get over some limitations of SharePoint. I won't go deeply into his session here, but it's worth mentioning that it was a strong one. It is not a rare case that developers have to make nasty hacks to SharePoint. I mean really nasty hacks. Often these hacks are long blocks of code that use terrible techniques to achieve the result. Alexey introduced some very civilized ways of applying hacks.
    Alex Sadomov, SharePoint MVP, speaking about SharePoint coding tips and tricks on C#
    I spoke about how I optimized the caching of the Estonian Microsoft community portal, which runs on SharePoint Server and uses the publishing infrastructure. I made no actual demos on SharePoint because I wanted to focus on the optimization process and share some experiences about how to get caches optimized and how to measure them.
    Networking
    After the official part there was time to talk and discuss with people. Finns are cool – they have beers and they are glad. It was not a big community event, but people were like one good family. The developers there often work for big companies, and it was very interesting for me to hear about their experiences with SharePoint. One thing was a little bit surprising to me – the SharePoint guys in Finland also talk actively about Office 365 and online SharePoint. That doesn't happen often here in Estonia. I had to leave a little before 21:00 to get to my ship back to Tallinn. I am sure the spug.fi dudes continued the nice evening and had at least as good a time as I did. Do I want to go back to Finland and meet these guys again? Yes, sure, let's do it again! :)

  • SQL SERVER – What is a Technology Evangelist?

    - by pinaldave
    When you hear that someone is an “evangelist” the first thing that might pop into your mind is the Christian church. In fact, the term did come from Christianity, and basically means someone who spreads the news about their faith. In the technology world, the same definition is true. Technology evangelists are individuals who, professionally or in their spare time, spread the news about the latest new products. Sounds like a salesperson, right? No, they are absolutely different. Salespeople also keep up to date with a large number of people, and like to convince others to buy their product – and some will go to any lengths to sell! An evangelist, on the other hand, is brutally honest about the product, even if sometimes it means not making a sale. An evangelist is out there to tell the TRUTH. A salesperson needs to make sales. An Evangelist offers a Solution independent of the Technology used – a Salesperson offers a Particular Technology. With this definition in mind, you can probably think of a few technology evangelists you already know. Maybe it’s a relative or a neighbor, someone who loves keeping up with the latest trends and is always willing to tell you about them if you ask even the simplest question. And, in fact, they probably are evangelists and don’t even know it. For a long time, the work of technology evangelism was in the hands of the community and community technology leaders. Luckily, various organizations have now understood the importance of the community and of helping the community reach its goals. This has led them to create the role of “Technology Evangelist”. Let me talk about one of the most famous Evangelists of SQL Server technology. A Technology Evangelist belongs only to the technology, above any country, race, location or any other thing. They are dedicated to the technology. Vinod Kumar is such a man, who has given a lot to the community. For years he was a Technology Evangelist for Microsoft, and maintained a blog that was dedicated to spreading his enthusiasm for his favorite products. He is one of the most respected Evangelists in the field, and has done a lot of work to define the job for other professionals. Vinod’s career has since progressed to the Microsoft Technology Center (read his post), but he is continuing to be a strong presence in the evangelism community. I have a lot of respect for Vinod. He has done a lot for the community and technology evangelism. Everybody dreams of serving the community the way he does, and he is a great role model for evangelists everywhere. On his blog, Vinod created one of the best descriptions of a Technology Evangelist. It defined the position and also made the distinction between evangelist and salesperson extremely clear. I will include the highlights of that list here, because no one can say it better than Vinod:
    Bundle of energy – Passion is their middle name
    Wonderful Story tellers
    Empathy, Trust, Loyalty, Openness, Accessibility and Warmth
    Technology Enthusiast – Doers
    Love people, people and more people – Community oriented
    Unique Style and Leadership qualities !!!
    Self-Confident, Self-Motivated but a student
    (To read the full list, see: Evangelism Beyond Borders with Evangelists) His blog is a must-read for anyone interested in technology evangelism as a career or simply a hobby. His advice about how to gain an audience and become a trusted advisor is the best in the business. I think there is an evangelist in everyone. I, too, consider myself a technology evangelist.
    Regular readers of this blog will recognize that I am dedicated to bringing information to the masses, and that I pride myself on being brutally honest and giving every product fair consideration. I think there is no better way to say it than: “Once an Evangelist – Always an Evangelist!” Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: About Me, Database, MVP, Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority News, T SQL, Technology Tagged: Evangelist

  • IntelliTrace As a Learning Tool for MVC2 in a VS2010 Project

    - by Sam Abraham
    IntelliTrace is a new feature in Visual Studio 2010 Ultimate Edition. I see this valuable tool as a “Program Execution Recorder” that captures information about events and calls taking place as soon as we hit the VS2010 play (Start Debugging) button or the F5 key. Many online resources already discuss IntelliTrace and the benefit it brings to both developers and testers alike, so I see no value in just repeating this information. In this brief blog entry, I would like to share with you how I will be using IntelliTrace in my upcoming talk at the Ft Lauderdale ArcSig .Net User Group Meeting on April 20th 2010 (check http://www.fladotnet.com for more information), as a learning tool to demonstrate the internals of the lifecycle of an MVC2 application. I will also be providing some helpful links that cover IntelliTrace in more detail at the end of my article for reference.
    IntelliTrace is set up by default to only capture execution events. Microsoft did such a great job on optimizing its recording process that I haven't even felt the slightest performance hit with IntelliTrace running as I was debugging my solutions and projects. For my purposes here, however, I needed to capture more information beyond execution events, so I turned on the option for capturing calls in addition to events, as shown in Figures 1 and 2. Changing capture options will require us to stop our debugging session and start over for the new settings to take effect.
    Figure 1 – Access IntelliTrace options via the Tools->Options menu items
    Figure 2 – Change IntelliTrace Options to capture call information as well as events
    Notice the warning with regards to potentially degrading performance when selecting to capture call information in addition to the default events-only setting. I have found this warning to be true. My subsequent tests showed slowness in page load times compared to rendering those same exact pages with the “events-only” option selected. Execution recording is auto-started along with the new debugging session of our project. At this point, we can simply interact with the application and continue executing normally until we decide to “play back” the code we have executed so far. For code replay, the first step is to “break” the current execution, as shown in Figure 3.
    Figure 3 – Break to replay recording
    A few tries later, I found a good process to quickly find and demonstrate the MVC2 page lifecycle. First off, we start with the event view as shown in Figure 4 until we find an interesting event that needs further studying.
    Figure 4 – Going through IntelliTrace's events and picking a specific entry of interest
    We now can, for instance, study how the highlighted HTTP GET request is being handled by clicking on the “Calls View” for that particular event. Notice that IntelliTrace shows us all calls that took place in servicing that GET request. Double clicking on any call takes us to a more granular view of the call stack within that clicked call, up until getting to a specific line of code where we can do a line-by-line replay of the execution from that point onwards using F10 or F11, just like our typical good old VS2008 debugging helped us accomplish.
    Figure 5 – Switching to call view on an event of interest
    Figure 6 – Double clicking on a call shows a more granular view of the call stack
    In conclusion, the introduction of IntelliTrace as a new addition to the VS developers' tool arsenal enhances the development and debugging experience and effectively tackles the “no-repro” problem.
It will also hopefully enhance my audience's experience of listening to me speak about the MVC2 page lifecycle, which I can now easily demonstrate visually, thereby improving the probability of keeping everybody awake a little longer. IntelliTrace References: http://msdn.microsoft.com/en-us/magazine/ee336126.aspx http://msdn.microsoft.com/en-us/library/dd264944(VS.100).aspx

  • Manage Your WordPress Blog Comments from Your Windows Desktop

    - by Matthew Guay
    Are you never more than a few steps away from your PC and want to keep up with comments on your blog? Then here's how you can stay on top of your WordPress comments right from your desktop. Wp-comment-notifier is a small free app for Windows that lets you easily view, approve, reply to, and delete comments from your WordPress blog. Whether you have a free WordPress.com blog or are running WordPress on your own server, this tool can keep you connected to your comments. Unfortunately it only lets you manage comments on one blog, so if you manage multiple WordPress-powered sites you may find this a downside. Otherwise, it works great and helps you stay on top of the conversation at your blog.
    Get notified with wp-comment-notifier
    Download the wp-comment-notifier (link below) and install as usual. Run it once it's installed. Enter your blog address, username, and password when prompted. Wp-comment-notifier will automatically set up your account and download recent comments. Finally, enter your blog's name, and click Finish.
    Review Comments with wp-comment-notifier
    You can now review your comments directly by double-clicking the new WordPress icon in your system tray. The window has 3 tabs: comments, pending, and spam. Select a comment to reply, edit, spam, or delete it directly from your desktop. If you select Edit, you can edit the HTML of the comment (including links) directly from within the notifier. You can approve or permanently delete any spam messages that are caught by your blog's spam filter. Whenever new comments come in, you'll see a tray popup letting you know how many comments are waiting to be approved or are in the spam folder. Click the popup to open the editor. Now you can directly approve that pending comment without going to your WordPress admin page. When you're done, just press Enter on your keyboard to post the reply. Or, if you want to reply to the comment, click the reply link and enter your comment in the entry box at the bottom. If you ever want to double-check whether there are any new comments, just right-click on the tray icon and select refresh. Finally, you can change the settings via the Configuration link in the tray menu or by clicking the gear button at the bottom of the review window. You can change how often it checks for new comments, whether to start the notifier at system startup, and your account information.
    Conclusion
    Whether you're managing your personal blog or administer a site with millions of hits per day, staying on top of the conversation is one of the best ways to build and maintain your audience. With wp-comment-notifier, you can be sure that you're always in control of your blog's comments. This app is especially useful if you review all comments before allowing them to be published. Download wp-comment-notifier

  • Ann Arbor Day of .NET 2010 Recap

    - by PSteele
    Had a great time at the Ann Arbor Day of .NET on Saturday. Lots of great speakers and topics, and a chance to meet up with friends you usually only communicate with via email/twitter.
    My Presentation
    I presented "Getting up to speed with C# 3.5 — Just in time for 4.0!". There's still a lot of devs that are either stuck in .NET 2.0 or just now moving to .NET 3.5. This presentation gave highlights of a lot of the key features of 3.5. I had great questions from the audience. Afterwards, I talked with a few people who are just now getting into 3.5, and they told me they had a lot of "A HA!" moments when something I said finally clicked and made sense from a code sample they had seen on the web. Thanks to all who attended! A few people have asked me for the slides and demo. The slides were nothing more than a table of contents. 90% of the presentation was spent inside Visual Studio demo'ing new techniques. However, I have included them in the ZIP file with the sample solution. You can download it here.
    Dennis Burton on MongoDB
    I caught Dennis Burton's presentation on MongoDB. I was really interested in this one, as I've missed it the last few times Dennis has given it to local user groups. It was very informative, and I want to spend some time learning more about MongoDB. I'm still an old-school relational guy, but I'm willing to investigate alternatives.
    Brian Genisio on Prism
    Since I'm not a Silverlight/WPF guy (yet), I wasn't sure this would interest me. But I talked with Brian for a couple of minutes before the presentation and he convinced me to catch it. And I'm glad he did. Prism looks like a very nice framework for "composable UIs" in Silverlight and WPF. I like the whole "dependency injection" feel to it. Nice job, Brian!
    GiveCamp Planning
    I spent some time Saturday working on things for the upcoming GiveCamp (which is why I only caught a few sessions). Ann Arbor's Day of .NET and GiveCamp have both been held at Washtenaw Community College, so I took some time (along with fellow GiveCamp planners Mike Eaton and John Hopkins) to check out the new location for Ann Arbor GiveCamp this year! In the past, WCC has let us use the Business Education (BE) building for our GiveCamps. But this year, they're moving us over to the Morris Lawrence (ML) building. Let me tell you – this is a step UP! In the BE building, we were spread across two floors and spread out into classrooms. Plus, our opening and closing ceremonies were held in the Liberal Arts (LA) building – a bit of a walk from the BE building. In the ML building, we're together for the whole weekend. We've got a large open area (which can be sectioned off if needed) for everyone to work in. Right next to that, we have a large area where we can set up tables and eat. And it helps that we have a wonderful view while eating (yes, that's a lake out there with a fountain). The ML building also has showers (which we'll have access to!) and its own auditorium for our opening and closing ceremonies. All in all, this year's GiveCamp will be great! Stay tuned to the Ann Arbor GiveCamp website. We'll be looking for volunteers (devs, designers, PMs, etc…) soon! Technorati Tags: .NET, Day of .NET, GiveCamp, MongoDB, Prism
