Search Results

Search found 43173 results on 1727 pages for 'readers question'.


  • Bulk Rename Tool is a Lightweight but Powerful File Renaming Tool

    - by Jason Fitzpatrick
There’s no need to settle for overly simplistic file renaming tools as long as Bulk Rename Tool is around. It’s lightweight, insanely customizable, portable, and sure to make short work of any renaming task you throw at it. Bulk Rename Tool is a great portable application (available as an installed version if you crave context menu integration) that blasts through file renaming tasks. The main panel is intimidatingly packed with toggles and variables you can alter; this isn’t a one-click solution by any means. That said, once you get comfortable using the interface it’s lightning fast and extremely flexible. One tip that will save you an enormous amount of frustration when you get started: make sure to highlight the files you want to change in the file preview window (located in the upper right corner) or else you won’t see the preview and won’t know if the changes you’re making in the control panel are yielding the file names you desire. Hit up the link below to read more and grab a copy; Bulk Rename Tool is free, Windows only. Bulk Rename Tool

    Read the article

  • SQL SERVER – Weekly Series – Memory Lane – #031

    - by Pinal Dave
Here is a list of selected articles from SQLAuthority.com across all these years. Instead of just listing all the articles, I have selected a few of my favorite articles and listed them here with additional notes. Let me know which of the following is your favorite article from memory lane. 2007 Find Table without Clustered Index – Find Table with no Primary Key A clustered index is a very important concept for any table; it impacts performance heavily. Here is a quick script to find tables without a clustered index. Replace TEXT with VARCHAR(MAX) – Stop using TEXT, NTEXT, IMAGE Data Types Question: “Is VARCHAR(MAX) big enough to store the TEXT field?” Answer: “Yes, VARCHAR(MAX) is big enough to accommodate a TEXT field. The TEXT, NTEXT and IMAGE data types of SQL Server 2000 will be deprecated in a future version of SQL Server. SQL Server 2005 provides backward compatibility for these data types, but it is recommended to use the new data types, which are VARCHAR(MAX), NVARCHAR(MAX) and VARBINARY(MAX).” Limiting Result Sets by Using TABLESAMPLE – Examples Introduced in SQL Server 2005, TABLESAMPLE allows you to extract a sampling of rows from a table in the FROM clause. The rows retrieved are random and they are not in any order. The sampling can be based on a percentage or a number of rows. You can use TABLESAMPLE when only a sampling of rows is necessary for the application instead of a full result set. User Defined Functions (UDF) Limitations UDFs have their own advantages and uses, but in this article we look at the limitations of UDFs: things a UDF cannot do, and why stored procedures are considered more flexible than UDFs. This blog post is a good read to learn what the limitations of UDFs are. Change Database Compatible Level – Backward Compatibility For a long time SQL Server stayed on compatibility level 80, which is that of SQL Server 2000. However, as soon as SQL Server 2005 was introduced, compatibility became quite a major issue. Since that time Microsoft has been releasing new versions every 2-3 years, and changing the compatibility level is an ever-popular topic. In this blog post, we learn how to do it using T-SQL. We can also do the same using SSMS, and here is the blog post for that: Change Database Compatible Level – Backward Compatibility – Part 2 – Management Studio. Constraint on VARCHAR(MAX) Field To Limit It to a Certain Length A reader asked how to limit a VARCHAR(MAX) field to a maximum length of 12,500 characters. His question was valid, as their application allowed only 12,500 characters. First of all, this requirement is a bit strange, but if someone wants to do the same, they can, as described in this blog post (a small T-SQL sketch appears at the end of this entry). 2008 UNPIVOT Table Example Understanding UNPIVOT can be very complicated at times. In this blog post, I have attempted to explain the concept in very simple words. Create Default Constraint Over Table Column A simple, straight-to-script blog post – I still use this blog quite often for my own reference. UDF – Get the Day of the Week Function It took me 4 iterations to find this very simple function which can immediately get the day of the week in a single line. 2009 Find Hostname and Current Logged In User Name There are two tricks listed in this blog post with which users can find out the hostname and the currently logged-in user name immediately and very easily.
Interesting Observation of Logon Trigger On All Servers When I was doing a project, I made an interesting observation of a logon trigger executing multiple times. It was absolutely unexpected for me! As I was logging in only once, I naturally expected the entry only once. However, it fired multiple times on different threads – indeed an eccentric phenomenon at first sight! Difference Between Candidate Keys and Primary Key One needs to be very careful in selecting the Primary Key, as an incorrect selection can adversely impact the database architecture and future normalization. For a Candidate Key to qualify as a Primary Key, it should be non-NULL and unique in any domain. I have observed quite often that Primary Keys are seldom changed. I would like to have your feedback on not changing a Primary Key. Create Multiple Filegroups For a Single Database Why should one create multiple filegroups for a database, and what are the advantages of doing so? In this blog post, I explain this in detail. List All Objects Created on All Filegroups in Database In this blog post we discuss the essential question – “How can I find which object belongs to which filegroup? Is there any way to know this?” 2010 DATE and TIME in SQL Server 2008 When DATE is converted to DATETIME it adds the time of midnight. When TIME is converted to DATETIME it adds the date of 1900, and this is something one wants to consider if you are going to run scripts from SQL Server 2008 against an earlier version with CONVERT. Disabled Index and Update Statistics If you do not need a nonclustered index, I suggest you drop it, as keeping indexes disabled is an overhead on your system. This is because every time the statistics are updated for the system, the statistics for disabled indexes are also updated. Precision of SMALLDATETIME – A 1 Minute Precision The precision of the datatype SMALLDATETIME is 1 minute. It discards the seconds by rounding up or rounding down any seconds greater than zero. 2011 Getting Columns Headers without Result Data – SET FMTONLY ON SET FMTONLY ON returns only metadata to the client. It can be used to test the format of the response without actually running the query. When this setting is ON, the result set has only the headers of the results but no data (see the sketch at the end of this entry). Copy Database from Instance to Another Instance – Copy Paste in SQL Server SQL Server has a feature which copies a database from one instance to another, and it can be automated as well using SSIS. Make sure you have SQL Server Agent turned on, as this feature will create a job. Puzzle – SELECT * vs SELECT COUNT(*) If you have ever wondered why SELECT * gives an error when executed alone but SELECT COUNT(*) does not, then you should read this blog post. Creating All New Databases with Full Recovery Model This blog post is based on a very interesting story where the user wants to do something by default for every single new database created. The model database is a secret weapon which should be used very carefully and with proper evaluation. If used carefully, it can be very beneficial when we need a newly created database to behave in a certain fashion. 2012 In 2012 I ran two interesting series on the blog. If there is no fun in learning, the learning becomes a burden. For that reason, I decided to build a three-part quiz around SEQUENCE. The quiz was to identify the next value of the sequence. I encourage all of you to take part in this fun quiz.
Guess the Next Value – Puzzle 1 Guess the Next Value – Puzzle 2 Guess the Next Value – Puzzle 3 Can anyone remember their final day of schooling? This is probably a silly question because – of course you can! Many people mark this as the most exciting, happiest day of their life. It marks the end of testing, the end of following rules set by teachers, and the beginning of finally being able to earn money and work in your chosen field. Read the five-part series on the subject of developer training: Developer Training – Importance and Significance – Part 1 Developer Training – Employee Morals and Ethics – Part 2 Developer Training – Difficult Questions and Alternative Perspective – Part 3 Developer Training – Various Options for Developer Training – Part 4 Developer Training – A Conclusive Summary – Part 5 Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
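Two of the items above lend themselves to tiny examples. Here is a hedged T-SQL sketch, using a hypothetical dbo.Articles table, of the VARCHAR(MAX) length constraint and the TABLESAMPLE clause discussed in the 2007 section, plus the SET FMTONLY setting from the 2011 section:

    -- Hypothetical table used only for illustration
    CREATE TABLE dbo.Articles
    (
        ArticleID INT IDENTITY(1,1) PRIMARY KEY,
        Body      VARCHAR(MAX)
    );

    -- Limit a VARCHAR(MAX) column to 12,500 characters with a CHECK constraint
    ALTER TABLE dbo.Articles
    ADD CONSTRAINT CK_Articles_BodyLength CHECK (LEN(Body) <= 12500);

    -- Retrieve a random sampling of roughly 10 percent of the table's rows
    SELECT ArticleID, Body
    FROM dbo.Articles TABLESAMPLE (10 PERCENT);

    -- Return only column metadata, no rows; useful to test a result set's shape
    SET FMTONLY ON;
    SELECT * FROM dbo.Articles;
    SET FMTONLY OFF;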

    Read the article

  • Disqus ads are disqusting and here is how you turn them off

    - by Gopinath
After a couple of months, I spent some time yesterday reviewing my blog and coziie.com to see if everything was fine. Disqus, the best commenting system and an unusual suspect, was looking weird. The commenting sections of my sites were displaying links to third-party sites which I was not aware of. The content is annoying to me, and I believe my site users are also annoyed. I don’t remember configuring anything in Disqus to display ads or earn money by promoting others’ content. Why on earth would I want to show content from someone else’s website right inside the comments section and annoy readers? Here is a screen grab of a comment section that shows ads.   It turns out that Disqus automatically enabled a feature called “Discovery” for all publishers who upgraded the commenting system to the latest release. I remember upgrading the commenting system to the latest release a couple of months ago, but I don’t remember specifically allowing Disqus to spam my comment section!! I’m extremely unhappy with the way Disqus automatically enabled spamming of comment sections in the name of so-called new features that benefit bloggers. How to turn off Discovery or Ads in Disqus I turned them off as soon as I noticed them, and it’s very easy to do. Here are the steps to follow to turn off ads in comments: Log in to Disqus. Switch to the Settings tab. Click on the Discovery tab. Choose the option Just comments. Save the settings. Though it’s easy to turn off the ads, it would have been nice if Disqus had not enabled them by default. Hey guys at Disqus, you lost my trust, and from now onwards I’ll double-check before opting in to any new features.

    Read the article

  • elffile: ELF Specific File Identification Utility

    - by user9154181
Solaris 11 has a new standard user level command, /usr/bin/elffile. elffile is a variant of the file utility that is focused exclusively on linker related files: ELF objects, archives, and runtime linker configuration files. All other files are simply identified as "non-ELF". The primary advantage of elffile over the existing file utility is in the area of archives — elffile examines the archive members and can produce a summary of the contents, or per-member details. The impetus to add elffile to Solaris came from the effort to extend the format of Solaris archives so that they could grow beyond their previous 32-bit file limits. That work introduced a new archive symbol table format. Now that there was more than one possible format, I thought it would be useful if the file utility could identify which format a given archive is using, leading me to extend the file utility:

    % cc -c ~/hello.c
    % ar r foo.a hello.o
    % file foo.a
    foo.a: current ar archive, 32-bit symbol table
    % ar r -S foo.a hello.o
    % file foo.a
    foo.a: current ar archive, 64-bit symbol table

In turn, this caused me to think about all the things that I would like the file utility to be able to tell me about an archive. In particular, I'd like to be able to know what's inside without having to unpack it. The end result of that train of thought was elffile. Much of the discussion in this article is adapted from the PSARC case I filed for elffile in December 2010: PSARC 2010/432 elffile. Why file Is No Good For Archives And Yet Should Not Be Fixed The standard /usr/bin/file utility is not very useful when applied to archives. When identifying an archive, a user typically wants to know 2 things: Is this an archive? Presupposing that the archive contains objects, which is by far the most common use for archives, what platform are the objects for? Are they for sparc or x86? 32 or 64-bit? Some confusing combination from varying platforms? The file utility provides a quick answer to question (1), as it identifies all archives as "current ar archive". It does nothing to answer the more interesting question (2). Answering that question requires a multi-step process: Extract all archive members. Use the file utility on the extracted files, examine the output for each file in turn, and compare the results to generate a suitable summary description. Remove the extracted files. It should be easier and more efficient to answer such an obvious question. It would be reasonable to extend the file utility to examine archive contents in place and produce a description. However, there are several reasons why I decided not to do so: The correct design for this feature within the file utility would have file examine each archive member in turn, applying its full abilities to each member. This would be elegant, but also represents a rather dramatic redesign and re-implementation of file. Archives nearly always contain nothing but ELF objects for a single platform, so such generality in the file utility would be of little practical benefit. It is best to avoid adding new options to standard utilities for which other implementations of interest exist. In the case of the file utility, one concern is that we might add an option which later appears in the GNU version of file with a different and incompatible meaning. Indeed, there have been discussions about replacing the Solaris file with the GNU version in the past. This may or may not be desirable, and may or may not ever happen. Either way, I don't want to preclude it.
Examining archive members is an O(n) operation, and can be relatively slow with large archives. The file utility is supposed to be a very fast operation. I decided that extending file in this way is overkill, and that an investment in the file utility for better archive support would not be worth the cost. A solution that is more narrowly focused on ELF and other linker related files is really all that we need. The necessary code for doing this already exists within libelf. All that is missing is a small user-level wrapper to make that functionality available at the command line. In that vein, I considered adding an option for this to the elfdump utility. I examined elfdump carefully, and even wrote a prototype implementation. The added code is small and simple, but the conceptual fit with the rest of elfdump is poor. The result complicates elfdump syntax and documentation, definite signs that this functionality does not belong there. And so, I added this functionality as a new user level command. The elffile Command The syntax for this new command is

    elffile [-s basic | detail | summary] filename...

Please see the elffile(1) manpage for additional details. To demonstrate how output from elffile looks, I will use the following files:

    File         Description
    config       A runtime linker configuration file produced with crle
    dwarf.o      An ELF object
    /etc/passwd  A text file
    mixed.a      Archive containing a mixture of ELF and non-ELF members
    mixed_elf.a  Archive containing ELF objects for different machines
    not_elf.a    Archive containing no ELF objects
    same_elf.a   Archive containing a collection of ELF objects for the same machine. This is the most common type of archive.

The file utility identifies these files as follows:

    % file config dwarf.o /etc/passwd mixed.a mixed_elf.a not_elf.a same_elf.a
    config:      Runtime Linking Configuration 64-bit MSB SPARCV9
    dwarf.o:     ELF 64-bit LSB relocatable AMD64 Version 1
    /etc/passwd: ascii text
    mixed.a:     current ar archive, 32-bit symbol table
    mixed_elf.a: current ar archive, 32-bit symbol table
    not_elf.a:   current ar archive
    same_elf.a:  current ar archive, 32-bit symbol table

By default, elffile uses its "summary" output style. This output differs from the output from the file utility in 2 significant ways: Files that are not an ELF object, archive, or runtime linker configuration file are identified as "non-ELF", whereas the file utility attempts further identification for such files. When applied to an archive, the elffile output includes a description of the archive's contents, without requiring member extraction or other additional steps. Applying elffile to the above files:

    % elffile config dwarf.o /etc/passwd mixed.a mixed_elf.a not_elf.a same_elf.a
    config:      Runtime Linking Configuration 64-bit MSB SPARCV9
    dwarf.o:     ELF 64-bit LSB relocatable AMD64 Version 1
    /etc/passwd: non-ELF
    mixed.a:     current ar archive, 32-bit symbol table, mixed ELF and non-ELF content
    mixed_elf.a: current ar archive, 32-bit symbol table, mixed ELF content
    not_elf.a:   current ar archive, non-ELF content
    same_elf.a:  current ar archive, 32-bit symbol table, ELF 64-bit LSB relocatable AMD64 Version 1

The output for same_elf.a is of particular interest: The vast majority of archives contain only ELF objects for a single platform, and in this case, the default output from elffile answers both of the questions about archives posed at the beginning of this discussion, in a single efficient step. This makes elffile considerably more useful than file, within the realm of linker-related files.
elffile can produce output in two other styles, "basic" and "detail". The basic style produces output that is the same as that from 'file', for linker-related files. The detail style produces per-member identification of archive contents. This can be useful when the archive contents are not homogeneous ELF objects, and more information is desired than the summary output provides:

    % elffile -s detail mixed.a
    mixed.a: current ar archive, 32-bit symbol table
    mixed.a(dwarf.o): ELF 32-bit LSB relocatable 80386 Version 1
    mixed.a(main.c): non-ELF content
    mixed.a(main.o): ELF 64-bit LSB relocatable AMD64 Version 1 [SSE]

    Read the article

  • LASTDATE dates arguments and upcoming events #dax #tabular #powerpivot

    - by Marco Russo (SQLBI)
Recently I had to write a DAX formula containing a LASTDATE within the logical condition of a FILTER: I found that its behavior was not the one I expected, and I investigated further. In the end, I wrote up my findings in this article on SQLBI, which can be applied to any Time Intelligence function with a <dates> argument. The key point is that when you write

    LASTDATE( table[column] )

in reality you obtain something like

    LASTDATE( CALCULATETABLE( VALUES( table[column] ) ) )

which converts an existing row context into a filter context. Thus, if you have something like

    FILTER( table, table[column] = LASTDATE( table[column] ) )

the FILTER will return all the rows of the table, whereas you probably want to use

    FILTER( table, table[column] = LASTDATE( VALUES( table[column] ) ) )

so that the existing filter context before executing FILTER is used to get the result from VALUES( table[column] ), avoiding the automatic expansion that would include a CALCULATETABLE that would hide the existing filter context. If after reading the article you want more insights, read Jeffrey Wang's post here. These days I'm speaking at SQLRally Nordic 2012 in Copenhagen, and I will be in Cologne (Germany) next week for a SSAS Tabular Workshop, whereas Alberto will teach the same workshop in Amsterdam one week later. Both workshops still have seats available, and the Amsterdam one is still in early bird discount until October 3rd! Then, in November I expect to meet many blog readers at PASS Summit 2012 in Seattle, and I hope to find the time to write other articles on interesting things about Tabular and PowerPivot. Stay tuned!

    Read the article

  • Friday Tips #5

    - by Chris Kawalek
Happy Friday, everyone! Following up on yesterday's post about Oracle VM VirtualBox being selected as the best virtualization solution for 2012 by the readers of Linux Journal, our Friday tip is about that very cool piece of software: Question: How do I move a VM from one machine to another with Oracle VM VirtualBox? Answer by Andy Hall, Product Management Director, Oracle Desktop Virtualization: There are a number of ways to do this, with pros and cons for each. The most reliable approach is to export and import virtual machines: From the VirtualBox manager, simply use the File > Export Appliance menu and follow the wizard's lead. Move the resulting file(s) to the destination machine, and import the VM into VirtualBox. This method will take longer and use more disk space than other methods because the configuration files and virtual hard drives are converted into an industry standard format (.ova or .ovf). But an advantage of this approach is that the creator of the virtual appliance can add a license which the importer will see and click to accept at import time. This is especially useful for ISVs looking to deliver pre-built, configured and tested appliances to their customers and prospects. Thanks Andy! Remember, if you have a question for us, use the Twitter hashtag #AskOracleVirtualization. We'll see you next week! -Chris
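For readers who prefer the command line, the same export/import round trip can also be scripted with the VBoxManage tool that ships with VirtualBox; a quick sketch, assuming a VM named "MyVM":

    # On the source machine: export the VM to an OVA appliance file
    VBoxManage export "MyVM" -o MyVM.ova

    # Copy MyVM.ova to the destination machine, then import it there
    VBoxManage import MyVM.ova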

    Read the article

  • ArchBeat Link-o-Rama for December 14, 2012

    - by Bob Rhubart
JMS Step 6 - How to Set Up an AQ JMS (Advanced Queueing JMS) for SOA Purposes | John-Brown Evans John-Brown Evans' post continues the series of JMS articles that demonstrate how to use JMS queues in a SOA context. "This example leads you through the creation of an Oracle database Advanced Queue and the related WebLogic server objects in order to use AQ JMS in connection with a SOA composite," John explains. And if you missed the first 5 steps, don't worry – the post includes links. Cloud Deployment Models | B. R. Clouse Looking out for the cloud newbies... "As the cloud paradigm grows in depth and breadth, more readers are approaching the topic for the first time, or from a new perspective," says B. R. Clouse. "This blog is a basic review of cloud deployment models, to help orient newcomers and neophytes." Understanding the JSF Lifecycle and ADF Optimized Lifecycle | Steven Davelaar Would you call that a surprise ending? Oracle WebCenter & ADF Architecture Team (A-Team) member Steven Davelaar learned a lot more than he expected while creating a UKOUG presentation entitled "What you need to know about JSF to be successful with ADF." Using Oracle Enterprise Manager Cloud Control 12c with Filer Snapshotting | Porus Homi Havewala This concise technical article includes a script for database backup using snapshots and cataloging in RMAN. Thought for the Day "A program which perfectly meets a lousy specification is a lousy program." — Cem Kaner Source: SoftwareQuotes.com

    Read the article

  • Cobol: science and fiction

    - by user847
There are a few threads about the relevance of the Cobol programming language on this forum, e.g. this thread links to a collection of them. What I am interested in here is a frequently repeated claim based on a study by Gartner from 1997: that there were around 200 billion lines of code in active use at that time! I would like to ask some questions to verify or falsify a couple of related points. My goal is to understand if this statement has any truth to it or if it is totally unrealistic. I apologize in advance for being a little verbose in presenting my line of thought and my own opinion on the things I am not sure about, but I think it might help to put things in context and thus highlight any wrong assumptions and conclusions I have made. Sometimes, the "200 billion lines" number is accompanied by the added claim that this corresponded to 80% of all programming code in any language in active use. Other times, the 80% merely refers to so-called "business code" (or some other vague phrase hinting that the reader is not to count mainstream software, embedded systems or anything else where Cobol is practically non-existent). In the following I assume that the code does not include double-counting of multiple installations of the same software (since that is cheating!). In particular in the time prior to the y2k problem, it has been noted that a lot of Cobol code was already 20 to 30 years old. That would mean it was written in the late 60ies and 70ies. At that time, the market leader was IBM with the IBM/370 mainframe. IBM has put up a historical announcement on its website quoting prices and availability. According to the sheet, prices are about one million dollars for machines with up to half a megabyte of memory. Question 1: How many mainframes have actually been sold? I have not found any numbers for those times; the latest numbers are for the year 2000, again by Gartner. :^( I would guess that the actual number is in the hundreds or the low thousands; if the market size was 50 billion in 2000 and the market has grown exponentially like any other technology, it might have been merely a few billion back in 1970. Since the IBM/370 was sold for twenty years, twenty times a few thousand machines results in a couple of tens of thousands of machines (and that is pretty optimistic)! Question 2: How large were the programs in lines of code? I don't know how many bytes of machine code result from one line of source code on that architecture. But since the IBM/370 was a 32-bit machine, any address access must have used 4 bytes plus the instruction (2, maybe 3 bytes for that?). If you count in the operating system and data for the program, how many lines of code would have fit into the main memory of half a megabyte? Question 3: Was there no standard software? Did every single machine sold run a unique hand-coded system without any standard software? Seriously, even if every machine was programmed from scratch without any reuse of legacy code (wait ... didn't that violate one of the claims we started from to begin with???) we might have O(50,000 l.o.c./machine) * O(20,000 machines) = O(1,000,000,000 l.o.c.). That is still far, far, far away from 200 billion! Am I missing something obvious here? Question 4: How many programmers did we need to write 200 billion lines of code? I am really not sure about this one, but if we take an average of 10 l.o.c. per day, we would need 55 million man-years to achieve this!
In the time-frame of 20 to 30 years this would mean that there must have existed two to three million programmers constantly writing, testing, debugging and documenting code. That would be about as many programmers as we have in China today, wouldn't it? Question 5: What about the competition? So far, I have come up with two things here: 1) IBM had their own programming language, PL/I. Above I have assumed that the majority of code was written exclusively in Cobol. However, all other things being equal, I wonder if IBM marketing had really pushed their own development off the market in favor of Cobol on their machines. Was there really no relevant code base of PL/I? 2) Sometimes (also on this board in the thread quoted above) I come across the claim that the "200 billion lines of code" are simply invisible to anybody outside of "governments, banks ..." (and whatnot). Actually, the DoD had funded their own language in order to increase cost effectiveness and reduce the proliferation of programming languages. This led to their use of Ada. Would they really worry about having so many different programming languages if they had predominantly used Cobol? If there was any language running on "government and military" systems outside the perception of mainstream computing, wouldn't that language be Ada? I hope someone can point out any flaws in my assumptions and/or conclusions and shed some light on whether the above claim has any truth to it or not.
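PS: For anyone who wants to check my back-of-envelope arithmetic, the two key calculations above work out as follows (using my own assumptions of 10 l.o.c. per programmer-day and 365 days per year):

    200,000,000,000 l.o.c. / (10 l.o.c. per day * 365 days per year) ≈ 54.8 million man-years
    O(50,000 l.o.c./machine) * O(20,000 machines) = O(1,000,000,000 l.o.c.), i.e. one billion, a factor of 200 short of the claim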

    Read the article

  • Improved Customer Experience, but at what Cost?

    - by Tony Berk
We can all probably agree that improving your customers' experience is a good thing. But a key question many people are asking is: will it help your organization and, in particular, what are the financial benefits? That's a good question, especially when companies ARE experiencing phenomenal return on investment (ROI). Of course, there are many factors that impact ROI or other measures of success, but we'd like to share some success stories as examples of customer experience in action and delivering positive results. If you would like to learn more about the economics of customer experience, see Brian Curran's presentation at the Oracle Customer Experience Summit last month. In this series of blog posts, we'll share actual customer stories. Today's example is Dell, which uses Oracle Real-Time Decisions (RTD) and Siebel CRM as part of its customer experience portfolio to better understand its customers' needs and wants and provide consistent interactions. Regular readers of this blog are probably familiar with Siebel, but RTD may be new to many of you. RTD is a complete decision management solution that delivers real-time decisions and recommendations and automatically renders decisions within a business process to create tailored messaging for every customer interaction. What does that mean? In the video below, Dell describes how customer experience is important not just for one interaction channel, but across all "vehicles." RTD is helping Dell understand customer behavior and communicate with the customer in a more relevant manner, across all communication or interaction channels, including sales and service call centers, email marketing and online. Dell continues to expand its use of RTD because the benefits are showing up in sales, service and marketing results, including a 19% increase in close rates, faster issue resolution and a 40% improvement in revenue per click in email marketing. Click here to learn more about Oracle Customer Experience and stay tuned for more customer spotlights.

    Read the article

  • How to Get a Smartphone-Style Word Suggestion on Windows

    - by Zainul Franciscus
Have you ever wished that you could type faster and better in Windows? Then you’re in luck, because today we’ll show you how to get smartphone-style word suggestions in Windows. To accomplish that, you need to install AI Type, software that gives word suggestions as you write in Windows. AI Type not only satisfies our wish for smartphone-style word suggestions in Windows, it also improves our writing by suggesting words according to their context. It will also try to match words according to the probability with which other users have used them. Installing AI Type is a breeze: just download the installer from the AI Type website, run the executable, fill in a registration form, and you’re all set to use AI Type for your daily writing. Once you’re done with the installation, AI Type appears in your system tray.

    Read the article

  • Optimize Many-to-Many with SUMMARIZE and Other Techniques

    - by Marco Russo (SQLBI)
We are still in the early days of DAX, and even though I have been using it for two years, there is still a lot to learn about it. One of the topics that has historically interested me (and many of the readers here, probably) is many-to-many relationships between dimensions in a dimensional data model. When Alberto and I wrote The Many to Many Revolution 2.0, we discovered the SUMMARIZE-based pattern very late in the whitepaper writing. It is very important for performance optimization and it should always be used. In the last month, Gerhard Brueckl also presented an approach based on cross table filtering behavior that simplifies the syntax involved, even if it’s harder to explain how it works internally. I published a short article titled Optimize Many-to-Many Calculation in DAX with SUMMARIZE and Cross Table Filtering on the SQLBI website just to provide a quick reference to the three patterns available. A further study is still required to compare performance between the SUMMARIZE and Cross Table Filtering patterns. Up to now, I haven’t observed big differences between them, even if their execution plans might not be identical, and this suggests that depending on other conditions you might favor one over the other.
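For reference, the SUMMARIZE-based pattern mentioned above boils down to something like the following sketch; the [Sales Amount] measure, Bridge table and TargetDim names are hypothetical placeholders, not the whitepaper's exact code:

    -- Many-to-many through a bridge table: filter the target dimension
    -- by the keys that are still visible in the bridge
    M2M Amount :=
    CALCULATE (
        [Sales Amount],
        SUMMARIZE ( Bridge, TargetDim[TargetKey] )
    )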

    Read the article

  • As a developer, how do I learn sales? [closed]

    - by Dan Abramov
I quit the company I was working for to pursue an opportunity as a startup, and I believe in our product. I'm sure it's going to be great if we attract some customers first to keep going. (I don't want funding.) Our product is targeted at private schools and courses, and helps organize the mess other LMSs introduce. The problem is, our team is basically just me, and I have very little idea about sales and marketing. I can do reasonably good copywriting, but I'm sure I can do better, and being nervous or too techy in a real-world conversation with a client doesn't help. I want to get better, in fact a lot better, at negotiating with clients and pitching my product. I did look for some “sales articles” on the web, and a lot of what I found is plain bullshit on SEO-engineered websites promoting books or $5000 courses. What I need instead is a developer's perspective on how to sell a product you think is great. What are typical programmers' mistakes and misconceptions about sales, and how do you avoid them? How do you evolve into a reasonably great salesman? I can't believe it's all in the mindset and unlearnable. Your own experience, combined with great articles available on the web, is most welcome. To Future Readers The question got closed because it is not a good fit for this site. I found some helpful tips in a similar question asked on a sister StackExchange site about startups: I'm a terrible salesperson. What can I do about it?

    Read the article

  • USB device not accepting address

    - by Mike Williamson
I have a series of machines that I am building for work that have USB card readers. When I boot them I get a long series of messages:

    ...
    [ 2347.768419] hub 1-6:1.0: unable to enumerate USB device on port 6
    [ 2347.968178] usb 1-6.6: new full-speed USB device number 10 using ehci_hcd
    [ 2352.552020] usb 1-6.6: device not accepting address 10, error -32
    [ 2352.568421] hub 1-6:1.0: unable to enumerate USB device on port 6
    [ 2352.768179] usb 1-6.6: new full-speed USB device number 12 using ehci_hcd
    [ 2357.352033] usb 1-6.6: device not accepting address 12, error -32
    ...

On some older machines this only takes a few attempts before the card reader finally accepts an address, while on newer machines it can take many minutes. Changing hardware is not an option, and plugging the USB card reader into a different port is only an option for the older machines. This was a problem under 11.04, and I am now running the 12.04 beta and it's still happening. Is there something I can do in the software (a udev rule perhaps?) that would fix this? Any advice appreciated. I'm happy to provide more details if you need them.
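To make the udev idea above concrete, here is the kind of (hypothetical) rule I had in mind; the vendor and product IDs are placeholders, and I have no idea whether forcing a re-enumeration like this actually cures the error -32:

    # /etc/udev/rules.d/99-cardreader.rules (hypothetical sketch)
    # When the reader appears, toggle its "authorized" attribute to force
    # the kernel to re-enumerate it cleanly
    ACTION=="add", SUBSYSTEM=="usb", ATTR{idVendor}=="abcd", ATTR{idProduct}=="1234", \
        RUN+="/bin/sh -c 'echo 0 > /sys$env{DEVPATH}/authorized; echo 1 > /sys$env{DEVPATH}/authorized'"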

    Read the article

  • SQL Server DBA - How to get a good one!

    - by ETFairfax
I'm a lone developer. I am currently developing an application which is seeing me get way, way, way out of my depth when it comes to SQL DBA'ing, and I have come to realise that I should hire a DBA to help me (which has full support from the company). Problem is - who? This SO thread sees someone hire a DBA only to realise that they will probably cause more harm than good! Also, I have just had a bad experience with an ASP.NET/C# contractor that has let us down. So, can anyone out there on SO either... a) Offer their services. b) Forward me on to someone that could help. c) Give some tips on vetting a DBA. I know this isn't a recruitment site, so maybe some good answers for c) would be a benefit for other readers!! BTW: The database is SQL Server 2008. I'm running into performance issues (mainly timeouts) which I think would be sorted out by some proper indexing. I would also need the DBA to provide some sort of maintenance plan, and to review how our database will deal with what we intend to throw at it in the future!
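PS: Since the indexing suspicion came up, one starting point I have been poking at myself: SQL Server 2008 records missing-index suggestions in its DMVs, and a candidate DBA should be comfortable explaining output like this. The weighting below is my own rough sketch, not a tuning recommendation:

    -- Rough ranking of missing-index suggestions recorded by the optimizer;
    -- treat these as hints to investigate, not indexes to create blindly
    SELECT TOP (10)
           d.statement AS table_name,
           d.equality_columns,
           d.inequality_columns,
           d.included_columns,
           s.user_seeks,
           s.avg_user_impact
    FROM sys.dm_db_missing_index_details AS d
    JOIN sys.dm_db_missing_index_groups AS g
         ON g.index_handle = d.index_handle
    JOIN sys.dm_db_missing_index_group_stats AS s
         ON s.group_handle = g.index_group_handle
    ORDER BY s.user_seeks * s.avg_user_impact DESC;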

    Read the article

  • Announcing SharePoint Saturday Columbus 2010

    - by Brian Jackett
It is with great pleasure that today I can announce the very first SharePoint Saturday Columbus.  SharePoint Saturday Columbus 2010 will be happening on August 14th at The Conference Center at OCLC in Dublin, OH.  As many readers of my blog may be aware, I’ve attended or spoken at over half a dozen SharePoint Saturdays in the past 8 months alone, but this will be my first time actually organizing one.  A group of very dedicated individuals and I have been hard at work the past few months getting the ball rolling, and we’re happy to see it taking shape.   Pertinent Resources Website – find announcements and up-to-date details at www.SharePointSaturday.org/Columbus Twitter – follow us at @SPSColumbus Email – email us at [email protected] with any questions, comments, or concerns   What can you do?     There are three main areas where we are looking for your help at this time. Spread the word – simply put, start spreading the word to friends, coworkers, user groups, clients, and anyone else you think may be interested in SharePoint Saturday Columbus 2010.  We’ll be opening registration in early July, so look for an announcement with details closer to that timeframe. Sponsorship – if your company or a company you know is interested in sponsoring SharePoint Saturday Columbus 2010, we have many opportunity levels available.  Email [email protected] for more information and we’ll send you a sponsorship packet. Speakers – if you or someone you know is interested in presenting at SharePoint Saturday Columbus 2010, please fill out the speaker submission form found here and email it to [email protected] by July 10th. I hope you can join us for this great event!         -Frog Out

    Read the article

  • WCF - Automatically create ServiceHost for multiple services

    - by Rajesh Pillai
WCF - Automatically create ServiceHost for multiple services Welcome back readers!  This blog post is about a small tip that may make working with the WCF ServiceHost a bit easier, if you have lots of services and you need to quickly host them for testing. Recently I encountered a situation where we had to create multiple service hosts quickly for testing.  Here is the code snippet, which is pretty self-explanatory.  You can put this code in your service host, which in this case is a console application.

    using System;
    using System.Collections.Generic;
    using System.Configuration;
    using System.ServiceModel;
    using System.ServiceModel.Configuration;

    class Program
    {
        static void Main(string[] args)
        {
            // Stores all hosts
            List<ServiceHost> hosts = new List<ServiceHost>();
            try
            {
                // Get the services element from the serviceModel element in the config file
                var section = ConfigurationManager.GetSection("system.serviceModel/services") as ServicesSection;
                if (section != null)
                {
                    foreach (ServiceElement element in section.Services)
                    {
                        // NOTE: If the assembly is in another namespace, provide a fully
                        // qualified name here in the form <typename, assembly>,
                        // e.g. Business.Services.CustomerService, Business.Services
                        var serviceType = Type.GetType(element.Name); // Get the type from the configured name
                        var host = new ServiceHost(serviceType);
                        hosts.Add(host); // Add to the host collection
                        host.Open();     // Open the host
                    }
                }
                Console.ReadLine();
            }
            catch (Exception e)
            {
                Console.WriteLine(e.Message);
                Console.ReadLine();
            }
            finally
            {
                foreach (ServiceHost host in hosts)
                {
                    if (host.State == CommunicationState.Opened)
                    {
                        host.Close();
                    }
                    else
                    {
                        host.Abort();
                    }
                }
            }
        }
    }

I hope you find this useful.  You can make this a Windows service if required.
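For reference, a hypothetical configuration fragment of the kind the loop above iterates over; the service, binding, and contract names here are invented for illustration, and whether the assembly-qualified name form also satisfies WCF's own service-element lookup is worth verifying in your setup:

    <system.serviceModel>
      <services>
        <!-- hypothetical entry; per the NOTE in the code, the name must be
             resolvable by Type.GetType, so it may need to be assembly-qualified -->
        <service name="Business.Services.CustomerService, Business.Services">
          <endpoint address="net.tcp://localhost:9000/CustomerService"
                    binding="netTcpBinding"
                    contract="Business.Services.ICustomerService" />
        </service>
      </services>
    </system.serviceModel>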

    Read the article

  • Wanted: Java Code Brainteasers

    - by Tori Wieldt
The Jan/Feb Java Magazine will go out next week. It's full of great Java stories, interviews and technical articles. It also includes a Fix This section; the idea of this section is to challenge a Java developer's coding skills. It's a multiple-choice brainteaser that includes code and possible answers. The answer is provided in the next issue. For an example, check out Fix This in the Java Magazine premier issue. We are looking for community submissions to Fix This. Do you have a good code brainteaser? Remember, you want to tease your fellow devs, not stump them completely! If you have a submission, here's what you do: 1. State the problem, including a short summary of the tool/technique, in about 75 words. 2. Send us the code snippet, with a short set-up so readers know what they are looking at (such as, "Consider the following piece of code to have database access within a Servlet."). 3. Provide four multiple-choice answers to the question, "What's the fix?" 4. Give us the answer, along with a brief explanation of why. 5. Tell us who you are (name, occupation, etc.). 6. Email the above to JAVAMAG_US at ORACLE.COM with "Fix This Submission" in the title. Deadlines for Fix This for the next two issues of Java Magazine are Dec. 12th and Jan. 15th. Bring It!

    Read the article

  • What You Said: How You Organize a Messy Music Collection

    - by Jason Fitzpatrick
Earlier this week we asked you to share your tips, tricks, and tools for managing a messy music collection. Now we’re back to share some great reader tips; read on to find ways to tame your mountain of music. Several readers were, despite having tried various techniques over the years, fans of doing things largely the manual way. Aurora900 explains: I spent a weekend sorting everything myself once. Took a while, but now I have folders sorted by artist, and within the artist folders are folders for their albums. With my collection at about 260 GB, it can be a daunting task, but it’s well worth it in the end. I don’t have the tagging issue as I make sure anything I have is properly tagged to begin with… If I’m ripping a CD I use Easy CD-DA Extractor, which automatically searches an internet database for the tags. If I’m downloading something from a reputable source, it’s going to be properly tagged already. Bilbo Baggins would love to automate, but eclectic music tastes make it hard.

    Read the article

  • How to Assure an Effective Data Model

As a general rule, in my opinion, the effectiveness of a data model is directly related to the accuracy and complexity of a project’s requirements. For example, there is no need to work on very detailed data models when the details surrounding a specific data model have not been defined or even clarified. Developing data models when the clarity of project requirements is limited tends to introduce design issues, because the proper details needed to create an effective data model are not even known. One way to avoid this issue is to create data models that correspond to the complexity of the existing project requirements, so that when requirements are updated, new data models can be created based on any new discoveries regarding requirements at a fine-grained level. This allows data models composed of general entities to be created initially, when a project’s requirements are very vague, and then the entities are refined as new and more substantial requirements are defined or redefined. This promotes communication amongst all stakeholders within a project as they go through the process of defining and finalizing project requirements. In addition, here are some general tips that can be applied to projects in regards to data modeling. Initially model all data generally and slowly refactor the data model as new requirements and business constraints are applied to the project. Ensure that data modelers have the proper tools and training they need to design a data model accurately. Create a common location for all project documents so that everyone will be able to review a project’s data models along with any other project documentation. All data models should follow a clear naming schema that tells readers the intended purpose of the data and how it is going to be applied within the project.

    Read the article

  • rfid programming

    - by MaKo
hi guys, I got a gift from a friend: 2 RFID readers and some cards (from a Chinese company called Daily RFID). They kind of work, because they come with some demo software written in Delphi that reads the ID of the card (Mifare compatible, ISO 14443A). But the problem is that if I try to use the demo to write to them, it doesn't seem to work. There is another demo written in C# (compiled and executable from /bin/debug, the DL600DemoCSharp.exe); the software opens, but when I click on connect, I get this error: Unhandled exception.. unable to load DLL 'BasicB.DLL'. So I put the DLL in windows/system32, but when I try regsvr32 BasicB.dll I get: error: the module "BasicB.dll" was loaded but the entry-point DllRegisterServer was not found. Make sure that "BasicB.dll" is a valid DLL or OCX file and then try again. I have written to the company but got no response. I program in Objective-C, so I kind of understand C#, but how do I make these cards work? - shall I continue with the Delphi demo and try to write to them with it - or with C#? Either way I would have to write the code to read/write to them, or is there any software to work with these modules?? thanks a lot!
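PS: From what I can tell, regsvr32 only works for COM DLLs that export DllRegisterServer, so the error above probably just means BasicB.DLL is a plain Win32 DLL that the C# demo loads through P/Invoke. If that is right, I would need declarations like the following; the function name and signature here are pure guesses on my part, and the real exports would have to come from Daily RFID's documentation:

    // Hypothetical P/Invoke declaration for the vendor DLL; place BasicB.dll
    // next to the .exe (or in the DLL search path) instead of registering it
    using System.Runtime.InteropServices;

    class ReaderNative
    {
        [DllImport("BasicB.dll")]
        public static extern int OpenComm(int port, int baud); // name/signature are guesses
    }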

    Read the article

• Interesting Blog Stats – What Sells

    - by Tim Murphy
Just out of curiosity, I decided to find out what the most frequently viewed posts on my blog were.  I knew what number one would be just from checking daily stats from time to time.  The main theme I found in the data is that either pain or humor can really bring people to your posts.  My most viewed post is on turning off Toshiba Flashcards, at over 54K views (I think Toshiba should take notice of this massive fail).  The second highest is on interesting blog titles.  This was nothing more than a post I had put up on a whim of humorous blog titles I had run across.  This post earned over 26K views.  Going down from there, the theme stays the same: posts that are either humorous or that answer a problem people are having are the ones most likely to get attention.  Remember that blogging can be a great service to your readers.  Keep it interesting and they will come. del.icio.us Tags: Blogging,Blog Topics,Blog Stats

    Read the article

  • SQL Source Control Contest

    - by Ajarn Mark Caldwell
If you’re a regular reader of this blog, you know that I have written several posts about how important I think it is to protect your source code, to version it, and in particular, all the aspects I like about Red Gate’s SQL Source Control product.  But for a moment, let’s take a break from my writing: I want to hear your stories.  What nightmare situation are you in, or can you imagine, where source control for your database would save the world?  Or maybe your life is not so dramatic, but you do see a challenge that would go much more smoothly if you just had a good tool like SQL Source Control.  What’s your pain?  You have read my writings; now tell me your story, and be in the running for a free copy of SQL Source Control from Red Gate. Yes, that’s right.  Although I am just a fan of Red Gate, they have authorized me to give out a handful of licenses to blog readers who are willing to share their story by posting a comment to this blog entry.  Simply add your comment below (be sure to include a valid email address in the box that asks for that) to be entered.  The contest starts immediately, and over the next few days the best stories will win.

    Read the article

  • FOUR questions to ask if you are implementing DATABASE-AS-A-SERVICE

    - by Sudip Datta
During my ongoing tenure at Oracle, I have met all types of DBAs. Happy DBAs, unhappy DBAs, proud DBAs, risk-loving DBAs, cautious DBAs. These days, as Database-as-a-Service (DBaaS) becomes more mainstream, I find some complacent DBAs who are basking in their achievement of having implemented DBaaS. Some others, however, are not that happy. They grudgingly complain that they did not have much of a say in the implementation; they simply had to follow what their cloud architects (mostly infrastructure admins) offered them. In most cases it would be a database wrapped inside a VM that would be labeled as “Database as a Service”. In other cases, it would be existing brute-force automation simply exposed in a portal. As much as I think that there is more to DBaaS than those approaches, and as often as I get tempted to propose Enterprise Manager 12c, I try to be objective. Neither do I want to dampen the spirit of the happy ones, nor do I want to stoke the pain of the unhappy ones. As I mentioned in my previous post, I don’t deny that vanilla automation could be useful. I like virtualization too for what it has helped us accomplish in terms of resource management, but we need to scrutinize its merit on a case-by-case basis and apply it meaningfully. For DBAs who either claim to have implemented DBaaS or are planning to do so, I simply want to provide four key questions to ponder: 1. Does it make life easier for your end users? Database-as-a-Service can have several types of end users: junior DBAs, QA engineers, developers, each having their own skill set. The objective of DBaaS is to make their life simple, so that they can focus on their core responsibilities without having to worry about additional stuff. For example, if you are a developer using Oracle Application Express (APEX), you want to deal with schemas, objects and PL/SQL code and not with datafiles or listener configuration. If you are a QA engineer needing database copies for functional testing, you do not want to deal with underlying operating system patching and compliance issues. The question to ask, therefore, is whether DBaaS makes life easier for those users. It is often convenient to give them VM shells to deal with, a la Amazon EC2 IaaS, but is that what they really want? Is it a productive use of a developer's time if he needs to apply RPM errata to his Linux operating system? Asking him to keep the underlying operating system current is like making a guest responsible for a restaurant's decor. 2. Does it make life easier for your administrators? Cloud, in general, is supposed to free administrators from attending to mundane tasks like provisioning services for every single end-user request. It is supposed to enable a readily consumable platform and enforce standardization in the process. For example, if a Service Catalog exposes DBaaS of specific database versions and configurations, it, by its very nature, enforces certain discipline and standardization within the IT environment. What if, instead of specific database configurations, the cloud allowed each end user to create databases of their liking, resulting in hundreds of versions and patch levels and thousands of individual databases? Therefore the right question to ask is whether the unwanted consequence of DBaaS is OS and database sprawl. And if so, who is responsible for tracking them, backing them up, administering them? Studies have shown that these administrative overheads increase exponentially with new targets, and it could result in a management nightmare.
That leads us to our next question. 3. Does it satisfy your Security Officers and Compliance Auditors? Compliance auditors need to know who did what and when. They also want the cloud platform to be secure, so that end users have little freedom to tamper with it. Dealing with VM sprawl is not the easiest of challenges, let alone dealing with VMs as they keep getting reconfigured and moved around. This leads to the proverbial needle-in-the-haystack problem, and all it takes is one needle to cause a serious compliance issue in the enterprise. The bottom line is that flexibility and agility should not come at the expense of compliance, and it is very important to get the balance right. Can we have security and isolation without creating compliance challenges? Instead of a ‘one size fits all’ approach, i.e. OS-level isolation, can we think smartly about database isolation or schema-based isolation? This is where appropriate resource modeling needs to be applied. The usual systems management vendors out there, with their heterogeneous common-denominator approach, have compromised on these semantics. If you follow Enterprise Manager’s DBaaS solution, you will see that we have considered different models, not precluding virtualization, for different customer use cases. The judgment to use virtual assemblies versus databases on physical RAC versus Schema-as-a-Service in a single database should be governed by the needs of the applications, and not by putting compliance considerations on the back burner. 4. Does it satisfy your CIO? Finally, does it satisfy your higher-ups? As the sponsor of a cloud initiative, the CIO is expected to lead an IT transformation project, not merely run-of-the-mill IT operations. Simply virtualizing server resources and delivering them through self-service is a good start, but hardly transformational. CIOs may appreciate the instant benefit from server consolidation, but studies have revealed that the ROI from consolidation flattens out at 20-25%. The question would be: what next? As we go higher up in the stack, the need to virtualize, segregate and optimize shifts to those layers that are more palpable to the business users. As Sushil Kumar noted in his blog post, "the most important thing to note here is the enterprise private cloud is not just an IT project, rather it is a business initiative to create an IT setup that is more aligned with the needs of today's dynamic and highly competitive business environment." Business users could not care less about infrastructure consolidation or virtualization; they care about business agility and service level assurance. Last but not least, a lot of CIOs get miffed if we ask them to throw away their existing hardware investments to implement DBaaS. In Oracle, we always emphasize freedom in choosing a platform; hence Enterprise Manager’s DBaaS solution is platform neutral. It can work on any operating system (that the agent is certified on), on Oracle’s hardware as well as third-party hardware. As a parting note, I urge you to remember these 4 questions. Remember that your satisfaction as an implementer lies in the satisfaction of others.

    Read the article

  • Blogger Blog Takes Ages to Load after Custom Domain Redirection

    - by abhisek
    I recently bought a custom domain (technabled.com) for a Blogger blog I have had for some time now. I followed the instructions in Blogger's documentation and added A records and a CNAME record with my DNS provider. But now some strange problems are cropping up. If I connect to my broadband network and then ping technabled.com, it times out. Then, if I visit the web page, which takes almost one and a half minutes to load, and then ping technabled.com again, I get the expected result. This is not just me; I asked some of the regular readers, who reported the same issue. As a result of this, I am losing a lot of visits. What is stranger is that subsequent visits to the blog are faster. I have checked with a few online services to test the performance. WebPageTest seems to say the same thing: http://www.webpagetest.org/result/110117_1N_7PE/ (please see the First View / Repeat View times). Also, the PageSpeed score is not that bad, so I am ruling out other possibilities. I am at a loss as to what I should do to find a solution. Help is much appreciated. :)
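
    A quick way to narrow this down is to check whether the delay sits in DNS resolution rather than in the web server itself. Below is a minimal Java sketch (not from the question; the hostname is the one being discussed) that times a cold lookup against a warm one. Note that the JVM caches successful lookups, so the second call should be near-instant:

        import java.net.InetAddress;

        public class DnsTiming {
            public static void main(String[] args) throws Exception {
                String host = "technabled.com"; // hostname from the question
                long t0 = System.nanoTime();
                InetAddress addr = InetAddress.getByName(host); // cold lookup: walks the resolver chain
                long t1 = System.nanoTime();
                InetAddress.getByName(host); // warm lookup: served from the JVM/OS cache
                long t2 = System.nanoTime();
                System.out.printf("cold: %d ms, warm: %d ms, resolved to %s%n",
                        (t1 - t0) / 1000000, (t2 - t1) / 1000000, addr.getHostAddress());
            }
        }

    If the cold lookup takes many seconds while the warm one is instant, the problem is almost certainly in the DNS setup (for example, a slow or misconfigured authoritative nameserver for the new domain) rather than in the blog itself, which would also explain why repeat visits are fast.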

    Read the article

  • Java EE 7 Survey Results!

    - by reza_rahman
    On November 8th, the Java EE EG posted a survey to gather broad community feedback on a number of critical open issues. For reference, you can find the original survey here. We kept the survey open for about three weeks, until November 30th. To our delight, over 1100 developers took time out of their busy lives to let their voices be heard! The results of the survey were sent to the EG on December 12th. The subsequent EG discussion is available here, and the exact summary sent to the EG is available here. We would like to take this opportunity to thank each and every one of the individuals who took the survey. It is very much appreciated, encouraging and worth its weight in gold. In particular, I tried to capture just some of the high-quality, intelligent, thoughtful and professional comments in the summary to the EG. I highly encourage you to continue to stay involved, perhaps through the Adopt-a-JSR program. We would also like to sincerely thank java.net, JavaLobby, TSS and InfoQ for helping spread the word about the survey. Below is a brief summary of the results.

    APIs to Add to Java EE 7 Full/Web Profile: The first question asked which of the four new candidate APIs (WebSocket, JSON-P, JBatch and JCache) should be added to the Java EE 7 Full and Web Profiles respectively. The results showed significant support for adding all four APIs to the full profile. Support is relatively the weakest for Batch 1.0, but still good. A lot of folks saw WebSocket 1.0 as a critical technology, with comments such as this one: "A modern web application needs Web Sockets as first class citizens". While it is clearly seen as being important, a number of commenters expressed dissatisfaction with the lack of a higher-level JSON data binding API, as illustrated by this comment: "How come we don't have a Data Binding API for JSON". JCache was also seen as being very important, as expressed with comments like: "JCache should really be that foundational technology on which other specs have no fear to depend on". The results for the Web Profile are not surprising: while there is strong support for adding WebSocket 1.0 and JSON-P 1.0 to the Web Profile, support for adding JCache 1.0 and Batch 1.0 is relatively weak. There was actually significant opposition to adding Batch 1.0 (with 51.8% casting a 'No' vote).
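
    To make the data binding comment concrete, here is a minimal sketch (not from the survey) of the lower-level, object-model style of processing that JSON-P 1.0 offers via the javax.json API; automatic mapping to user-defined classes is exactly what it does not do:

        import java.io.StringReader;
        import javax.json.Json;
        import javax.json.JsonObject;
        import javax.json.JsonReader;

        public class JsonpSketch {
            public static void main(String[] args) {
                String payload = "{\"name\":\"Java EE 7\",\"apis\":4}";
                try (JsonReader reader = Json.createReader(new StringReader(payload))) {
                    // JSON-P exposes a generic object model; binding this data
                    // to your own classes is left entirely to the developer.
                    JsonObject obj = reader.readObject();
                    System.out.println(obj.getString("name") + " adds " + obj.getInt("apis") + " candidate APIs");
                }
            }
        }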
    Enabling CDI by Default: The second question asked was whether CDI should be enabled in Java EE environments by default. A significant majority of 73.3% of developers supported enabling CDI by default; only 13.8% opposed. Comments such as these two reflect a strong general support for CDI as well as a desire for better Java EE alignment with CDI: "CDI makes Java EE quite valuable!" and "Would prefer to unify EJB, CDI and JSF lifecycles". There is, however, a palpable concern around the performance impact of enabling CDI by default, as exemplified by this comment: "Java EE projects in most cases use CDI, hence it is sensible to enable CDI by default when creating a Java EE application. However, there are several issues if CDI is enabled by default: scanning can be slow - not all libs use CDI (hence, scanning is not needed)". Another significant concern appears to be around backwards compatibility and conflict with other JSR 330 implementations like Spring: "I am leaning towards yes, however can easily imagine situations where errors would be caused by automatically activating CDI, especially in cases of backward compatibility where another DI engine (such as Spring and the like) happens to use the same mechanics to inject dependencies and in that case there would be an overlap in injections and probably an uncertain outcome". Some commenters, such as this one, attempt to suggest solutions to these potential issues: "If you have Spring in use and use javax.inject.Inject then you might get some unexpected behavior that could be equally confusing. I guess there will be a way to switch CDI off. I'm tempted to say yes but am cautious for this reason".

    Consistent Usage of @Inject: The third question was around using CDI/JSR 330 @Inject consistently vs. allowing JSRs to create their own injection annotations. A slight majority of 53.3% of developers supported using @Inject consistently across JSRs; 28.8% said using custom injection annotations is OK, while 18.0% were not sure. The vast majority of commenters were strongly supportive of CDI and of general Java EE alignment with CDI, as illustrated by these comments: "Dependency Injection should be standard from now on in EE. It should use CDI as that is the DI mechanism in EE and is quite powerful. Having a new JSR specific DI mechanism to deal with just means more reflection, more proxies. JSRs should also be constructed to allow some of their objects Injectable. @Inject @TransactionalCache or @Inject @JMXBean etc...they should define the annotations and stereotypes to make their code less procedural. Dog food it. If there is a shortcoming in CDI for a JSR fix it and we will all be grateful" and "We're trying to make this a comprehensive platform, right? Injection should be a fundamental part of the platform; everything else should build on the same common infrastructure. Each-having-their-own is just a recipe for chaos and having to learn the same thing 10 different ways".
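
    As a minimal sketch of the style the first commenter is describing - one standard injection annotation plus CDI qualifiers, rather than a new JSR-specific mechanism - consider the following (the @TransactionalCache qualifier and the Cache interface are hypothetical, invented here just to make the example self-contained):

        import java.lang.annotation.ElementType;
        import java.lang.annotation.Retention;
        import java.lang.annotation.RetentionPolicy;
        import java.lang.annotation.Target;
        import javax.inject.Inject;
        import javax.inject.Qualifier;

        // Hypothetical qualifier, echoing the "@Inject @TransactionalCache" comment above.
        @Qualifier
        @Retention(RetentionPolicy.RUNTIME)
        @Target({ElementType.TYPE, ElementType.METHOD, ElementType.FIELD, ElementType.PARAMETER})
        @interface TransactionalCache {}

        // Hypothetical cache abstraction, only here so the sketch compiles.
        interface Cache {
            void put(String key, Object value);
        }

        public class OrderService {
            // One standard injection annotation (JSR 330 @Inject) plus a qualifier,
            // instead of a separate JSR-specific injection annotation.
            @Inject
            @TransactionalCache
            private Cache cache;

            public void remember(String orderId) {
                cache.put(orderId, "processed");
            }
        }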
    Expanding the Use of @Stereotype: The fourth question was about expanding CDI @Stereotype to cover annotations across Java EE, beyond just CDI. A significant majority of 62.3% of developers supported expanding the use of @Stereotype; only 13.3% opposed (a sketch of what stereotypes look like today appears after this excerpt). A majority of commenters supported the idea, as well as the theme of general CDI/Java EE alignment, as expressed in these examples: "Just like defining new types for (compositions of) existing classes, stereotypes can help make software development easier" and "This is especially important if many EJB services are decoupled from the EJB component model and can be applied via individual annotations to Java EE components. @Stateless is a nicely compact annotation. Code will not improve if that will have to be applied in the future as @Transactional, @Pooled, @Secured, @Singlethreaded, @....". Some, however, expressed concerns around increased complexity, such as this commenter: "Could be very convenient, but I'm afraid if it wouldn't make some important class annotations less visible".

    Expanding Interceptor Use: The final set of questions was about expanding interceptors further across Java EE. A very solid 96.3% of developers wanted to expand interceptor use to all Java EE components, and 35.7% even wanted to expand interceptors to other Java EE managed classes. Most developers (54.9%) were not sure if there is any place where injection is supported that should not support interceptors; 32.8% thought any place that supports injection should also support interceptors, and only 12.2% were certain that there are places where injection should be supported but not interceptors. The comments reflected the diversity of opinions, but were generally supportive of interceptors: "I think interceptors are as fundamental as injection and should be available anywhere in the platform" and "The whole usage of interceptors still needs to take hold in Java programming, but it is a powerful technology that needs some time in the Sun. Basically it should become part of Java SE, maybe the next step after lambdas?". A distinct chain of thought separated interceptors from filters and listeners: "I think that the Servlet API already provides a rich set of possibilities to hook yourself into different Servlet container events. I don't find a need to 'pollute' the Servlet model with the Interceptors API".
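
    For readers less familiar with stereotypes, here is a minimal sketch of what @Stereotype already does within CDI today; the survey question was about extending this same idea to annotations across the rest of Java EE. The @Action annotation and CheckoutAction class are hypothetical:

        import java.lang.annotation.ElementType;
        import java.lang.annotation.Retention;
        import java.lang.annotation.RetentionPolicy;
        import java.lang.annotation.Target;
        import javax.enterprise.context.RequestScoped;
        import javax.enterprise.inject.Stereotype;
        import javax.inject.Named;

        // A stereotype bundles recurring annotations into one: every class
        // annotated @Action becomes a named, request-scoped bean.
        @Stereotype
        @Named
        @RequestScoped
        @Retention(RetentionPolicy.RUNTIME)
        @Target(ElementType.TYPE)
        @interface Action {}

        @Action
        public class CheckoutAction {
            public String execute() {
                return "checkout";
            }
        }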
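
    And to make the interceptor discussion concrete, here is a minimal sketch using the existing javax.interceptor API. The @Timed binding and TimingInterceptor are hypothetical, and in a real application the interceptor would still need to be enabled (for example, in beans.xml):

        import java.lang.annotation.ElementType;
        import java.lang.annotation.Retention;
        import java.lang.annotation.RetentionPolicy;
        import java.lang.annotation.Target;
        import javax.interceptor.AroundInvoke;
        import javax.interceptor.Interceptor;
        import javax.interceptor.InterceptorBinding;
        import javax.interceptor.InvocationContext;

        // Hypothetical binding annotation for this sketch.
        @InterceptorBinding
        @Retention(RetentionPolicy.RUNTIME)
        @Target({ElementType.TYPE, ElementType.METHOD})
        @interface Timed {}

        @Timed
        @Interceptor
        public class TimingInterceptor {

            @AroundInvoke
            public Object time(InvocationContext ctx) throws Exception {
                long start = System.nanoTime();
                try {
                    return ctx.proceed(); // run the intercepted method
                } finally {
                    System.out.printf("%s took %d us%n",
                            ctx.getMethod().getName(), (System.nanoTime() - start) / 1000);
                }
            }
        }

    Any bean method annotated @Timed would then be wrapped by this interceptor; the survey questions were essentially about how far beyond today's components this model should reach.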

    Read the article
