Search Results

Search found 244 results on 10 pages for 'disadvantage'.

Page 9 of 10

  • Media Archive System with branches?

    - by Ian McEwen
    In short, how can I get VCS features (revisioning, branching, and deduplication) for a media collection that's far too large for most/all VCS systems? Background I have a 300GB music folder; unfortunately, I only have the hard drive space for this on my desktop system. However, a good portion of my collection is FLAC; therefore, I could theoretically have a space-optimized version in which I transcode all the FLAC to mp3 or some other lossy format, and use only that version on the laptop. However, a portion of my collection isn't FLAC. And that which isn't FLAC shouldn't be transcoded to an equivalent format; it won't have any space savings, which is the point. Moreover, it shouldn't be duplicated: the mp3/ogg portions of the collection should probably be exactly the same files. Thoughts One solution is to have format-specific organization of my music folders, and use some script to transcode the FLAC directory to mp3 or such into another directory. Another is some sort of hack using entirely separate copies and symbolic links for deduplication, or something similar. But these also have the disadvantage of lacking versioning; I'd like to be able to reorganize my music collection, retag things, etc. and save history. This isn't key, but would be awfully nice. I can't see it as entirely unreasonable to set up VCS hooks or something equivalent to keep directory structure synced between two copies, update tags, and transcode FLAC automatically into the space-optimized copy. Basically, the system I really want is a version control system. Two branches: one archival/desktop branch including the FLAC, one space-optimized/laptop branch without it; most VCSes would deal well with whole chunks being the same files by compressing in a reasonable way (i.e. don't keep two copies of the same data). I could also do a lot of what I talk about above with hooks. But I don't know of any VCS that would deal with a 300GB repository with almost 20k files. Many of them would just not even initialize the whole affair; others would just do it inexpressibly slowly or otherwise badly. checkpoint looks like it's designed for something close (it's at least for media), but wouldn't do deduplication well (and I'm not convinced I'd be able to script it to do things like automatic transcoding and directory-structure syncing). So. Is there anything out there that can do all this, or should I consider it a programming project?

    Read the article

  • Hardware for multipurpose home server

    - by Michael Dmitry Azarkevich
    Hi guys, I'm looking to set up a multipurpose home server and hoped you could help me with the hardware selection. First of all, the services it will provide: Hosting a MySQL database (for training and testing purposes) FTP server Personal Mail Server Home media server So with this in mind I've done some research, and found some viable solutions: A standard PC with the appropriate software (Either second hand or new) A non-solid state mini-ITX system A solid state, fanless mini-ITX system I've also noted the pros and cons of each system: A standard second hand PC with old hardware would be the cheapest option. It could also have insufficient processing power, too little RAM and generally faulty hardware. Also, huge power consumption, heat generation and noise levels. A standard new PC would have top-notch hardware and would stay that way for quite some time, so it's a good investment. But again, the main problem is power consumption, heat generation and noise levels. A non-solid state mini-ITX system would have the advantages of lower power consumption, lower cost (as far as I can see) and long-lasting hardware. But it will generate noise and heat, which will be even worse because of the size. A solid state, fanless mini-ITX system would have all the advantages of a non-solid state mini-ITX but with minimal noise and heat. The main disadvantage is the read/write problems of flash memory. All in all I'm leaning towards a non-solid state mini-ITX because of the read/write issues of flash memory. So, after this overview of what I do know, my questions are: Can all these services even be provided from a single server? To my best understanding they are, but then again, I might be wrong. Are any of these solutions viable? If yes, which one is the best for my purposes? If not, what would you suggest? Also, on a more software-oriented note: OS-wise, I'm planning to run Linux. I'm currently thinking of four options I've been recommended: CentOS, Gentoo, DSL (Damn Small Linux) and LFS (Linux From Scratch). Any thoughts on this? Any other distro you would recommend? Regarding FTP services, I've heard good things about FileZilla. Does anyone have any experience with it? Do you recommend it? Do you recommend something else? Regarding the Mail service, I know nothing about this except that it exists. Any software you recommend for this task? Home media, same as mail service. Any recommended software? Thank you very much.

    Read the article

  • Convert a PDF eBook to ePub Format

    - by Matthew Guay
    Would you like to read a PDF eBook on an eReader or mobile device, but aren’t happy with the performance? Here’s how you can convert your PDFs to the popular ePub format so you can easily read them on any device. PDFs are a popular format for eBooks since they render the same on any device and can preserve the exact layout of the print book. However, this benefit is their major disadvantage on mobile devices, as you often have to zoom and pan back and forth to see everything on the page. ePub files, on the other hand, are an increasingly popular option. They can reflow to fill your screen instead of sticking to a strict layout style. With the free Calibre program, you can quickly convert your PDF eBooks to ePub format. Getting Started Download the Calibre installer (link below) for your operating system, and install as normal. Calibre works on recent versions of Windows, OS X, and Linux. The Calibre installer is very streamlined, so the install process was quite quick. Calibre is a great application for organizing your eBooks. It can automatically sort your books by their metadata, and even display their covers in a Coverflow-style viewer. To add an eBook to your library, simply drag-and-drop the file into the Calibre window, or click Add books at the top. Here you can choose to add all the books from a folder and more. Calibre will then add the book(s) to your library, import the associated metadata, and organize them in the catalog. Convert your Books Once you’ve imported your books into Calibre, it’s time to convert them to the format you want. Select the book or books you want to convert, and click Convert E-books. Select whether you want to convert them individually or bulk convert them. The converter window has lots of options, so you can get your ePub book exactly the way you want. You can simply click Ok and go with the defaults, or you can tweak the settings. Do note that the conversion will only work successfully with PDFs that contain actual text. Some PDFs are actually images scanned in from the original books; these will appear just like the PDF after the conversion, and won’t be any easier to read. On the first tab, you’ll notice that Calibre will repopulate most of the metadata fields with info from your PDF. It will also use the first page of the PDF as the cover. Edit any of the information that may be incorrect, and add any additional information you want associated with the book. If you want to convert your eBook to a format other than ePub, Calibre’s got you covered, too. On the top right, you can choose to output the converted eBook into many different file formats, including the Kindle-friendly MOBI format. One other important settings page is the Structure Detection tab. Here you can choose to have it remove headers and footers in the converted book, as well as automatically detect chapter breaks. Click Ok when you’ve finished choosing your settings and Calibre will convert the book. This may take a few minutes, depending on the size of the PDF. If the conversion seems to be taking too long, you can click Show job details for more information on the progress. The conversion usually works well, but we did have one job freeze on us. When we checked the job details, it indicated that the PDF was copy-protected. Most PDF eBooks, however, worked fine. Now, back in the main Calibre window, select your book and save it to disk.
You can choose to save only the EPUB format, or you can select Save to disk to save all formats of the book to your computer. You can also view the ePub file directly in Calibre’s built-in eBook viewer. This is the PDF book we converted, and it looks fairly good in the converted format. It does have some odd line breaks and some misplaced numbers, but on the whole, the converted book is much easier to read, especially on small mobile devices. Even images get included inline, so you shouldn’t be missing anything from the original eBook. Conclusion Calibre makes it simple to read your eBooks in any format you need. It is a project in constant development, and regular updates add better stability and features. Whether you want to read your PDF eBooks on a Sony Reader, Kindle, netbook or smartphone, your books will now be more accessible than ever. And with thousands of free PDF eBooks out there, you’ll be sure to always have something to read. If you’d like some geeky PDF eBooks, Microsoft Press is offering a number of free PDF eBooks right now. Check them out at this link (Account Required). Download the Calibre eBook program

    Read the article

  • Dual Boot Oracle Solaris 11/11 and Linux (Ubuntu 11.10/grub2)

    - by HartmutStreppel
    After having worked with Open Solaris on my laptop first, then with an upgrade to Oracle Solaris 11 Express, I finally did a fresh install of Oracle Solaris 11/11, when it became available. I am not a big fan of upgrades as I know that I am not the perfect administrator and my system gets spoiled with unclean configurations, outdated packages and wrong settings that cannot be reversed. So I prefer to start from scratch. Especially with Oracle Solaris 11 I wanted to have a system just like a customer would have it in production. The installation was smooth - more or less, if I had only read the documentation a bit better in advance. For a number of reasons I prefer a dual boot system. The most important one is, that especially with mobile devices you often run into network problems. And you have a hard time figuring out where the problem is: in your laptop hardware, in the OS you are running, or really within the network. If you have an alternate OS to boot, you can exclude the OS and your hardware. This makes you feel better. The second OS should be a Linux variant - and for some not so obvious reason I decided to go with the latest Ubuntu release (11.10). It replaced a very old Open Suse installation that had not been booted for a while. I knew that it was probably best to install Ubuntu first and then Oracle Solaris 11, as this would put the right boot information for Oracle Solaris  into the MBR and onto the root partition. But then, how to enable dual boot with the 2 OSes. Searching the web one mainly finds information about dual boot of: Linux and Linux Linux and Windows I do not want to explain which wrong configurations I worked through, but I prefer to explain the final setup, which is extremely simple, and I am wondering why this is not covered as the easiest solution for most dual boot setups. I use chainloader from and to both OS'es, with the only disadvantage that I have to confirm two grub menus each time I want to boot the "other" OS. Still there were some hurdles to jump over: Ubuntu did not like getting its boot blocks being placed on the partition instead of the disk; I must admit that I do not fully understand why. But using the --force option you could get that done Ubuntu needs an active partition; that was easy to achieve grub2 uses a different numbering scheme for the partitions. That is in the docs, if you read them. BTW: The usual disclaimer is valid. There is  no guarantee that what I describe works or works well. Please back up your data carefully before trying any of this. So, Oracle Solaris 11 is installed on the first partition and Ubuntu on the third. With Ubtuntu things initially were a bit more complicated, as I did not know how to boot it. And the live CD did not offer the capability to boot the on-disk image (at least I did not find it). So I booted the live CD, mounted the Ubuntu installation at /mnt and wrote the boot blocks into the partition. This is something that does not seem to be recommended, at least grub-install refrained from doing what I intended. After a bit more research I was bold enough to use the --force option and wrote the boot blocks to /dev/sda3 using grub-install --boot-directory=/mnt/boot --force --no-floppy /dev/sda3 So, I now had a system with the Solaris boot loader in the MBR, Solaris specific boot blocks on the Solaris root partition and Ubuntu specific boot blocks in the Ubuntu partition. I just had to chain them together and I was done. 
Oracle Solaris 11: I have added the following lines to /rpool/boot/grub/menu.lst (be aware of the /rpool!):

    title Ubuntu 11.10
    root (hd0,2)
    makeactive
    chainloader +1
    boot

The Ubuntu root file system sits on the third partition (/dev/sda3). Ubuntu: I have added the following lines to /etc/grub.d/40_custom:

    menuentry "Solaris 11/11" {
      set root=(hd0,1)
      chainloader +1
    }

Two things need to be mentioned: a) grub2 starts numbering partitions with 1; so my /dev/sda1 is partition 1. b) Oracle Solaris boots without the partition being made active (btw: the command to make a partition active with grub2 is "parttool (hd0,1) boot+", which currently does not work for me). As debugging grub is a bit complicated, I used the grub CLI to perform some tests and also used a tool that I found on sourceforge.net that was able to prepare a list of all boot loaders on all partitions. This told me that the basic setup was correct. Unfortunately I lost it in the live CD environment. I hope this is helpful for some of the readers. Hartmut

    Read the article

  • A Semantic Model For Html: TagBuilder and HtmlTags

    - by Ryan Ohs
    In this post I look into the code smell that is HTML literals and show how we can refactor these pesky strings into a friendlier and more maintainable model.   The Problem When I started writing MVC applications, I quickly realized that I built a lot of my HTML inside HtmlHelpers. As I did this, I ended up moving quite a bit of HTML into string literals inside my helper classes. As I wanted to add more attributes (such as classes) to my tags, I needed to keep adding overloads to my helpers. A good example of this end result is the default html helpers that come with the MVC framework. Too many overloads make me crazy! The problem with all these overloads is that they quickly muck up the API and nobody can remember exactly what order the parameters go in. I've seen many presenters (including members of the ASP.NET MVC team!) stumble before realizing that their view wasn't compiling because they needed one more null parameter in the call to Html.ActionLink(). What if instead of writing Html.ActionLink("Edit", "Edit", null, new { @class = "navigation" }) we could do Html.LinkToAction("Edit").Text("Edit").AddClass("navigation")? Wouldn't that be much easier to remember and understand?  We can do this if we introduce a semantic model for building our HTML.   What is a Semantic Model? According to Martin Fowler, "a semantic model is an in-memory representation, usually an object model, of the same subject that the domain specific language describes." In our case, the model would be a set of classes that know how to render HTML. By using a semantic model we can free ourselves from dealing with strings and instead output the HTML (typically via ToString()) once we've added all the elements and attributes we desire to the model. There are two primary semantic models available in ASP.NET MVC: MVC 2.0's TagBuilder and FubuMVC's HtmlTags.   TagBuilder TagBuilder is the html builder that is available in ASP.NET MVC 2.0. I'm not a huge fan but it gets the job done -- for simple jobs.  Here's an overview of how to use TagBuilder. See my Tips section below for a few comments on that example. The disadvantage of TagBuilder is that unless you wrap it up with your own classes, you still have to write the actual tag name over and over in your code, e.g. new TagBuilder("div") instead of new DivTag(). I also think its method names are a little too long. Why not have AddClass() instead of AddCssClass() or Text() instead of SetInnerText()? What those methods are doing should be pretty obvious even in the short form. I also don't like that it wants to generate an id attribute from your input instead of letting you set it yourself using external conventions. (See GenerateId() and IdAttributeDotReplacement.) Obviously these come from Microsoft's default approach to MVC but may not be optimal for all programmers.   HtmlTags HtmlTags is in my opinion the much better option for generating html in ASP.NET MVC. It was actually written as a part of FubuMVC but is available as a stand-alone library. HtmlTags provides a much cleaner syntax for writing HTML. There are classes for most of the major tags and it's trivial to create additional ones by inheriting from HtmlTag. There are also methods on each tag for the common attributes. For instance, FormTag has an Action() method. The SelectTag class allows you to set the default option or first option independent from adding other options. With TagBuilder there isn't even an abstraction for building selects! The project is open source and always improving. 
I'll hopefully find time to submit some of my own enhancements soon.   Tips 1) It's best not to have insanely overloaded html helpers. Use fluent builders. 2) In html helpers, return the TagBuilder/tag itself (not a string!) so that you can continue to add attributes outside the helper; see my first sample above. 3) Create a static entry point into your builders. I created a static Tags class that gives me access all the HtmlTag classes I need. This way I don't clutter my code with "new" keywords. eg. Tags.Div returns a new DivTag instance. 4) If you find yourself doing something a lot, create an extension method for it. I created a Nest() extension method that reads much more fluently than the AddChildren() method. It also accepts a params array of tags so I can very easily nest many children.   I hope you have found this post helpful. Join me in my war against HTML literals! I’ll have some more samples of how I use HtmlTags in future posts.
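To make tips #3 and #4 concrete, here is a minimal sketch of a static entry point and a Nest() extension method. This is my own illustration, not code from the post; it assumes the HtmlTags classes (HtmlTag, DivTag) and the AddChildren() method mentioned above, and the exact signatures in the library may differ.

// Hypothetical static entry point (tip #3): no "new" keywords scattered through the views.
public static class Tags
{
    public static DivTag Div { get { return new DivTag(); } }
    public static HtmlTag Span { get { return new HtmlTag("span"); } }
}

// Hypothetical fluent helper (tip #4): accepts a params array of children and returns
// the parent so the chain can continue. Assumes HtmlTag exposes AddChildren() as
// described in the post.
public static class HtmlTagExtensions
{
    public static HtmlTag Nest(this HtmlTag parent, params HtmlTag[] children)
    {
        parent.AddChildren(children);
        return parent;
    }
}

Usage would then read something like Tags.Div.AddClass("container").Nest(Tags.Span.Text("hello")), with no HTML literals in sight.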

    Read the article

  • Qt static build with static mysql plugin confusion

    - by bdiloreto
    I have built a Qt application which uses the MySQL library, but I am confused by the documentation on static versus shared builds. From the Qt documentation at http://doc.qt.nokia.com/4.7/deployment-windows.html it says: To deploy plugin-based applications we should use the shared library approach. And on http://doc.qt.nokia.com/4.7/deployment.html, it says: Static linking results in a stand-alone executable. The advantage is that you will only have a few files to deploy. The disadvantages are that the executables are large and with no flexibility and that you cannot deploy plugins. To deploy plugin-based applications, you can use the shared library approach. But on http://doc.qt.nokia.com/latest/plugins-howto.html, it seems to say the opposite, giving directions on how to use static plugins: Plugins can be linked statically against your application. If you build the static version of Qt, this is the only option for including Qt's predefined plugins. Using static plugins makes the deployment less error-prone, but has the disadvantage that no functionality from plugins can be added without a complete rebuild and redistribution of the application. ... To link statically against those plugins, you need to use the Q_IMPORT_PLUGIN() macro in your application and you need to add the required plugins to your build using QTPLUGIN. I want to build the Qt libraries statically (for easy deployment) and then use the static MySQL plugin. To do this, I did NOT use the binary distribution for Windows. Instead, I've started with the source qt-everywhere-opensource-src-4.7.4. Is the following the correct way to do a static build so that I can use the static MySQL plugin? configure -static -debug-and-release -opensource -platform win32-msvc2010 -no-qt3support -no-webkit -no-script -plugin-sql-mysql -I C:\MySQL\include -L C:\MySQL\lib This should build the Qt libraries statically AND the static plugin to be linked at run-time, correct? I would NOT need to build the MySQL plugin from source separately, correct? If I were to substitute "-qt-sql-mysql" for "-plugin-sql-mysql" above, it would include the MySQL driver directly in the Qt static libraries, in which case I would NOT need to use the plugin at all, correct? Thanks for making me unconfused!

    Read the article

  • Managed (.net) library with html-tidy like functionality?

    - by Eamon Nerbonne
    Does anybody know of an html cleaner for .NET that can parse html and (for instance) convert it to a more machine friendly format such as xhtml? I've tried the HTML Agility Pack, but that fails to correctly parse even fairly simple examples. To give an example of html that should be parsed correctly: <html><body> <ul><li>TestElem1 <li>TestElem2 <li>TestElem3 List: <ul><li>Nested1 <li>Nested2</li> <li>Nested3 </ul> <li>TestElem4 </ul> <p>paragraph 1 <p>paragraph 2 <p>paragraph 3 </body></html> li tags don't need to be closed (see spec), and neither do P tags. In other words, the above sample should be parsed as: <html><body> <ul><li>TestElem1</li> <li>TestElem2</li> <li>TestElem3 List: <ul><li>Nested1</li> <li>Nested2</li> <li>Nested3</li> </ul></li> <li>TestElem4</li> </ul> <p>paragraph 1</p> <p>paragraph 2</p> <p>paragraph 3</p> </body></html> Since the aim is to use the library on various machines, it's a big disadvantage to need to fall back to native code (such as a wrapper around html tidy) which would require extra deployment hassle and sacrifice platform independance. Any suggestions? To recap, I'm looking for: An html cleaner ala HTML tidy Must be able to deal with real world html, not just xhtml, at the very least correctly reading valid html 4 Must be able to convert to a more easily processable xml format Should be a purely managed app.

    Read the article

  • html widget communicating with server

    - by Nikita Rybak
    I'm making an html widget for websites. Let's say it will display current stock indexes. In short, an arbitrary website owner takes a code snippet from me and includes it on his webpage http://website.com/index.html. When an arbitrary user opens http://website.com/index.html, my code sends a request to my server (provider.com), which performs the necessary operations and returns information to the user's browser. When the response has arrived, the user will see the relevant stock value on http://website.com/index.html. In index.html the service could be called like this: <script type="text/javascript" src="provider.com/service.js"> </script> <div id="target_area"></div> <script type="text/javascript"> service.show("target_area", options); </script> Now, the problem is the same origin policy: I can't just send an ajax request from website.com to provider.com and return html to embed in the client's webpage. I see several solutions, which I list below, but none quite satisfy me. I wonder if you could suggest something, especially if you have some relevant experience. 1) iframe, plain and simple. Disadvantage: must have fixed dimensions + stupid scroll bars appearing in some browsers. Can be fixed with javascript, but all this browser-specific tinkering doesn't sound good to me. 2) JSONP. Problem: can't return a whole chunk of html, must return only data. Then, on the browser side, I'll have to use javascript to embed the data into an html snippet placed statically in index.html. Doesn't sound nice, because the data format is not very simple and may even change later. 3) Use a hidden iframe to do ajax requests. A bit tricky, but sounds like a way to go. Well, those are my thoughts on the subject. Are there any better ways? BTW, I tried to check some existing widgets too, but didn't find much useful information. All domain names used in this text are fictional and any resemblance is purely coincidental :)

    Read the article

  • Where to put a glossary of important terms and patterns in documentation?

    - by Tetha
    Greetings. I want to document certain patterns in the code in order to build up a consistent terminology (in order to ease communication about the software). I am, however, unsure where to define the given terms. In order to get on the same level, an example: I have a code generator. This code generator receives a certain InputStructure from the Parser (yes, the name InputStructure might be less than ideal). This InputStructure is then transformed into various subsequent datastructures (like an abstract description of the validation process). Each of these datastructures can be either transformed into another value of the same datastructure or it can be transformed into the next datastructure. This should sound like Pipes and Filters to some degree. Given this, I call an operation which takes a datastructure and constructs a value of the same datastructure a transformation, while I call an operation which takes a datastructure and produces a different follow-up datastructure a derivation. The final step of deriving a string containing code is called emitting. (So, overall, the codegenerator takes the input-structure and transforms, transforms, derives, transforms, derives and finally emits). I think emphasizing these terms will be beneficial in communication, because then it is easy to talk about things. If you hear "transformation", you know "Ok, I only need to think about these two datastructures"; if you hear "emitting", you know "Ok, I only need to know this datastructure and the target language." However, where do I document these patterns? The current code base uses visitors and offers classes named like ValidatorTransformationBase<ResultType> (or InputStructureTransformationBase<ResultType>, and so on). I do not really want to add the definition of such terms to the interfaces, because in that case, I'd have to repeat myself on each and every interface, which clearly violates DRY. I am considering emphasizing the distinction between Transformations and Derivations by adding further interfaces (I would have to think about a better name for the TransformationBase-classes, but then I could do things like ValidatorTransformation extends ValidatorTransformationBase<Validator>, or ValidatorDerivationFromInputStructure extends InputStructureTransformation<Validator>). I also think I should add a custom page to the existing doxygen documentation, such as "Glossary" or "Architecture Principles", which contains such principles. The only disadvantage of this would be that a contributor would need to find this page in order to actually learn about this. Am I missing possibilities or am I judging something wrong here in your opinion? -- Regards, Tetha

    Read the article

  • Rails ActiveResource Associations

    - by brad
    I have some ARes models (see below) that I'm trying to use associations with (which seems to be wholly undocumented and maybe not possible, but I thought I'd give it a try). So on my service side, my ActiveRecord object will render something like render :xml => @group.to_xml(:include => :customers) (see generated xml below). The models Group and Customer are HABTM. On my ARes side, I'm hoping that it can see the <customers> xml attribute and automatically populate the .customers attribute of that Group object, but the has_many etc methods aren't supported (at least as far as I can tell). So I'm wondering how ARes does its reflection on the XML to set the attributes of an object. In AR for instance I could create a def customers=(customer_array) and set it myself, but this doesn't seem to work in ARes. One suggestion I found for an "association" is to just have a method def customers Customer.find(:all, :conditions => {:group_id => self.id}) end But this has the disadvantage that it makes a second service call to look up those customers... not cool. I'd like my ActiveResource model to see the customers attribute in the XML and automatically populate my model. Anyone have any experience with this?? # My Services class Customer < ActiveRecord::Base has_and_belongs_to_many :groups end class Group < ActiveRecord::Base has_and_belongs_to_many :customers end # My ActiveResource accessors class Customer < ActiveResource::Base; end class Group < ActiveResource::Base; end # XML from /groups/:id?customers=true <group> <domain>some.domain.com</domain> <id type="integer">266</id> <name>Some Name</name> <customers type="array"> <customer> <active type="boolean">true</active> <id type="integer">1</id> <name>Some Name</name> </customer> <customer> <active type="boolean" nil="true"></active> <id type="integer">306</id> <name>Some Other Name</name> </customer> </customers> </group>

    Read the article

  • Advantage of creating a generic repository vs. specific repository for each object?

    - by LuckyLindy
    We are developing an ASP.NET MVC application, and are now building the repository/service classes. I'm wondering if there are any major advantages to creating a generic IRepository interface that all repositories implement, vs. each Repository having its own unique interface and set of methods. For example: a generic IRepository interface might look like (taken from this answer): public interface IRepository : IDisposable { T[] GetAll<T>(); T[] GetAll<T>(Expression<Func<T, bool>> filter); T GetSingle<T>(Expression<Func<T, bool>> filter); T GetSingle<T>(Expression<Func<T, bool>> filter, List<Expression<Func<T, object>>> subSelectors); void Delete<T>(T entity); void Add<T>(T entity); int SaveChanges(); DbTransaction BeginTransaction(); } Each Repository would implement this interface (e.g. CustomerRepository:IRepository, ProductRepository:IRepository, etc). The alternative that we've followed in prior projects would be: public interface IInvoiceRepository : IDisposable { EntityCollection<InvoiceEntity> GetAllInvoices(int accountId); EntityCollection<InvoiceEntity> GetAllInvoices(DateTime theDate); InvoiceEntity GetSingleInvoice(int id, bool doFetchRelated); InvoiceEntity GetSingleInvoice(DateTime invoiceDate, int accountId); //unique InvoiceEntity CreateInvoice(); InvoiceLineEntity CreateInvoiceLine(); void SaveChanges(InvoiceEntity invoice); //handles inserts or updates void DeleteInvoice(InvoiceEntity invoice); void DeleteInvoiceLine(InvoiceLineEntity invoiceLine); } In the second case, the expressions (LINQ or otherwise) would be entirely contained in the Repository implementation; whoever is implementing the service just needs to know which repository function to call. I guess I don't see the advantage of writing all the expression syntax in the service class and passing it to the repository. Wouldn't this mean easy-to-mess-up LINQ code is being duplicated in many cases? For example, in our old invoicing system, we call InvoiceRepository.GetSingleInvoice(DateTime invoiceDate, int accountId) from a few different services (Customer, Invoice, Account, etc). That seems much cleaner than writing the following in multiple places: rep.GetSingle(x => x.AccountId == someId && x.InvoiceDate == someDate.Date); The only disadvantage I see to using the specific approach is that we could end up with many permutations of Get* functions, but this still seems preferable to pushing the expression logic up into the Service classes. What am I missing?
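    For illustration only (not from the question): the two styles can also be combined, with a specific repository wrapping the generic one so the easy-to-mess-up expression lives in exactly one place. This sketch assumes the IRepository and InvoiceEntity types shown above, with AccountId and InvoiceDate properties on InvoiceEntity.

    public class InvoiceRepository
    {
        private readonly IRepository repository; // the generic repository from the first sample

        public InvoiceRepository(IRepository repository)
        {
            this.repository = repository;
        }

        // The expression is written once, here, instead of being repeated in every service.
        public InvoiceEntity GetSingleInvoice(DateTime invoiceDate, int accountId)
        {
            return repository.GetSingle<InvoiceEntity>(
                x => x.AccountId == accountId && x.InvoiceDate == invoiceDate.Date);
        }
    }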

    Read the article

  • How can I model the data in a multi-language data editor in WPF with MVVM?

    - by Patrick Szalapski
    Are there any good practices to follow when designing a model/ViewModel to represent data in an app that will view/edit that data in multiple languages? Our top-level class--let's call it Course--contains several collection properties, say Books and TopicsCovered, which each might have a collection property among its data. For example, the data needs to represent course1.Books.First().Title in different languages, and course1.TopicsCovered.First().Name in different languages. We want an app that can edit any of the data for one given course in any of the available languages--as well as edit non-language-specific data, perhaps the Author of a Book (i.e. course1.Books.First().Author). We are having trouble figuring out how best to set up the model to enable binding in the XAML view. For example, do we replace (in the single-language model) each String with a collection of LanguageSpecificString instances? So to get the author in the current language: course1.Books.First().Author.Where(Function(a) a.Language = CurrentLanguage).SingleOrDefault If we do that, we cannot easily bind to any value in one given language, only to the collection of language values such as in an ItemsControl. <TextBox Text={Binding Author.???} /> <!-- no way to bind to the current language author --> Do we replace the top-level Course class with a collection of language-specific Courses? So to get the author in the current language: course1.GetLanguage(CurrentLanguage).Books.First.Author If we do that, we can only easily work with one language at a time; we might want a view to show one language and let the user edit the other. <TextBox Text={Binding Author} /> <!-- good --> <TextBlock Text={Binding ??? } /> <!-- no way to bind to the other language author --> Also, that has the disadvantage of not representing language-neutral data as such; every property (such as Author) would seem to be in multiple languages. Even non-string properties would be in multiple languages. Is there an option in between those two? Is there another way that we aren't thinking of? I realize this is somewhat vague, but it would seem to be a fairly common problem to design for. Note: This is not a question about providing a multilingual UI, but rather about actually editing multi-language data in a flexible way.
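    To make the first option concrete, here is a rough sketch of the model shape it implies (all names are illustrative, not from the question): every translatable value becomes a collection of language-specific strings, while language-neutral data keeps its ordinary shape.

    public class LanguageSpecificString
    {
        public string Language { get; set; }
        public string Value { get; set; }
    }

    public class Book
    {
        // Translatable: one LanguageSpecificString entry per language.
        public ObservableCollection<LanguageSpecificString> Title { get; set; }

        // Language-neutral data stays a plain property.
        public string Author { get; set; }
    }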

    Read the article

  • ASP.NET MVC 2 AJAX dilemma: Lose Models concept or create unmanageable JavaScript

    - by Slightly Frustrated
    Hi, Ok, let's assume we are working with ASP.NET MVC 2 (latest and greatest preview) and we want to create an AJAX user interface with jQuery. So what are our real options here? Option 1 - Pass Json from the Controller to the view, and then the view submits Json back to the controller. This means (in the order given): User opens some View (let's say - /Invoices/January) which has to visualize a list of data (e.g. IEnumerable<X.Y.Z.Models.Invoice>) Controller retrieves the Model from the repository (assuming we are using the repository pattern). Controller creates a new instance of a class which we will serialize to Json. The reason we do this is because the model may not be serializable (circular reference ftl) Controller populates the soon-to-be-serialized class with data Controller serializes the class to Json and passes it to the view. User does some change and submits the 'form' The View submits back Json to the controller The Controller now must 'manually' validate the input, because the Json passed does not bind to a Model See, if our View is communicating with the controller via Json, we lose the Model validation, which IMHO is an incredible disadvantage. In this case, forget about data annotations and stuff. Option 2 - Ok, the alternative to the first approach is to pass the Models to the Views, which is the default behavior in the template when you start a new project. We pass a strongly typed model to the view The view renders the appropriate html and javascript, sticking to the model property names. This is important! The user submits the form. If we stick to the model names, when we .serialize() the form and submit it to the controller it will map to a model. There is no Json mapping. The submitted form directly binds to a strongly typed model, hence we can use the model validation. E.g. we keep the business logic where it should be. The problem with this approach is that if we refactor some of the Models (change property names, types, etc), the javascript we wrote would become invalid. We will have to manually refactor the scripting and hope we don't miss something. There is no way you can test it either. Ok, the question is - how to write an AJAX front end, which keeps the business logic validation in the model (e.g. controller passes and receives a Model type), but at the same time doesn't screw up the javascript and html when we refactor the model?
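    For reference, the server side of Option 2 stays strongly typed. Here is a minimal sketch (the Invoice model and action name are hypothetical, not taken from the post) of an action that receives the .serialize()'d form through normal model binding, so data annotations still run, and reports validation errors back to the jQuery caller as Json:

    public class InvoicesController : Controller
    {
        [HttpPost]
        public ActionResult Save(Invoice model)
        {
            // The default model binder maps posted form fields onto Invoice properties
            // by name, so the model's validation attributes are evaluated as usual.
            if (!ModelState.IsValid)
            {
                var errors = ModelState
                    .Where(kvp => kvp.Value.Errors.Count > 0)
                    .ToDictionary(kvp => kvp.Key,
                                  kvp => kvp.Value.Errors.Select(e => e.ErrorMessage).ToArray());
                return Json(new { success = false, errors });
            }

            // ... persist the model here ...
            return Json(new { success = true });
        }
    }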

    Read the article

  • Are there any drawbacks to class-based Javascript injection?

    - by jonathanconway
    A phenomena I'm seeing more and more of is Javascript code that is tied to a particular element on a particular page, rather than being tied to kinds of elements or UI patterns. For example, say we had a couple of animated menus on a page: <ul id="top-navigation"> ... </ul> <!-- ... --> <ul id="product-list"> ... </ul> These two menus might exist on the same page or on different pages, and some pages mightn't have any menus. I'll often see Javascript code like this (for these examples, I'm using jQuery): $(document).ready(function() { $('ul#top-navigation').dropdownMenu(); $('ul#product-selector').dropdownMenu(); }); Notice the problem? The Javascript is tightly coupled to particular instances of a UI pattern rather than the UI pattern itself. Now wouldn't it be so much simpler (and cleaner) to do this instead? - $(document).ready(function() { $('ul.dropdown-menu').dropdownMenu(); }); Then we can put the 'dropdown-menu' class on our lists like so: <ul id="top-navigation" class="dropdown-menu"> ... </ul> <!-- ... --> <ul id="product-list" class="dropdown-menu"> ... </ul> This way of doing things would have the following benefits: Simpler Javascript - we only need to attach once to the class. We avoid looking for specific instances that mightn't exist on a given page. If we remove an element, we don't need to hunt through the Javascript to find the attach code for that element. I believe techniques similar to this were pioneered by certain articles on alistapart.com. I'm amazed these simple techniques still haven't gained widespread adoption, and I still see 'best-practice' code-samples and Javascript frameworks referring directly to UI instances rather than UI patterns. Is there any reason for this? Is there some big disadvantage to the technique I just described that I'm unaware of?

    Read the article

  • Is the recent trend toward widescreen (16:9) computer monitors a plus or minus for programmers?

    - by DanM
    It's almost gotten to the point where you can't buy a conventional (4:3) monitor anymore. Pretty much everything is widescreen. This is fine for watching movies or TV, but is it good or bad for programming? My initial thoughts on the issue are that widescreens are a net negative for programmers. Here are some of the disadvantages I see: Poor space utiliziation One disadvantage of widescreens you can't argue with is that they offer poor space utilization for the amount of total pixels you get. For example, my Thinkpad, which I bought just before the widescreen craze, has a 15" monitor with a native resolution of 1600 x 1200. The newer 15.4" Thinkpads run at most 1680 x 1050. So (if you do the math) you get fewer pixels in a wider (but not shorter) package. With desktop monitors, you pay a price in terms of desk space used. Two 1680 x 1050 monitors will simply take up more of your desk than two 1600 x 1200 monitors (assuming equal dot pitch). More scrolling If you compare a 1680 x 1050 monitor to a 1600 x 1200 monitor, you get 80 extra pixels of width but 150 fewer pixels of height. The height reduction means you lose approximately 11 lines of code. That's less you can see on the screen at one time and more scrolling you have to do. This harms productivity, maybe not dramatically, but insidiously. Less room for wide panels Widescreens also mean you lose space for wide but short panels common in programming environments. If you use Visual Studio, for example, your code window will be that much shorter when viewing the Find Results, Task List, or Error List (all of which I use frequently). This isn't to say the 80 pixels of extra width you get with widescreen would never be useful, but I tend to keep my lines of code short, so seeing more lines would be more valuable to me than seeing fewer, longer lines. What do you think? Do you agree/disagree? Are you now using one or more widescreen monitors for development? What resolution are you running on each? Do you ever miss the height of the traditional 4:3 monitor? Would you complain if your monitors were one inch narrower but two inches taller?

    Read the article

  • [N]Hibernate: view-like fetching properties of associated class

    - by chiccodoro
    (Felt quite helpless in formulating an appropriate title...) In my C# app I display a list of "A" objects, along with some properties of their associated "B" objects and properties of B's associated "C" objects:

    A.Name   B.Name   B.SomeValue   C.Name
    Foo      Bar      123           HelloWorld
    Bar      Hello    432           World
    ...

    To clarify: A has an FK to B, B has an FK to C (such as BankAccount - Person - Company). I have tried two approaches to load these properties from the database (using NHibernate): a fast approach and a clean approach. My eventual question is how to do a fast & clean approach. Fast approach: Define a view in the database which joins A, B, C and provides all these fields. In the A class, define properties "BName", "BSomeValue", "CName" Define a hibernate mapping between A and the View, where the needed B and C properties are mapped with update="false" insert="false" and actually stem from the B and C tables, but Hibernate is not aware of that since it uses the view. This way, the listing only loads one object per "A" record, which is quite fast. If the code tries to access the actual associated property, "A.B", I issue another HQL query to get B, set the property and update the faked BName and BSomeValue properties as well. Clean approach: There is no view. Class A is mapped to table A, B to B, C to C. When loading the list of A, I do a double left-join-fetch to get B and C as well: from A a left join fetch a.B left join fetch a.B.C B.Name, B.SomeValue and C.Name are accessed through the eagerly loaded associations. The disadvantage of this approach is that it gets slower and takes more memory, since it needs to create and map 3 objects per "A" record: an A, a B, and a C object. Fast and clean approach: I feel somehow uncomfortable using a database view that hides a join and treating it in NHibernate as if it were a table. So I would like to do something like: Have no views in the database. Declare properties "BName", "BSomeValue", "CName" in class "A". Define the mapping for A such that NHibernate fetches A and these properties together using a join SQL query as a database view would do. The mapping should still allow for defining lazy many-to-one associations for getting A.B.C. My questions: Is this possible? Is it [un]artful? Is there a better way?
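    For what it's worth, the "clean approach" query above would be issued through NHibernate roughly like this (a sketch only; the session variable and the A/B/C property names follow the post):

    // Double left-join-fetch: one round trip, with B and C eagerly populated.
    IList<A> list = session
        .CreateQuery("from A a left join fetch a.B left join fetch a.B.C")
        .List<A>();

    foreach (A a in list)
    {
        // No lazy-load round trips here; a.B and a.B.C came back with the same query.
        Console.WriteLine("{0} {1} {2} {3}", a.Name, a.B.Name, a.B.SomeValue, a.B.C.Name);
    }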

    Read the article

  • Java replacement for C macros

    - by thkala
    Recently I refactored the code of a 3rd party hash function from C++ to C. The process was relatively painless, with only a few changes of note. Now I want to write the same function in Java and I came upon a slight issue. In the C/C++ code there is a C preprocessor macro that takes a few integer variables names as arguments and performs a bunch of bitwise operations with their contents and a few constants. That macro is used in several different places, therefore its presence avoids a fair bit of code duplication. In Java, however, there is no equivalent for the C preprocessor. There is also no way to affect any basic type passed as an argument to a method - even autoboxing produces immutable objects. Coupled with the fact that Java methods return a single value, I can't seem to find a simple way to rewrite the macro. Avenues that I considered: Expand the macro by hand everywhere: It would work, but the code duplication could make things interesting in the long run. Write a method that returns an array: This would also work, but it would repeatedly result into code like this: long tmp[] = bitops(k, l, m, x, y, z); k = tmp[0]; l = tmp[1]; m = tmp[2]; x = tmp[3]; y = tmp[4]; z = tmp[5]; Write a method that takes an array as an argument: This would mean that all variable names would be reduced to array element references - it would be rather hard to keep track of which index corresponds to which variable. Create a separate class e.g. State with public fields of the appropriate type and use that as an argument to a method: This is my current solution. It allows the method to alter the variables, while still keeping their names. It has the disadvantage, however, that the State class will get more and more complex, as more macros and variables are added, in order to avoid copying values back and forth among different State objects. How would you rewrite such a C macro in Java? Is there a more appropriate way to deal with this, using the facilities provided by the standard Java 6 Development Kit (i.e. without 3rd party libraries or a separate preprocessor)?

    Read the article

  • Is throwing an exception a healthy way to exit?

    - by ramaseshan
    I have a setup that looks like this. class Checker { // member data Results m_results; // see below public: bool Check(); private: bool Check1(); bool Check2(); // .. so on }; Checker is a class that performs lengthy check computations for engineering analysis. Each type of check has a resultant double that the checker stores. (see below) bool Checker::Check() { // initilisations etc. Check1(); Check2(); // ... so on } A typical Check function would look like this: bool Checker::Check1() { double result; // lots of code m_results.SetCheck1Result(result); } And the results class looks something like this: class Results { double m_check1Result; double m_check2Result; // ... public: void SetCheck1Result(double d); double GetOverallResult() { return max(m_check1Result, m_check2Result, ...); } }; Note: all code is oversimplified. The Checker and Result classes were initially written to perform all checks and return an overall double result. There is now a new requirement where I only need to know if any of the results exceeds 1. If it does, subsequent checks need not be carried out(it's an optimisation). To achieve this, I could either: Modify every CheckN function to keep check for result and return. The parent Check function would keep checking m_results. OR In the Results::SetCheckNResults(), throw an exception if the value exceeds 1 and catch it at the end of Checker::Check(). The first is tedious, error prone and sub-optimal because every CheckN function further branches out into sub-checks etc. The second is non-intrusive and quick. One disadvantage is I can think of is that the Checker code may not necessarily be exception-safe(although there is no other exception being thrown anywhere else). Is there anything else that's obvious that I'm overlooking? What about the cost of throwing exceptions and stack unwinding? Is there a better 3rd option?

    Read the article

  • Multi-tenant ASP.NET MVC – Controllers

    - by zowens
    Part I – Introduction Part II – Foundation   The time has come to talk about controllers in a multi-tenant ASP.NET MVC architecture. This is actually the most critical design decision you will make when dealing with multi-tenancy with MVC. In my design, I took into account the design goals I mentioned in the introduction about inversion of control and what a tenant is to my design. Be aware that this is only one way to achieve multi-tenant controllers.   The Premise MvcEx (which is a sample written by Rob Ashton) utilizes dynamic controllers. Essentially a controller is “dynamic” in that multiple action results can be placed in different “controllers” with the same name. This approach is a bit too complicated for my design. I wanted to stick with plain old inheritance when dealing with controllers. The basic premise of my controller design is that my main host defines a set of universal controllers. It is the responsibility of the tenant to decide if the tenant would like to utilize these core controllers. This can be done either by straight usage of the controller or inheritance for extension of the functionality defined by the controller. The controller is resolved by a StructureMap container that is attached to the tenant, as discussed in Part II.   Controller Resolution I have been thinking about two different ways to resolve controllers with StructureMap. One way is to use named instances. This is a really easy way to simply pull the controller right out of the container without a lot of fuss. I ultimately chose not to use this approach. The reason for this decision is to ensure that the controllers are named properly. If a controller has a different named instance that the controller type, then the resolution has a significant disconnect and there are no guarantees. The final approach, the one utilized by the sample, is to simply pull all controller types and correlate the type with a controller name. This has a bit of a application start performance disadvantage, but is significantly more approachable for maintainability. For example, if I wanted to go back and add a “ControllerName” attribute, I would just have to change the ControllerFactory to suit my needs.   The Code The container factory that I have built is actually pretty simple. That’s really all we need. The most significant method is the GetControllersFor method. This method makes the model from the Container and determines all the concrete types for IController.  The thing you might notice is that this doesn’t depend on tenants, but rather containers. You could easily use this controller factory for an application that doesn’t utilize multi-tenancy. 
public class ContainerControllerFactory : IControllerFactory
{
    private readonly ThreadSafeDictionary<IContainer, IDictionary<string, Type>> typeCache;

    public ContainerControllerFactory(IContainerResolver resolver)
    {
        Ensure.Argument.NotNull(resolver, "resolver");
        this.ContainerResolver = resolver;
        this.typeCache = new ThreadSafeDictionary<IContainer, IDictionary<string, Type>>();
    }

    public IContainerResolver ContainerResolver { get; private set; }

    public virtual IController CreateController(RequestContext requestContext, string controllerName)
    {
        var controllerType = this.GetControllerType(requestContext, controllerName);
        if (controllerType == null)
            return null;

        var controller = this.ContainerResolver.Resolve(requestContext).GetInstance(controllerType) as IController;

        // ensure the action invoker is a ContainerControllerActionInvoker
        if (controller != null && controller is Controller && !((controller as Controller).ActionInvoker is ContainerControllerActionInvoker))
            (controller as Controller).ActionInvoker = new ContainerControllerActionInvoker(this.ContainerResolver);

        return controller;
    }

    public void ReleaseController(IController controller)
    {
        if (controller != null && controller is IDisposable)
            ((IDisposable)controller).Dispose();
    }

    internal static IEnumerable<Type> GetControllersFor(IContainer container)
    {
        Ensure.Argument.NotNull(container);
        return container.Model.InstancesOf<IController>().Select(x => x.ConcreteType).Distinct();
    }

    protected virtual Type GetControllerType(RequestContext requestContext, string controllerName)
    {
        Ensure.Argument.NotNull(requestContext, "requestContext");
        Ensure.Argument.NotNullOrEmpty(controllerName, "controllerName");

        var container = this.ContainerResolver.Resolve(requestContext);
        var typeDictionary = this.typeCache.GetOrAdd(container, () => GetControllersFor(container).ToDictionary(x => ControllerFriendlyName(x.Name)));

        Type found = null;
        if (typeDictionary.TryGetValue(ControllerFriendlyName(controllerName), out found))
            return found;

        return null;
    }

    private static string ControllerFriendlyName(string value)
    {
        return (value ?? string.Empty).ToLowerInvariant().Without("controller");
    }
}

    Read the article

  • Who IS Brian Solis?

    - by Michael Snow
    Q: Brian, Welcome to the WebCenter Blog. Can you tell our readers your current role and what career path brought you here? A: I’m proudly serving as a principal analyst at Altimeter Group, a research based advisory firm in Silicon Valley. My career path, well, let’s just say it’s a long and winding road. As a kid, I was fascinated with technology. I learned programming at an early age and found myself naturally drawn to all things tech. I started my career as a database programmer at a technology marketing agency in Southern California. When I saw the chance to work with tech companies and help them better market their capabilities to businesses and consumers, I switched focus from programming to marketing and advertising. As technologist, my approach to marketing was different. I didn’t believe in hype, fluff or buzz words. I believed in translating features into benefits and specifications and capabilities into solutions for real world problems and opportunities. In the mid 90’s I experimented with direct to consumer/customer engagement in dedicated technology forums and boards. I quickly realized that the entire approach to do so would need to change. Therefore, I learned and developed new methods for a more social and informed way of engaging people in ways that helped them, marketed the company, and also tied to tangible benefits for the company. This work would lead me to start an agency in 1999 dedicated to interactive marketing. As I continued to experiment with interactive platforms, I developed interesting methods for converting one-to-many forms of media into one-to-one-to-many programs. I ran that company until joining Altimeter Group. Along the way, in the early 2000s, I realized that everything was changing and that there were others like me finding success in what would become a more social form of media. I dedicated a significant amount of my time to sharing everything that I learned in the form of articles, blogs, and eventually books. My mission became to share my experience with anyone who’d listen. It would later become much bigger than marketing, this would lead to a decade of work, that still continues, in business transformation. Then and now, I find myself always assuming the role of a student. Q: As an industry analyst & technology change evangelist, what are you primarily focused on these days? A: As a digital analyst, I study how disruptive technology impacts business. As an aspiring social scientist, I study how technology affects human behavior. I explore both horizons professionally and personally to better understand the future of popular culture and also the opportunities that exist for organizations to improve relationships and experiences with customers and the people that are important to them. Q: People cite that the line between work and life is getting more and more blurred. Do you see your personal life influencing your professional work? A: The line between work and life isn’t blurred it’s been overtly crossed and erased. We live in an always on society. The digital lifestyle keeps us connected to one another it keeps us connected all the time. Whether your sending or checking email, trying to catch up, or simply trying to get ahead, people are spending the equivalent of an extra day at work in the time they spend out of work…working. That’s absurd. It’s a matter of survival. It’s also a matter of unintended, subconscious self-causation. We brought this on ourselves and continue to do so. Think about your day. 
You’re in meetings for the better part of each day. You probably spend evenings and weekends catching up on email and actually doing the work you couldn’t get to during the day. And your co-workers and executives are doing the same thing. So if you try to slow down, you find yourself at a disadvantage as you’re willfully pulling yourself out of an unfortunate culture of whenever-wherever business dynamics. If you’re unresponsive or unreachable, someone else within your organization or on your team is accessible. Over time, this could contribute to unfavorable impressions. I choose to steer my life balance in ways that complement one another. But I don’t pretend to have this figured out by any means. In fact, I find myself swimming upstream like those around me. It’s essentially a competition for relevance, and at some point I’ll learn how to earn attention and relevance while redrawing the line between work and life. Q: How can people keep up with what you’re working on? A: The easy answer is that people can keep up with me at briansolis.com. But I also try to reach people where their attention is focused. Whether it’s Facebook (facebook.com/briansolis), Twitter (@briansolis), Google+ (+briansolis), YouTube (briansolis.tv) or through books and conferences, people can usually find me in a place of their choosing. Q: Recently, you’ve been working with us here at Oracle on something exciting coming up later this week. What’s on the horizon? A: I spent some time with the Oracle team reviewing the idea of Digital Darwinism and how technology and society are evolving faster than many organizations can adapt. Digital Darwinism: How Brands Can Survive the Rapid Evolution of Society and Technology Thursday, December 13, 2012, 10 a.m. PT / 1 p.m. ET Q: You’ve been very actively pursued for media interviews and conference and company speaking engagements – anything you’d like to share to give us a sneak peek of what to expect on Thursday’s webcast? A: We’re inviting guests to join us online as we dive into the future of business and how the convergence of technology and connected consumerism would ultimately impact how business is done. It’ll be an exciting and revealing conversation that explores just how much everything is changing. We’ll also review the importance of adapting to emergent trends and how to compete for the future. It’s important to recognize that change is not happening to us, it’s happening because of us. We are part of the revolution and therefore we need to help organizations adapt from the inside out. Watch the Entire Oracle Social Business Thought Leaders Webcast Series On-Demand and Stay Tuned for More to Come in 2013!

    Read the article

  • Beyond Chatting: What ‘Social’ Means for CRM

    - by Natalia Rachelson
    A guest post by Steve Diamond, Senior Director, Outbound Product Management, Oracle In a recent post on this blog, my colleague Steve Boese asked three questions related to the widespread popularity and incredibly rapid growth of Facebook, Pinterest, and LinkedIn. Steve then addressed the many applications for collaborative solutions in the area of Human Capital Management. So, in turning to a conversation about Customer Relationship Management (CRM) and Sales Force Automation (SFA), let me ask you one simple question. How many sales people, particularly at business-to-business companies, consistently meet or beat their quotas by working alone, with no collaboration among fellow sales people, sales executives, employees in product groups, in service, in Legal, third-party partners, etc.? Hello? Is anybody out there? What’s that cricket noise I hear? That’s correct. Nobody! When it comes to Sales, introverts arguably have a distinct disadvantage. While it’s certainly a truism that “success” in most professional endeavors requires working with people, it’s a mandatory success factor in Sales. This fact became abundantly clear to me one early morning in the late 1990s when I joined the former Hyperion Solutions (now part of Oracle) and attended a Sales Award Ceremony. The Head of Sales at that time gave out dozens of awards – none of them to individuals and all of them to TEAMS of individuals. That’s how it works in Sales. Your colleagues help provide you with product intelligence and competitive intelligence. They help you build the best presentations, pitches, and proposals. They help you develop the most killer RFPs. They align you with the best product people to ensure you’re matching the best products for the opportunity and join you in critical meetings. They help knock the socks off your prospects in “bake off” demos. They bring in the best partners to either add complementary products to your opportunity or help you implement a solution. They work with you as a collective team. And so how is all this collaboration STILL typically done today? Through email. And yet we all silently or not so silently grimace about email. It’s relatively siloed. It’s painful to search. It’s difficult to align by topic. And it’s nearly impossible to retrace meaningful and helpful conversations that occurred among a group or a team at some point in history. This is where social networking for Sales comes into play. It’s about PURPOSEFUL social networking versus chattering. What is purposeful social networking? It’s collaboration that’s built around opportunities, accounts, and contacts. It’s collaboration that delivers valuable context – on the target company, and on key competitors – just to name two examples. 
It’s collaboration that can scale to provide coaching for larger numbers of sales representatives, both for general purposes and, as we’ve largely discussed here, for specific ‘deals.’ And it’s collaboration that allows a team of people to collectively edit and iterate on a document like an RFP or a soon-to-be killer presentation that is maintained in a central repository, with no time wasted searching for it or worrying about version control. But lest we get carried away, let’s remember that collaboration “happens” among sales people whether there is specialized software to support it or not. The human practice of sales has not changed much in the last 80 to 90 years. Collaboration has been a mainstay during this entire time. But what social networking in general, and Oracle Social Network in particular, delivers is the opportunity for sales teams to dramatically increase their effectiveness and efficiency – to identify and close more high-quality and lucrative opportunities more quickly. For most sales organizations, this is how the game is won. To learn more, please visit Oracle Social Network and Oracle Fusion Customer Relationship Management on oracle.com.

    Read the article

  • SQL SERVER – Core Concepts – Elasticity, Scalability and ACID Properties – Exploring NuoDB an Elastically Scalable Database System

    - by pinaldave
    I have been recently exploring the Elasticity and Scalability attributes of databases. You can see that in my earlier blog posts about NuoDB where I wanted to look at Elasticity and Scalability concepts. The concepts are very interesting, and intriguing as well. I have discussed these concepts with my friend Joyti M and together we have come up with this interesting read. The goal of this article is to answer the following simple questions: What is Elasticity? What is Scalability? How do ACID properties vary from NOSQL concepts? What are the prevailing problems in the current database system architectures? Why is NuoDB an innovative and welcome change in the database paradigm? Elasticity This word’s original form is used in many different ways, and honestly it does do a decent job of holding things together over the years as a person grows and contracts. Within the tech world, and specifically related to software systems (databases, application servers), it has come to mean a few things: allowing resources to stretch on demand without reaching the breaking point. What are resources in this context? Resources are the usual suspects – RAM/CPU/IO/Bandwidth in the form of a container (a process or a bunch of processes combined as modules). When it is about increasing resources, the simplest idea which comes to mind is the addition of another container. Another container means adding a brand new physical node. When it is about adding a new node there are two questions which come to mind. 1) Can we add another node to our software system? 2) If yes, does adding a new node cause downtime for the system? Let us assume we have added a new node, and let us see what the new needs of the system are when a new node is added. Balancing incoming requests to multiple nodes Synchronization of a shared state across multiple nodes Identification of “downstate” and resolution action to bring it to “upstate” Well, adding a new node has its advantages as well. Here are a few of the positive points: Throughput can increase nearly horizontally across the nodes throughout the system Response times of the application will improve as in-between layer interactions are improved Now, let us put the above concepts in the perspective of a database. When we mention the term “running out of resources” or “application is bound to resources” the resources can be CPU, Memory or Bandwidth. The regular approach to “gain scalability” in the database is to look around for bottlenecks and increase the bottlenecked resource. When we have memory as a bottleneck we look at the data buffers, locks, query plans or indexes. After a point even this is not enough, as there needs to be an efficient way of managing such a large workload on a “single machine” across a memory- and CPU-bound (right kind of scheduling) workload. We next move on to either read/write separation of the workload or functionality-based sharding so that we still have control of the individual pieces. But this requires lots of planning and change in client systems in terms of knowing where to go/update/read and for reporting applications to “aggregate the data” in an intelligent way. What we ideally need is an intelligent layer which allows us to do these things without us getting into managing, monitoring and distributing the workload. Scalability In the context of databases/applications, scalability means three main things: Ability to handle normal loads without pressure, e.g. 
X users at the Y utilization of resources (CPU, Memory, Bandwidth) on the Z kind of hardware (4 processor, 32 GB machine with 15000 RPM SATA drives and 1 GHz Network switch) with T throughput Ability to scale up to an expected peak load, which is greater than the normal load, with acceptable response times Ability to provide acceptable response times across the system, e.g. response time in S milliseconds (or an agreed-upon unit of measure) 90% of the time The Issue – Need of Scale In normal cases one can plan for load testing to test out normal, peak, and stress scenarios to ensure specific hardware meets the needs. With help from hardware and software partners and best practices, bottlenecks can be identified and requisite resources added to the system. Unfortunately this vertical scale is expensive and difficult to achieve, and most operational people need the ability to scale horizontally. This helps in getting better throughput as there are physical limits in terms of adding resources (Memory, CPU, Bandwidth and Storage) indefinitely. Today we have different options to achieve scalability: Read & Write Separation The idea here is to do actual writes to one store and configure slaves to receive the latest data with acceptable delays. Slaves can be used for balancing out reads. We can also explore functional separation or sharding as well. We can separate data operations by a specific identifier (e.g. region, year, month) and consolidate it for reporting purposes. For functional separation the major disadvantage is when the schema or the workload pattern changes. As the requirement grows one still needs to deal with the need for scale in manual ways by providing an abstraction in the middle-tier code. Using NOSQL solutions The idea is to flatten out the structures in general to keep all values which are retrieved together at the same store and provide a flexible schema. The issue with these stores is that they mostly compromise on consistency (no ACID guarantees) and one has to use a non-SQL dialect to work with the store. The other major issue is about education with NOSQL solutions. Would one really want to make these compromises on the ability to connect and retrieve in a simple SQL manner and learn other skill sets? Or for that matter give up on ACID guarantees and start dealing with consistency issues? Hybrid Deployment – Mac, Linux, Cloud, and Windows One of the challenges today that we see across on-premise vs cloud infrastructure is a difference in abilities. Take for example SQL Azure – it is wonderful in its concepts of throttling resources (as it is a shared deployment) and its ability to scale using federation. However, the same abilities are not available on premise. This is not a mistake, mind you – but a compromise of the sweet spot of workloads, customer requirements and operational SLAs which can be supported by the team. In today’s world it is imperative that databases are available across operating systems – which are a commodity and used by developers of all hues. An Ideal Database Ability List A system which allows a linear scale of the system (increase in throughput with reasonable response time) with the addition of resources A system which does not compromise on the ACID guarantees or require developers to learn new paradigms A system which does not force fit a new way of interacting with the database by requiring a non-SQL dialect A system which does not force fit its mechanisms for providing availability across its various modules. 
Well NuoDB is the first database which has all of the above abilities and much more. In future articles I will cover my hands-on experience with it. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: NuoDB
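    To make the “Read & Write Separation” option above a bit more concrete, here is a minimal, database-agnostic sketch (not NuoDB-specific; the endpoint names are hypothetical placeholders) of a middle-tier router that sends writes to a primary store and balances reads across replica slaves:

```cpp
// Minimal sketch of read/write separation (not tied to any particular database):
// writes go to a single primary endpoint, reads rotate round-robin across replicas.
// Endpoint names are hypothetical placeholders for illustration only.
#include <cstddef>
#include <iostream>
#include <string>
#include <vector>

class ReadWriteRouter {
public:
    ReadWriteRouter(std::string primary, std::vector<std::string> replicas)
        : primary_(std::move(primary)), replicas_(std::move(replicas)) {}

    // Statements that change state always go to the primary.
    const std::string& routeWrite() const { return primary_; }

    // Read-only statements rotate across the replicas (or fall back to the primary).
    const std::string& routeRead() {
        if (replicas_.empty()) return primary_;
        const std::string& target = replicas_[next_];
        next_ = (next_ + 1) % replicas_.size();
        return target;
    }

private:
    std::string primary_;
    std::vector<std::string> replicas_;
    std::size_t next_ = 0;
};

int main() {
    ReadWriteRouter router("db-primary:5432", {"db-replica-1:5432", "db-replica-2:5432"});
    std::cout << "INSERT goes to " << router.routeWrite() << "\n";
    std::cout << "SELECT goes to " << router.routeRead() << "\n";
    std::cout << "SELECT goes to " << router.routeRead() << "\n";
}
```

    In a real deployment this routing layer is exactly the kind of manual middle-tier abstraction the article argues an elastic database should make unnecessary.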

    Read the article

  • Computer Networks UNISA - Chap 10 – In Depth TCP/IP Networking

    - by MarkPearl
    After reading this section you should be able to Understand methods of network design unique to TCP/IP networks, including subnetting, CIDR, and address translation Explain the differences between public and private TCP/IP networks Describe protocols used between mail clients and mail servers, including SMTP, POP3, and IMAP4 Employ multiple TCP/IP utilities for network discovery and troubleshooting Designing TCP/IP-Based Networks The following sections explain how network and host information in an IPv4 address can be manipulated to subdivide networks into smaller segments. Subnetting Subnetting separates a network into multiple logically defined segments, or subnets. Networks are commonly subnetted according to geographic locations, departmental boundaries, or technology types. A network administrator might separate traffic to accomplish the following… Enhance security Improve performance Simplify troubleshooting The challenges of Classful Addressing in IPv4 (No subnetting) The simplest type of IPv4 addressing is known as classful addressing (the Class A, Class B & Class C network addresses). Classful addressing has the following limitations. Restriction in the number of usable IPv4 addresses (a Class C network would be limited to 254 addresses) Difficulty separating traffic from various parts of a network Because of the above reasons, subnetting was introduced. IPv4 Subnet Masks Subnetting depends on the use of subnet masks to identify how a network is subdivided. A subnet mask indicates where network information is located in an IPv4 address. A 1 in a subnet mask indicates that the corresponding bit in the IPv4 address contains network information (likewise a 0 indicates the opposite). Each network class is associated with a default subnet mask… Class A = 255.0.0.0 Class B = 255.255.0.0 Class C = 255.255.255.0 An example of calculating the network ID for a particular device with a subnet mask is shown below… IP Address = 199.34.89.127 Subnet Mask = 255.255.255.0 Resultant Network ID = 199.34.89.0 IPv4 Subnetting Techniques Subnetting breaks the rules of classful IPv4 addressing. Read page 490 for a detailed explanation. Calculating IPv4 Subnets Read pages 491 – 494 for an explanation. Important… Subnetting only applies to the devices internal to your network. Everything external looks at the class of the IP address instead of the subnet network ID. This way, traffic directed to your network externally still knows where to go, and once it has entered your internal network it can then be prioritized and segmented. CIDR (Classless Interdomain Routing) CIDR is also known as classless routing or supernetting. In CIDR, conventional network class distinctions do not exist; a subnet boundary can move to the left, therefore generating more usable IP addresses on your network. A subnet created by moving the subnet boundary to the left is known as a supernet. With CIDR also came a new shorthand for denoting the position of subnet boundaries, known as CIDR notation or slash notation. CIDR notation takes the form of the network ID followed by a forward slash (/) followed by the number of bits that are used for the extended network prefix. To take advantage of classless routing, your network’s routers must be able to interpret IP addresses that don’t adhere to conventional network class parameters. Routers that rely on older routing protocols (i.e. RIP) are not capable of interpreting classless IP addresses. 
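    To make the subnet-mask arithmetic above concrete, here is a small sketch (plain C++, no networking library assumed) that derives the network ID by ANDing an IPv4 address with its subnet mask and prints the result in CIDR/slash notation; it reproduces the 199.34.89.127 / 255.255.255.0 example from the text:

```cpp
// Sketch: derive the network ID by ANDing an IPv4 address with its subnet mask,
// then print it in CIDR (slash) notation. Reproduces the 199.34.89.127 example.
#include <array>
#include <bitset>
#include <cstdint>
#include <iostream>
#include <sstream>
#include <string>

static std::uint32_t parseIPv4(const std::string& dotted) {
    std::array<int, 4> octets{};
    char dot;
    std::istringstream in(dotted);
    in >> octets[0] >> dot >> octets[1] >> dot >> octets[2] >> dot >> octets[3];
    return (std::uint32_t(octets[0]) << 24) | (std::uint32_t(octets[1]) << 16) |
           (std::uint32_t(octets[2]) << 8)  |  std::uint32_t(octets[3]);
}

static std::string toDotted(std::uint32_t ip) {
    std::ostringstream out;
    out << ((ip >> 24) & 0xFF) << '.' << ((ip >> 16) & 0xFF) << '.'
        << ((ip >> 8) & 0xFF)  << '.' << (ip & 0xFF);
    return out.str();
}

int main() {
    std::uint32_t address = parseIPv4("199.34.89.127");
    std::uint32_t mask    = parseIPv4("255.255.255.0");

    std::uint32_t networkId = address & mask;            // keep only the bits marked 1 in the mask
    int prefixLength = std::bitset<32>(mask).count();    // number of network bits (slash value)

    std::cout << toDotted(networkId) << "/" << prefixLength << "\n";  // prints 199.34.89.0/24
}
```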
Internet Gateways Gateways are a combination of software and hardware that enable two different network segments to exchange data. A gateway facilitates communication between different networks or subnets. Because one device cannot send data directly to a device on another subnet, a gateway must intercede and hand off the information. Every device on a TCP/IP-based network has a default gateway (a gateway that first interprets its outbound requests to other subnets, and then interprets its inbound requests from other subnets). The internet contains a vast number of routers and gateways. If each gateway had to track addressing information for every other gateway on the Internet, it would be overtaxed. Instead, each handles only a relatively small amount of addressing information, which it uses to forward data to another gateway that knows more about the data’s destination. The gateways that make up the internet backbone are called core gateways. Address Translation An organization’s default gateway can also be used to “hide” the organization’s internal IP addresses and keep them from being recognized on a public network. A public network is one that any user may access with little or no restrictions. On private networks, hiding IP addresses allows network managers more flexibility in assigning addresses. Clients behind a gateway may use any IP addressing scheme, regardless of whether it is recognized as legitimate by the Internet authorities, but as soon as those devices need to go on the internet, they must have legitimate IP addresses to exchange data. When a client’s transmission reaches the default gateway, the gateway opens the IP datagram and replaces the client’s private IP address with an Internet-recognized IP address. This process is known as NAT (Network Address Translation). TCP/IP Mail Services All Internet mail services rely on the same principles of mail delivery, storage, and pickup, though they may use different types of software to accomplish these functions. Email servers and clients communicate through special TCP/IP application-layer protocols. These protocols, all of which operate on a variety of operating systems, are discussed below… SMTP (Simple Mail Transfer Protocol) The protocol responsible for moving messages from one mail server to another over TCP/IP-based networks. SMTP belongs to the application layer of the OSI model and relies on TCP as its transport protocol. Operates from port 25 on the SMTP server A simple sub-protocol, incapable of doing anything more than transporting mail or holding it in a queue MIME (Multipurpose Internet Mail Extensions) The standard message format specified by SMTP allows for lines that contain no more than 1,000 ASCII characters, meaning that if you relied solely on SMTP you would have very short messages and nothing like pictures included in an email. MIME is a standard for encoding and interpreting binary files, images, video, and non-ASCII character sets within an email message. MIME identifies each element of a mail message according to content type. MIME does not replace SMTP but works in conjunction with it. 
Most modern email clients and servers support MIME. POP (Post Office Protocol) POP is an application-layer protocol used to retrieve messages from a mail server POP3 relies on TCP and operates over port 110 With POP3, mail is delivered to and stored on a mail server until it is downloaded by a user The disadvantage of POP3 is that it typically does not allow users to save their messages on the server; because of this, IMAP is sometimes used IMAP (Internet Message Access Protocol) IMAP is a retrieval protocol that was developed as a more sophisticated alternative to POP3 The single biggest advantage IMAP4 has over POP3 is that users can store messages on the mail server, rather than having to continually download them Users can retrieve all or only a portion of any mail message Users can review their messages and delete them while the messages remain on the server Users can create sophisticated methods of organizing messages on the server Users can share a mailbox in a central location The disadvantages of IMAP are typically related to the fact that it requires more storage space on the server. Additional TCP/IP Utilities Nearly all TCP/IP utilities can be accessed from the command prompt on any type of server or client running TCP/IP. The syntax may differ depending on the OS of the client. Below is a list of additional TCP/IP utilities – research their use on your own! Ipconfig (Windows) & Ifconfig (Linux) Netstat Nbtstat Hostname, Host & Nslookup Dig (Linux) Whois (Linux) Traceroute (Tracert) Mtr (my traceroute) Route
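    Returning to the Address Translation section above, the following is a deliberately simplified sketch of the NAT hand-off at the default gateway: a one-to-one table maps private source addresses to public ones on the way out and back again for replies (real gateways typically also rewrite ports; all addresses shown are illustrative):

```cpp
// Simplified sketch of the NAT hand-off described above: the gateway rewrites a
// client's private source address to a public one before forwarding, and keeps a
// translation table so replies can be mapped back to the private client.
#include <iostream>
#include <map>
#include <string>
#include <vector>

class NatGateway {
public:
    explicit NatGateway(std::vector<std::string> publicPool) : pool_(std::move(publicPool)) {}

    // Outbound: replace the private source address with an Internet-recognized one.
    std::string translateOutbound(const std::string& privateAddr) {
        auto it = table_.find(privateAddr);
        if (it == table_.end()) {
            const std::string publicAddr = pool_.at(table_.size());  // naive pool allocation
            it = table_.emplace(privateAddr, publicAddr).first;
            reverse_.emplace(publicAddr, privateAddr);
        }
        return it->second;
    }

    // Inbound: map the public destination address back to the private client.
    std::string translateInbound(const std::string& publicAddr) const {
        return reverse_.at(publicAddr);
    }

private:
    std::vector<std::string> pool_;
    std::map<std::string, std::string> table_;    // private -> public
    std::map<std::string, std::string> reverse_;  // public -> private
};

int main() {
    NatGateway gateway({"203.0.113.10", "203.0.113.11"});
    std::string publicAddr = gateway.translateOutbound("192.168.1.25");
    std::cout << "192.168.1.25 appears externally as " << publicAddr << "\n";
    std::cout << "Replies to " << publicAddr << " return to "
              << gateway.translateInbound(publicAddr) << "\n";
}
```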

    Read the article

  • Organization & Architecture UNISA Studies – Chap 4

    - by MarkPearl
    Learning Outcomes Explain the characteristics of memory systems Describe the memory hierarchy Discuss cache memory principles Discuss issues relevant to cache design Describe the cache organization of the Pentium Computer Memory Systems There are key characteristics of memory… Location – internal or external Capacity – expressed in terms of bytes Unit of Transfer – the number of bits read out of or written into memory at a time Access Method – sequential, direct, random or associative From a user’s perspective the two most important characteristics of memory are… Capacity Performance – access time, memory cycle time, transfer rate The trade-off for memory happens along three axes… Faster access time, greater cost per bit Greater capacity, smaller cost per bit Greater capacity, slower access time This leads to people using a tiered approach in their use of memory. As one goes down the hierarchy, the following occurs… Decreasing cost per bit Increasing capacity Increasing access time Decreasing frequency of access of the memory by the processor The use of two levels of memory to reduce average access time works in principle, but only if conditions 1 to 4 apply. A variety of technologies exist that allow us to accomplish this. Thus it is possible to organize data across the hierarchy such that the percentage of accesses to each successively lower level is substantially less than that of the level above. A portion of main memory can be used as a buffer to temporarily hold data that is to be read out to disk. This is sometimes referred to as a disk cache and improves performance in two ways… Disk writes are clustered. Instead of many small transfers of data, we have a few large transfers of data. This improves disk performance and minimizes processor involvement. Some data designed for write-out may be referenced by a program before the next dump to disk. In that case the data is retrieved rapidly from the software cache rather than slowly from disk. Cache Memory Principles Cache memory is substantially faster than main memory. A caching system works as follows… When a processor attempts to read a word of memory, a check is made to see if it is in cache memory. If it is, the data is supplied. If it is not in the cache, a block of main memory, consisting of a fixed number of words, is loaded into the cache. Because of the phenomenon of locality of reference, when a block of data is fetched into the cache, it is likely that there will be future references to that same memory location or to other words in the block. Elements of Cache Design While there are a large number of cache implementations, there are a few basic design elements that serve to classify and differentiate cache architectures… Cache Addresses Cache Size Mapping Function Replacement Algorithm Write Policy Line Size Number of Caches Cache Addresses Almost all non-embedded processors support virtual memory. Virtual memory in essence allows a program to address memory from a logical point of view without needing to worry about the amount of physical memory available. When virtual addresses are used the designer may choose to place the cache between the MMU (memory management unit) and the processor or between the MMU and main memory. 
The disadvantage of virtual memory is that most virtual memory systems supply each application with the same virtual memory address space (each application sees virtual memory starting at memory address 0), which means the cache memory must be completely flushed with each application context switch, or extra bits must be added to each line of the cache to identify which virtual address space the address refers to. Cache Size We would like the size of the cache to be small enough so that the overall average cost per bit is close to that of main memory alone, and large enough so that the overall average access time is close to that of the cache alone. Also, larger caches are slightly slower than smaller ones. Mapping Function Because there are fewer cache lines than main memory blocks, an algorithm is needed for mapping main memory blocks into cache lines. The choice of mapping function dictates how the cache is organized. Three techniques can be used… Direct – the simplest technique; maps each block of main memory into only one possible cache line Associative – permits each main memory block to be loaded into any line of the cache Set Associative – exhibits the strengths of both the direct and associative approaches while reducing their disadvantages For detailed explanations of each approach, read the textbook (pages 148 – 154). Replacement Algorithm For associative and set-associative mapping, a replacement algorithm is needed to determine which of the existing blocks in the cache must be replaced by a new block. There are four common approaches… LRU (Least recently used) FIFO (First in first out) LFU (Least frequently used) Random selection Write Policy When a block resident in the cache is to be replaced, there are two cases to consider… If no writes to that block have happened in the cache – discard it If a write has occurred, a process needs to be initiated where the changes in the cache are propagated back to the main memory. There are several approaches to achieve this, including… Write Through – all writes to the cache are done to the main memory as well at the point of the change Write Back – when a block is replaced, all dirty bits are written back to main memory The problem is complicated when we have multiple caches; there are techniques to accommodate for this but I have not summarized them. Line Size When a block of data is retrieved and placed in the cache, not only the desired word but also some number of adjacent words are retrieved. As the block size increases from very small to larger sizes, the hit ratio will at first increase because of the principle of locality, which states that the data in the vicinity of a referenced word are likely to be referenced in the near future. As the block size increases, more useful data are brought into the cache. The hit ratio will begin to decrease as the block becomes even bigger and the probability of using the newly fetched information becomes less than the probability of reusing the information that has to be replaced. Two specific effects come into play… Larger blocks reduce the number of blocks that fit into a cache. Because each block fetch overwrites older cache contents, a small number of blocks results in data being overwritten shortly after they are fetched. As a block becomes larger, each additional word is farther from the requested word and therefore less likely to be needed in the near future. The relationship between block size and hit ratio is complex, and no set approach is judged to be the best in all circumstances.   
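    As a concrete illustration of the direct mapping technique listed above, the sketch below splits an address into tag, line index, and word offset and reports hits and misses; the cache geometry (16 lines of 4 words each) is an arbitrary choice for illustration:

```cpp
// Sketch of a direct-mapped cache lookup: an address is split into
// tag | line index | word offset, and each memory block maps to exactly one line.
// The geometry (16 lines, 4 words per line) is an arbitrary illustrative choice.
#include <array>
#include <cstdint>
#include <iostream>

struct CacheLine {
    bool valid = false;
    std::uint32_t tag = 0;
};

class DirectMappedCache {
    static constexpr unsigned kLines = 16;        // 4 index bits
    static constexpr unsigned kWordsPerLine = 4;  // 2 offset bits
    std::array<CacheLine, kLines> lines_{};

public:
    // Returns true on a hit; on a miss, the block is loaded into its only possible line.
    bool access(std::uint32_t address) {
        std::uint32_t block = address / kWordsPerLine;   // drop the word offset
        std::uint32_t index = block % kLines;            // the one line this block may occupy
        std::uint32_t tag   = block / kLines;            // identifies which block sits in that line

        CacheLine& line = lines_[index];
        bool hit = line.valid && line.tag == tag;
        if (!hit) { line.valid = true; line.tag = tag; } // simple fill/replace on miss
        return hit;
    }
};

int main() {
    DirectMappedCache cache;
    for (std::uint32_t addr : {0u, 1u, 64u, 0u}) {       // addresses 0 and 64 collide on line 0
        std::cout << "address " << addr << (cache.access(addr) ? ": hit\n" : ": miss\n");
    }
}
```

    The final access to address 0 misses because address 64 maps to the same line and evicted it – the conflict behaviour that associative and set-associative mappings are designed to reduce.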
Pentium 4 and ARM cache organizations The processor core consists of four major components: Fetch/decode unit – fetches program instructions in order from the L2 cache, decodes these into a series of micro-operations, and stores the results in the L1 instruction cache Out-of-order execution logic – schedules execution of the micro-operations subject to data dependencies and resource availability; thus micro-operations may be scheduled for execution in a different order than they were fetched from the instruction stream. As time permits, this unit schedules speculative execution of micro-operations that may be required in the future Execution units – these units execute micro-operations, fetching the required data from the L1 data cache and temporarily storing results in registers Memory subsystem – this unit includes the L2 and L3 caches and the system bus, which is used to access main memory when the L1 and L2 caches have a cache miss and to access the system I/O resources

    Read the article

  • Architecture for Qt SIGNAL with subclass-specific, templated argument type

    - by Barry Wark
    I am developing a scientific data acquisition application using Qt. Since I'm not a deep expert in Qt, I'd like some architecture advice from the community on the following problem: The application supports several hardware acquisition interfaces but I would like to provide a common API on top of those interfaces. Each interface has a sample data type and a unit for its data. So I'm representing a vector of samples from each device as a std::vector of Boost.Units quantities (i.e. std::vector<boost::units::quantity<unit,sample_type> >). I'd like to use a multi-cast style architecture, where each data source broadcasts newly received data to one or more interested parties. Qt's Signal/Slot mechanism is an obvious fit for this style. So, I'd like each data source to emit a signal like typedef std::vector<boost::units::quantity<unit,sample_type> > SampleVector; signals: void samplesAcquired(SampleVector sampleVector); for the unit and sample_type appropriate for that device. Since templated QObject subclasses aren't supported by the meta-object compiler, there doesn't seem to be a way to have a (templated) base class for all data sources which defines the samplesAcquired signal. In other words, the following won't work: template<typename T, typename U> //sample type and units class DataSource : public QObject { Q_OBJECT ... public: typedef std::vector<boost::units::quantity<U,T> > SampleVector; signals: void samplesAcquired(SampleVector sampleVector); }; The best option I've been able to come up with is a two-layered approach: template<typename T, typename U> //sample type and units class IAcquiredSamples { public: typedef std::vector<boost::units::quantity<U,T> > SampleVector; virtual shared_ptr<SampleVector> acquiredData(TimeStamp ts, unsigned long nsamples); }; class DataSource : public QObject { ... signals: void samplesAcquired(TimeStamp ts, unsigned long nsamples); }; The samplesAcquired signal now gives a timestamp and number of samples for the acquisition, and clients must use the IAcquiredSamples API to retrieve those samples. Obviously data sources must subclass both DataSource and IAcquiredSamples. The disadvantage of this approach appears to be a loss of simplicity in the API... it would be much nicer if clients could get the acquired samples in the connected slot. Being able to use Qt's queued connections would also make threading issues easier, instead of having to manage them in the acquiredData method within each subclass. One other possibility is to use a QVariant argument. This necessarily puts the onus on the subclass to register its particular sample vector type with Q_DECLARE_METATYPE/qRegisterMetaType. Not really a big deal. Clients of the base class, however, will have no way of knowing what the QVariant's value type is unless a tag struct is also passed with the signal. I consider this solution at least as convoluted as the one above, as it forces clients of the abstract base class API to deal with some of the gnarlier aspects of the type system. So, is there a way to achieve the templated signal parameter? Is there a better architecture than the one I've proposed?
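    One way to picture the two-layered approach described above is the plain-C++ sketch below. Qt's machinery (Q_OBJECT, moc, queued connections) and Boost.Units are deliberately omitted so the sketch stays self-contained: a std::function stands in for the samplesAcquired signal, and a plain SampleType stands in for quantity<unit,sample_type>.

```cpp
// Plain-C++ sketch of the two-layer idea from the question: the non-template base
// only announces that samples arrived (timestamp + count), while the templated
// interface hands back the typed data. Qt and Boost.Units are omitted here;
// the std::function member is a stand-in for the Qt signal.
#include <functional>
#include <iostream>
#include <memory>
#include <vector>

using TimeStamp = unsigned long long;

// Non-template layer: in the real design this is the QObject with the signal.
class DataSource {
public:
    virtual ~DataSource() = default;
    std::function<void(TimeStamp, unsigned long)> samplesAcquired;  // stand-in for the signal
protected:
    void notify(TimeStamp ts, unsigned long n) { if (samplesAcquired) samplesAcquired(ts, n); }
};

// Templated layer: typed retrieval of what the notification announced.
template <typename SampleType>
class IAcquiredSamples {
public:
    using SampleVector = std::vector<SampleType>;
    virtual ~IAcquiredSamples() = default;
    virtual std::shared_ptr<SampleVector> acquiredData(TimeStamp ts, unsigned long nsamples) = 0;
};

// A concrete source mixes both layers together.
class VoltageSource : public DataSource, public IAcquiredSamples<double> {
public:
    void acquire(TimeStamp ts, SampleVector samples) {
        latest_ = std::make_shared<SampleVector>(std::move(samples));
        notify(ts, static_cast<unsigned long>(latest_->size()));
    }
    std::shared_ptr<SampleVector> acquiredData(TimeStamp, unsigned long) override { return latest_; }
private:
    std::shared_ptr<SampleVector> latest_;
};

int main() {
    VoltageSource source;
    source.samplesAcquired = [&source](TimeStamp ts, unsigned long n) {
        auto data = source.acquiredData(ts, n);           // the "pull" step the question dislikes
        std::cout << "got " << data->size() << " samples at t=" << ts << "\n";
    };
    source.acquire(42, {1.0, 2.0, 3.0});
}
```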

    Read the article

< Previous Page | 5 6 7 8 9 10  | Next Page >