Search Results

Search found 2499 results on 100 pages for 'meta'.


  • Should I cache the data or hit the database?

    - by JD01
    I have not worked with any caching mechanisms and was wondering what my options are in the .NET world for the following scenario. We basically have a REST service where the user passes in the ID of a category (think folder); this category may have lots of sub-categories, and each of the sub-categories could have 1000s of media containers (think file-reference objects) which hold information about a file that may be on a NAS or SAN server (the files are videos in this case). The relationships between these categories are stored in a database, together with some permission rules and meta data about the sub-categories. So from a UI perspective we have a lazy-loaded tree control which is driven by the user clicking on each sub-folder (think of Windows Explorer). Once they come to a URL of the video file, they can then watch the video. The number of users could grow into the 1000s, and the sub-categories and videos could be in the 10000s as the system grows. The question is: should we carry on the way it is currently working, where each request hits the database, or should we think about caching the data? We are using IIS 6/7 and ASP.NET.
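
    A minimal sketch of the in-memory option, assuming .NET 4's System.Runtime.Caching is available (the Category type, the cache-key format and the 10-minute expiry are illustrative assumptions):

        using System;
        using System.Runtime.Caching;

        public class Category
        {
            public int Id;
            public string Name;
            // sub-categories, media containers, permission rules, etc. would go here
        }

        public static class CategoryCache
        {
            private static readonly MemoryCache Cache = MemoryCache.Default;

            public static Category GetCategory(int id, Func<int, Category> loadFromDb)
            {
                string key = "category:" + id;
                var cached = Cache.Get(key) as Category;
                if (cached != null)
                    return cached;                 // cache hit: no database round trip

                var category = loadFromDb(id);     // cache miss: hit the database once
                Cache.Set(key, category, new CacheItemPolicy
                {
                    // expire entries so permission/metadata changes show up eventually
                    AbsoluteExpiration = DateTimeOffset.Now.AddMinutes(10)
                });
                return category;
            }
        }

    On IIS 6 with .NET 2.0/3.5 the equivalent would be System.Web.Caching.Cache (HttpRuntime.Cache), which follows the same get-or-load pattern.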

    Read the article

  • ADF Essentials - Available for free and certified on GlassFish!

    - by delabassee
    If you are an Oracle customer, you are probably familiar with Oracle ADF (Application Development Framework). If you are not, ADF is, in a nutshell, a Java EE based framework that simplifies the development of enterprise applications. It is the development framework that was used, among other things, to build Oracle Fusion Applications. Oracle has just released ADF Essentials, a free-to-develop-and-deploy version of Oracle ADF's core technologies. And as good news never comes alone, GlassFish 3.1.2 is now a certified container for ADF Essentials! ADF Essentials leverages core ADF features and includes:

        - Oracle ADF Faces - a set of more than 150 JSF 2.0 rich components that simplify the creation of rich web user interfaces (charting, data visualization, advanced tables, drag and drop, touch gesture support, extensive windowing capabilities, etc.)
        - Oracle ADF Controller - an extension of the JSF controller that helps build reusable process flows and provides the ability to create dynamic regions within web pages.
        - Oracle ADF Binding - an XML-based, meta-data abstraction layer to connect user interfaces to business services.
        - Oracle ADF Business Components - a declaratively-configured layer that simplifies developing business services against relational databases by providing reusable components that implement common design patterns.

    ADF is a highly declarative framework and has always had very good tooling support. Visual development for Oracle ADF Essentials is provided in Oracle JDeveloper 11.1.2.3; Eclipse support is planned for a later OEPE (Oracle Enterprise Pack for Eclipse) release. Here are some relevant links to quickly learn how to use ADF Essentials on GlassFish:

        - Video: Oracle ADF Essentials Overview and Demo
        - Deploying Oracle ADF Essentials Applications to GlassFish
        - OTN: Oracle ADF Essentials Resources

    Read the article

  • How can Highscores be more meaningful and engaging?

    - by Anselm Eickhoff
    I'm developing a casual Android game in which the player's success can very easily be represented by a number (I'm not more specific because I'm interested in the topic in general). Although I myself am not a highscore person at all, I was thinking of implementing a highscore for that game, but I see at least 2 problems in the classical leaderboard approach:

        - very soon the highscore will be dominated by hardcore players, leaving no chance for beginners, who are then frustrated. This is especially severe in casual games.
        - there is no direct reward for being a loyal player who plays the game over and over again

    My current idea is to "reset" the highscore every 24 hours (for example) and each day nominate the "player of the day", who then gets a "star". Then there would be some kind of meta-highscore of players with the most stars. That way even beginners might have a chance to be "player of the day" once, and continued or repeated play is rewarded much more. The idea is still very rough and there are many problems in the details and the technical implementation, but I have a feeling it is a step in the right direction. Do you have creative and new ideas on how to implement highscores? Which games are doing this well / what types of highscores do you find most engaging?

    Read the article

  • How can I avoid a 302 for Fetch as Bot?

    - by CookieMonster
    I originally posted this on Stack Overflow, but I believe here is a better place to ask. My web application is very similar to notepad.cc, which redirects to a randomly generated URL upon access, e.g. http://myapp.com/roTr94h4Gd. (Please note that notepad.cc is not my site.) Probably because of this redirect feature, when I do "fetch as Google" or "fetch as Bingbot", I get a 302 and no HTML content. Not even an <html></html> tag.

        HTTP/1.1 302 Moved Temporarily
        Server: nginx/1.4.1
        Date: Tue, 01 Oct 2013 04:37:37 GMT
        Content-Type: text/html
        Transfer-Encoding: chunked
        Connection: keep-alive
        X-Powered-By: PHP/5.4.17-1~dotdeb.1
        Set-Cookie: PHPSESSID=vp99q5e5t5810e3bnnnvi6sfo2; expires=Thu, 03-Oct-2013 04:37:37 GMT; path=/
        Expires: Thu, 19 Nov 1981 08:52:00 GMT
        Cache-Control: no-store, no-cache, must-revalidate, post-check=0, pre-check=0
        Pragma: no-cache
        Location: /roTr94h4Gd

    How should I avoid the 302 in this case? I suppose I could modify my site to prevent the redirect, but generating a random URL on each access is a necessary feature of my web app. I added a <meta name="fragment" content="!"> tag to my index page and set it to return a static snapshot of the page when the flag is set, but this still returns a 302. I also added a header to return 200 before redirecting, but this had no effect either. Could someone suggest a good way to solve this problem?

    Read the article

  • Supporting and testing multiple versions of a software library in a Maven project

    - by Duncan Jones
    My company has several versions of its software in use by our customers at any one time. My job is to write bespoke Java software for the customers based on the version of software they happen to be running. I've created a Java library that performs many of the tasks I regularly require in a normal project. This is a Maven project that I deploy to our local Artifactory and pull down into other Maven projects when required. I can't decide the best way to support the range of software versions used by our customers. Typically, we have about three versions in use at any one time. They are normally backwards compatible with one another, but that cannot be guaranteed. I have considered the following options for managing this issue:

        - Separate editions for each library version: I make a separate release of my library for each version of my company software. Using some Maven cunningness I could automatically produce a tested version linked to each of the then-current company software versions. This is feasible, but not without its technical challenges. The advantage is that this would be fairly automatic and my unit tests have definitely executed against the correct software version. However, I would have to keep updating the versions supported and may end up maintaining a large collection of libraries.

        - One supported version, but others tested: I support the oldest software version and make a release against that. I then perform tests with the newer software versions to ensure it still works. I could try to make this testing automatic by having some non-deployed Maven projects that import the software library and the associated test JAR, and override the company software version used (sketched below). If those projects build, then the library is compatible. I could ensure these meta-projects are included in our CI server builds.

    I welcome comments on which approach is better, or a suggestion for a different approach entirely. I'm leaning towards the second option.
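
    A sketch of what one such non-deployed "meta-project" POM might look like, assuming the library publishes a test JAR and the company software is itself a Maven dependency (all coordinates are hypothetical):

        <project xmlns="http://maven.apache.org/POM/4.0.0">
          <modelVersion>4.0.0</modelVersion>
          <groupId>com.example</groupId>
          <artifactId>compat-test-v3</artifactId>
          <version>1.0-SNAPSHOT</version>

          <dependencies>
            <dependency>
              <groupId>com.example</groupId>
              <artifactId>my-library</artifactId>
              <version>1.0.0</version>
            </dependency>
            <!-- the library's own tests, re-run against a newer company version -->
            <dependency>
              <groupId>com.example</groupId>
              <artifactId>my-library</artifactId>
              <version>1.0.0</version>
              <type>test-jar</type>
              <scope>test</scope>
            </dependency>
            <!-- override: the newer company software version under test -->
            <dependency>
              <groupId>com.example</groupId>
              <artifactId>company-software</artifactId>
              <version>3.2.0</version>
            </dependency>
          </dependencies>
        </project>

    One of these per company version, all attached to the CI build, would give a red/green compatibility signal without publishing extra library editions.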

    Read the article

  • Code Clone Analysis on Rawr – Part 1

    - by Dylan Smith
    In this post we’ll take a look at the first result from the Code Clone Analysis, and do some refactoring to eliminate the duplication. The first result indicated that it found an exact match repeated 14 times across the solution, with 18 lines of duplicated code in each of the 14 blocks.

    Net Lines Of Code Deleted: 179

    In this case the code in question was a bunch of classes representing the various Bosses. Every Boss class has a constructor that initializes a whole bunch of properties of that boss; however, for most bosses a lot of these are simply set to 0’s. Every Boss class inherits from the class MultiDiffBoss, so I simply moved all the initialization of the various properties to the base class constructor, and left it up to the Boss subclasses to only set those that differ from the default values. In this case there are actually 22 Boss subclasses; however, due to some inconsistencies in the code structure, Code Clone only identified 14 of them as identical blocks. Since I was in there refactoring the 14 already identified, it was pretty straightforward to find the other 8 subclasses that had the same duplicated behavior and refactor those also.

    Note: Code Clone Analysis is pretty slow right now. It takes approx 1 min to build this solution, but 9 mins to run Code Clone Analysis. Personally, if the results are high quality I’m OK with it taking a long time to run, since I don’t expect it’s something I would be running all that often. However, it would be nice to be able to run it as part of a nightly build, but at this time I don’t believe it’s possible to run it outside of Visual Studio, due to a dependency on the meta-data available in the VS environment.
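
    A rough sketch of the shape of that refactoring (the property names are hypothetical; Rawr's actual MultiDiffBoss members differ):

        public abstract class MultiDiffBoss
        {
            public float Health { get; set; }
            public float AttackSpeed { get; set; }
            public float FireDamage { get; set; }

            protected MultiDiffBoss()
            {
                // the defaults formerly repeated in every Boss constructor live here now
                Health = 0f;
                AttackSpeed = 0f;
                FireDamage = 0f;
            }
        }

        public class ExampleBoss : MultiDiffBoss
        {
            public ExampleBoss()
            {
                // subclasses only set the values that differ from the defaults
                Health = 1000000f;
            }
        }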

    Read the article

  • Disallow robots.txt from being accessed in a browser but still accessible by spiders?

    - by Michael Irigoyen
    We make use of the robots.txt file to prevent Google (and other search spiders) from crawling certain pages/directories on our domain. Some of these directories/files are secret, meaning they aren't linked (except perhaps on other pages encompassed by the robots.txt file). Some aren't secret; we just don't want them indexed. If somebody browses directly to www.mydomain.com/robots.txt, they can see its contents. From a security standpoint, this is not something we want publicly available to anybody. Any directories that contain secure information are set behind authentication, but we still don't want them to be discoverable unless the user specifically knows about them. Is there a way to provide a robots.txt file but have its presence masked from John Doe accessing it from his browser? Perhaps by using PHP to generate the document based on certain criteria? Perhaps something I'm not thinking of? We'd prefer a way to do it centrally (meaning a <meta> tag solution is less than ideal).
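
    One way to sketch the idea, assuming an Apache front end and that matching on User-Agent is acceptable (the crawler names are illustrative, and a spoofed User-Agent defeats this):

        # .htaccess sketch: hide robots.txt from ordinary browsers,
        # serve it only when the User-Agent looks like a known crawler
        RewriteEngine On
        RewriteCond %{HTTP_USER_AGENT} !(googlebot|bingbot|slurp) [NC]
        RewriteRule ^robots\.txt$ - [F,L]

    Note this is obscurity, not security: anyone can request the file with a crawler User-Agent, so the authentication on the secure directories remains the real protection.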

    Read the article


  • Is it possible to use a VB master page to cover an entirely separate directory written in C#?

    - by Jason Weber
    I have a company website written in VB.NET. There are 5 master pages. I recently began utilizing a forum application, also ASP.NET 4.0, but this one is written in C#. My forum directory is domain.com/knowledgebase/. Is there any possible way to take one of my VB.NET master pages and somehow integrate it into the /knowledgebase/ directory? Here's what's currently in place. This is what's in the top of every page in my site:

        <%@ Page Title="USS Vision Inc." Language="VB" MasterPageFile="~/homepage.master" AutoEventWireup="false" CodeFile="default.aspx.vb" Inherits="_default" culture="auto" meta:resourcekey="PageResource1" uiculture="auto" Debug="true" %>

    This is what's in my /knowledgebase/ directory:

        <%@ Page Language="C#" AutoEventWireup="true" ValidateRequest="false" Inherits="YAF.ForumPageBase" culture="auto" uiculture="auto" %>
        <%@ Register TagPrefix="YAF" Assembly="YAF" Namespace="YAF" %>
        <script runat="server">

    Is it somehow possible to use, for instance, homepage.master in the /knowledgebase/ directory? If so, how would I accomplish this? Thanks for any guidance anybody can offer!

    Read the article

  • I can't program because the code I am using uses old coding styles. Is this normal for programmers? [closed]

    - by Renato Dinhani Conceição
    I'm in my first real job as a programmer, but I can't solve any problems because of the coding style used. The code here:

        - Does not have comments
        - Does not have functions (50, 100, 200, 300 or more lines executed in sequence)
        - Uses a lot of if statements with a lot of paths
        - Has variables that make no sense (e.g. cf_cfop, CF_Natop, lnom, r_procod)
        - Uses an old language (Visual FoxPro 8 from 2002), even though there are newer releases from 2007

    I feel like I have gone back to 1970. Is it normal for a programmer familiar with OOP, clean code, design patterns, etc. to have trouble coding in this old-fashioned way?

    EDIT: All the answers are very good. To my (lack of) hope, it appears that there are a lot of code bases like this around the world. A point mentioned in all the answers is to refactor the code. Yeah, I'd really like to do that. In my personal projects I always do, but here I can't refactor the code: programmers are only allowed to change the files belonging to the task they are assigned to. Every change to old code must be kept commented out in the code (even though we use Subversion for version control), plus meta information (date, programmer, task) related to that change (this becomes a mess; there is code with 3 live lines and 50 old lines commented out). I'm thinking this is not only a code problem, but a software development management problem.

    Read the article

  • Is there a checklist of things that a small website should have?

    - by Mecon
    I am not a web developer - this would be my first foray. I can do HTML/CSS/JavaScript, but I have never created a website for a company. If anybody is creating a site for a small company (expecting some 10-15 static pages), what kind of things would it need to have? I am thinking of the following:

        - Make the eventual owner buy, in his/her name: domain name, web hosting package and email package. Q - DO web developers generally ask their clients to buy this stuff and then ask them to share their passwords? OR - Do web developers ship the source files to clients so that they can upload them?
        - Create cross-browser compatible HTML+CSS+JavaScript pages
        - Add SEO stuff like meta tags and an XML file for the crawler
        - Buy professional images from a stock website. Q - IS there a best practice for this step?
        - Add copyright stuff. Q - ANY idea about how to do this?
        - Add Facebook widgets, so people can 'like' my website.
        - Register the website somewhere so that it's searchable from multiple search engines/yellow pages. Q - DOES such a thing exist?

    Please check my checklist :) and let me know what you think could be missing? Thanks!

    Read the article

  • Need Help Hiring a Perfectionist Programmer [closed]

    - by Bryan Hadaway
    I understand my question may be in a gray area, but I'm not able to use the Meta to ask whether this question is appropriate or not, so I'll simply have to risk it. My project is complete in the sense that it's a fully functional, ready-to-go 1.0 version. However, that's not good enough for my standards. My expertise is in HTML/CSS, not jQuery and PHP. I'm looking for someone to refine every character of my code for quality, speed, security and compatibility. I want everything to be as bug-free as possible for launch. So I need an expert programmer who's a perfectionist in their coding, who cares about the quality of their work (not just making it work), to review and refine my code. I'm sure I can't outright post the project's details and hope for interested parties to contact me, as that wouldn't be beneficial to the community, so instead I'm looking for advice from programmers about where some of the best places to hire quality programmers are, and the best strategies to hire the right programmer. In other words, screening applicants off of craigslist isn't going to cut it for this project. Thanks

    Read the article

  • How do I improve terrain rendering batch counts using DirectX?

    - by gamer747
    We have determined that our terrain rendering system needs some work to minimize the number of batches being transferred to the GPU in order to improve performance. I'm looking for suggestions on how best to improve what we're trying to accomplish. We logically split our terrain mesh into smaller grid cells which are 32x32 world units. Each cell has meta data that dictates the four 256x256 textures that are used for splatting, along with the alpha blend data, shadow, and light mappings. Each cell contains 81 vertices in a 9x9 grid. Presently, we examine each cell and determine the four textures that are being used to splat the cell. We combine that geometry with any other cell that happens to use the same four textures, regardless of splat order. If the splat order for a cell differs, the blend map is adjusted so that the splat order is kept the same as in other like cells and blending happens in the right order too. But even with this batching approach, it isn't uncommon when looking out across an area of open terrain to see a batch count between 1200 and 1700, depending upon how frequently textures or texture blends differ between cells. We are only doing frustum culling presently. So using texture splatting, are there other alternatives that can reduce the batch count and allow rendering to be extremely performance-friendly even under DirectX 9.0c? We considered using texture atlases since we're targeting DirectX 9.0c and older OpenGL platforms, but trying to repeat textures using atlases and shaders results in seam artifacts which we haven't been able to eliminate except by disabling mipmapping, and disabling mipmapping results in poor quality textures from a distance. How have others batched together terrain geometry such that one could splat terrain using various textures, minimizing batch count and texture state switches so that rendering performance isn't negatively impacted?

    Read the article

  • Google is displaying "Translate this page" based on a previously registered domain's inbound links

    - by crnm
    I recently started a new project on a newly registered generic-TLD domain. As soon as Google started indexing the page, it displayed a "Translate this page" link in SERPs, which tries to translate the page into the language of a small Eastern European country from the language that the site actually uses. I tried everything to prevent this: language meta headers and attributes, localisation through Google Webmaster Tools... all to no avail - nothing helped. After a couple of weeks I spotted dozens of inbound links popping up in Google Webmaster Tools, all coming from that small Eastern European country, from sub-pages that are not active anymore (either sending out 404's or 301's to the main page) and that had been written in that other language. So the domain had been registered before, and by the looks of it, it got a lot of possibly spammy links in that language. I can't even ask the linking sites to remove those links, as the pages they pointed to are no longer physically active; they only survive in Google Webmaster Tools and/or Google's internal data. Now I'm at a loss about what to do. As my site is pretty new, it does not have many links pointing towards it in my targeted language, so those are probably not enough to convince Google to attach the right language to it, and Google ignores all other signals about the page language. I'm also unsure if I should use the "disavow" tool, or a reconsideration request... or what else to do about this miserable state. I have never used these tools before, so I don't have any experience with them. Somehow I have to convince Google about the right language of the page, and also not to count all those historical links from the previous owner. (The domain had been deleted without any traces in Google before I registered it.) Has anyone here ever dealt with a similar "Translate this page" problem? (I've also looked at this thread: How can I prevent Google mistakenly offering to translate a page? but didn't find a solution there.)
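
    For reference, the explicit language signals mentioned above look roughly like this (using "de" purely as a stand-in for the site's actual language):

        <!DOCTYPE html>
        <html lang="de">
        <head>
          <!-- hint for crawlers that still honour the meta header -->
          <meta http-equiv="content-language" content="de">
          ...
        </head>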

    Read the article

  • Authenticate with Django 1.5

    - by gorjuce
    I'm currently testing Django 1.5 and a custom User model, but I have some problems. I've created a User class in my account app, which looks like:

        class User(AbstractBaseUser):
            email = models.EmailField()
            activation_key = models.CharField(max_length=255)
            is_active = models.BooleanField(default=False)
            is_admin = models.BooleanField(default=False)

            USERNAME_FIELD = 'email'

    I can correctly register a user, who is stored in my account_user table. Now, how can I log in? I've tried with:

        def login(request):
            form = AuthenticationForm()
            if request.method == 'POST':
                form = AuthenticationForm(request.POST)
                email = request.POST['username']
                password = request.POST['password']
                user = authenticate(username=email, password=password)
                if user is not None:
                    if user.is_active:
                        login(user)
                    else:
                        message = 'disabled account, check validation email'
                        return render(
                            request,
                            'account-login-failed.html',
                            {'message': message}
                        )
            return render(request, 'account-login.html', {'form': form})

    I can correctly register a new User. My forms.py, which contains my register form:

        class RegisterForm(forms.ModelForm):
            """ a form to create user"""
            password = forms.CharField(
                label="Password",
                widget=forms.PasswordInput()
            )
            password_confirm = forms.CharField(
                label="Password Repeat",
                widget=forms.PasswordInput()
            )

            class Meta:
                model = User
                exclude = ('last_login', 'activation_key')

            def clean_password_confirm(self):
                password = self.cleaned_data.get("password")
                password_confirm = self.cleaned_data.get("password_confirm")
                if password and password_confirm and password != password_confirm:
                    raise forms.ValidationError("Passwords don't match")
                return password_confirm

            def clean_email(self):
                if User.objects.filter(email__iexact=self.cleaned_data.get("email")):
                    raise forms.ValidationError("email already exists")
                return self.cleaned_data['email']

            def save(self):
                user = super(RegisterForm, self).save(commit=False)
                user.password = self.cleaned_data['password']
                user.activation_key = generate_sha1(user.email)
                user.save()
                return user

    My question is: why does authenticate() give me None? I know I'm trying to authenticate() with an email as the username, but isn't that one of the reasons to use a custom User model?

    Read the article

  • 301 redirect from HTTP to HTTPS - how to be sure Google is fetching the correct information?

    - by user33692
    I'm hoping somebody might be able to provide a bit of advice on an issue I am having. I have one site where we implemented a 301 redirect on the homepage from HTTP to HTTPS. We have links on the homepage to other parts of the site that are not under SSL (in fact there is only one other page under SSL). When I go to our Webmaster Tools account I notice that we are not being provided with any webmaster information (e.g., search queries, backlinks, etc.) related to our homepage under SSL. I performed a Fetch as Google on the homepage and the information it returned is:

        HTTP/1.1 301 Moved Permanently
        Date: Fri, 08 Nov 2013 17:26:24 GMT
        Server: Apache/2.2.16 (Debian)
        Location: https://mysite.com/
        Vary: Accept-Encoding
        Content-Encoding: gzip
        Content-Length: 242
        Keep-Alive: timeout=15, max=100
        Connection: Keep-Alive
        Content-Type: text/html; charset=iso-8859-1

        <!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
        <html><head>
        <title>301 Moved Permanently</title>
        </head><body>
        <h1>Moved Permanently</h1>
        <p>The document has moved <a href="https://mysite.com/">here</a>.</p>
        <hr>
        <address>Apache/2.2.16 (Debian) Server at mysite.com</address>
        </body></html>

    I am worried by the fact that Google fetch is not getting the correct title tags and meta information from our homepage, and that this is hurting our search results. Additionally, I am worried that we need to do something specific with the sitemap to ensure that Google is correctly indexing all our pages and is able to flow from the HTTPS to the HTTP pages without issues. Does anybody have any advice on how we can correctly set this up, or be sure that Google is fetching the correct information?
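
    Not from the post, but one small, commonly used addition while both protocols are live: a canonical link element on the homepage makes the preferred HTTPS version explicit to Google.

        <link rel="canonical" href="https://mysite.com/">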

    Read the article

  • Choosing the Database Solution for Large Data Application

    - by GµårÐïåñ
    I have been tasked to write an application that will be a combination of document and inventory management in VB.NET, which will be used to store document images in TIFF, PDF, XPS, TXT, DOC, PPT and so on as binary data that can be retrieved for viewing, printing, and possibly OCR to be searchable, along with meta data such as sender, recipient, type of document, date, source, etc. So the table would probably be something like:

        DOC_NAME, DOC_DATE, NOTES, ... DOC_BINARY (where the actual document will be put inside)

    My concern is finding a database solution that will not become unstable due to size restrictions, record limitations and performance. Some of the options are MS SQL, SQL Express, SQLite, MySQL, and Access. Now I can pretty much eliminate Access right off the bat, as it is just too limiting and not scalable. I can further eliminate SQL Express because of the 2 GB limit, and again scalability. So that leaves me with MS SQL, SQLite and MySQL (although if anyone has other options they think would be good as well, please feel free to share them; by no means am I set on these only). So this brings me to what you guys think is the best option for what I have described. The goal is that the data is all in one place (a single file), which will make backup and portability easier. For small-volume usage pretty much any solution will hold for a while, but my goal is to think ahead and make sure it's able to withstand heavy large-volume usage as well. Another consideration is interoperability with .NET and the stability of such code, to avoid errors and memory leaks. Your feedback would be greatly appreciated.
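
    A minimal sketch of the table described above in roughly portable SQL (the column sizes and the extra meta data columns are assumptions from the post's description):

        CREATE TABLE DOCUMENTS (
            DOC_ID      INTEGER PRIMARY KEY,
            DOC_NAME    VARCHAR(255) NOT NULL,
            DOC_DATE    DATETIME     NOT NULL,
            NOTES       VARCHAR(4000),
            SENDER      VARCHAR(255),
            RECIPIENT   VARCHAR(255),
            DOC_TYPE    VARCHAR(64),
            SOURCE      VARCHAR(255),
            DOC_BINARY  BLOB        -- VARBINARY(MAX) on MS SQL; the document itself
        );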

    Read the article

  • Site experiencing low traffic volume between 8AM and 4PM BST

    - by BizNuge
    There may be no definitive answer to this question, but I thought peer review of the problem might stimulate some ideas on the topic. We have a boutique sales site that is experiencing low volumes of traffic (both UK and international) between 8AM and 4PM BST. This seems sort of strange, since our target audience for the site is UK based and this would seem to be when people are awake and online. We are in contact with another boutique site in the same sector who don't experience this issue, which makes it stranger still. Later on in the day we get traffic from the UK, as well as a fair amount of international traffic, so I'm at a loss to figure this one out. The site is fairly well optimised, including:

        - sitemap.xml
        - proper caching policies across the board
        - Google Merchant
        - Dublin Core microdata
        - HTML5
        - pretty URLs
        - meta and content are reviewed as an ongoing concern
        - decent sitelinks for direct queries through Google on the site name
        - a decent amount of inbound links
        - FB, Twitter, Google +1
        - Google Maps listing [verified]

    The site has been selling for ~4 months and is getting ~250 users per day. So I'm not entirely sure how to explain the mid-day dip in our figures... Any ideas at all would be useful. Cheers all!

    Read the article

  • Enable [command] key to register as something other than just [ctrl]?

    - by gojomo
    I'm running 10.04 LTS inside VMware Fusion on a Mac. The [command] key (aka [windows] on many keyboards) is almost always behaving as if it were [ctrl], even though I haven't done anything explicit to request that behavior. In fact, in System -> Preferences -> Keyboard -> Layouts -> Options -> Alt/Win key behavior, 'default' is chosen (rather than the 'Control is mapped to Win keys' option). However, choosing other options there does not seem to change the handling of [command], at least not as tested in the System -> Preferences -> Keyboard Shortcuts app. (No matter what I've tried, [command]-x is always detected as [ctrl]-x in that app.) I've tried:

        - various options under System -> Preferences -> Keyboard -> Layouts -> Options -> Alt/Win key behavior
        - toggling the VMware Fusion -> Preferences -> Keyboard & Mouse -> Key Mappings setup, which claims to map '[command]' to '[windows]', and restarting the VM in each position
        - the xmodmap lines suggested at https://help.ubuntu.com/community/MappingWindowsKey

    And yet, it's clear that not all Ubuntu apps merge [ctrl] and [command], because in Terminal, [shift]-[ctrl]-c will Copy, but [shift]-[command]-c will not. If the [command]/[windows] key were recognized as anything else ('Super', 'Meta', 'Hyper'? I don't care, as long as it's not 'Control'), then I could achieve my real goal (which happens to be enabling CMD-based cut/copy/paste in PyCharm, while leaving CTRL-X/etc. available for emacs-like bindings). I think any solution which manages to make [command]-x appear as something other than [ctrl]-x in Preferences -> Keyboard Shortcuts will probably do the trick.
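
    For what it's worth, a sketch of the xmodmap idea (the keysym and modifier names are assumptions - check the current mapping with `xmodmap -pm` first):

        ! ~/.Xmodmap: detach the Command/Win key from Control and make it Super
        remove control = Super_L
        add mod4 = Super_L

    Loaded with `xmodmap ~/.Xmodmap`, this only helps if the VM is actually delivering the key as Super_L rather than translating it to Control_L before X ever sees it.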

    Read the article

  • The Freemium-Premium Puzzle

    The more time I spend thinking about the value of information, the more I find that digitalizing information has significantly changed the 'information markets', potentially in an irreversible manner. The graph at the bottom outlines my current view. The existing business models tend to be the same in the digital and analogue information worlds, i.e. revenue is derived from a combination of consumers' payments and advertisement. Even monetizing 'meta-information' such as search engines isn't new. Just think of the once popular 'Who's Who'. What really changed is the price-value ratio. The curve is pushed down, closer to the axis: you pay less for the same, or often even get more for less. If you recall the capabilities I described in 'relevance of information', you will see that there are many additional features available for digital content compared to analogue content. I think this is a good 'blue ocean strategy', combining existing capabilities in a new way (Kim, W.C. & Mauborgne, R. (2005) Blue Ocean Strategies. Boston: Harvard Business School Publishing.). In addition, the different channels of digital information distribution significantly change the value of information. I will touch on this in one of my next blogs. Right now, many information providers have started to offer 'freemium' content through digital channels, hoping to earn a premium for the 'full' content. Offering no freemium at all seems to take them out of business, because without it they are no longer visible in today's most relevant channels of information consumption. But the more freemium is provided, the lower the premium gets; a truly puzzling situation. To make it worse, channel providers increasingly regard information as a value-adding and differentiating activity. Maybe new types of exclusive, strategic alliances will solve the puzzle, introducing new types of 'gate-keepers', which - to me - somehow does not match the spirit of the WWW and generation Y's perception of information consumption and exchange.

    Read the article

  • Why are Win7 IE browsers running in IE7 mode?

    - by Skurpi
    On our site we currently have approx. 50% of our IE7 users running Windows 7. However, Win7 was released with IE8, meaning that something is not right. Based on this, we made the assumption that the modern IE browsers enter IE7 compatibility mode on our site. So we placed the "X-UA-Compatible=edge" meta tag on our site in the hope of killing off IE7 support. However, after a week, the percentage of IE7 (and IE8) users has gone up for the first time since the start of this year. Looking at the X-UA-Compatible documentation, the tag is meant to be used for putting your site into an older mode. Is there any other way to make sure that IE doesn't enter compatibility mode? Is there something else we need to look for? And yes, we have so many users on our site that we can't kill off IE7 until it drops below 2% of unique visitors. EDIT: We use <!doctype html> EDIT2: We haven't been able to reproduce the compatibility mode in our environments either.
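
    For reference, the full form of that tag is below; IE only honours it if it appears in the <head> before any elements other than <title> and other <meta> tags, so its position is worth double-checking:

        <meta http-equiv="X-UA-Compatible" content="IE=edge">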

    Read the article

  • Domain model integration using JSON-capable DTOs

    - by g-makulik
    I'm a bit confused about architectural choices in the Java/web-applications world. The background: I have a system with certain hardware components (which introduce system-immanent active behavior) and a configuration database for system meta data and HW-component configuration data (these are usually even self-contained, since the HW components persist configuration data anyway). To realize the configuration/status data exchange protocol with the HW components we have chosen the Google Protobuf format, which works well for the directly wired communication with these components. Now we want to develop an abstract model (domain model) for those HW components, and I have the feeling that a plain Java class model would fit best for this (a C++ implementation seems to have too much implementation/integration overhead with viable language-bridge interfaces). Google Protobuf message definitions could still serve well to describe the DTO objects used to interact with a domain model API. But integrating Google Protobuf messages client-side, e.g. for data binding in the current view, doesn't seem to be a good choice. I'm thinking about some extra serialization features, e.g. for JSON-based data exchange with the views/controllers. Most lightweight solutions seem to involve a Python-based presentation layer using JSON-based data transfer (though I'm not sure I'm fully informed about this). Is there some lightweight framework available (applicable to a limited ARM Linux platform) that supports such an architecture for realizing a web application?
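
    A minimal sketch of what such a Protobuf-described DTO might look like (the message and field names are purely hypothetical, written in proto2 syntax):

        // component_config.proto
        message ComponentConfig {
            required string component_id     = 1;
            optional uint32 poll_interval_ms = 2 [default = 1000];
            repeated string enabled_channels = 3;
        }

    The same message definition can then be rendered as JSON for the view/controller layer instead of the binary wire format, which is the kind of extra serialization feature mentioned above.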

    Read the article

  • Contexts and Dependency Injection (CDI)

    - by Masa Sasaki
    A report from the WebLogic Server study group: the 37th session, held on June 20, covered Contexts and Dependency Injection (CDI), presented by a member of Oracle's Fusion Middleware consulting team. CDI is one of the new features of Java EE 6; DI (Dependency Injection) already existed in Java EE 5, but CDI goes considerably further, and this session covered what CDI adds and how to use it.

    What is CDI? CDI is specified by JSR 299: Contexts and Dependency Injection, and brings together Dependency Injection (DI), Aspect-Oriented Programming (AOP) and Interception in a single programming model.

    There are two prerequisites for using CDI. The first is a Java EE 6 application server, such as Oracle WebLogic Server 12c. The second is a beans.xml file - WEB-INF/beans.xml in a web application, META-INF/beans.xml in an EJB archive; its presence is what enables CDI for that archive.

    DI in Java EE 5: injection was limited to container-managed resources, through the @EJB, @Resource and @WebServiceRef annotations.

    CDI in Java EE 6: beans are injected with the @Inject annotation. Where several beans of the same type could satisfy an injection point, a @Qualifier annotation (for example a custom @JPN qualifier) selects the intended bean. @Produces methods supply beans that the container cannot instantiate directly, and CDI beans can be referenced through EL (Expression Language), for example from JSF managed beans.

    In short, Java EE 6's CDI combines in one standard mechanism the DI and AOP capabilities that Java EE 5 offered only partially.
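
    A minimal sketch of the @Inject/@Qualifier usage described above (the Greeting classes and the body of the @JPN qualifier are assumptions based on the annotation names in the talk):

        import java.lang.annotation.ElementType;
        import java.lang.annotation.Retention;
        import java.lang.annotation.RetentionPolicy;
        import java.lang.annotation.Target;
        import javax.inject.Inject;
        import javax.inject.Qualifier;

        @Qualifier
        @Retention(RetentionPolicy.RUNTIME)
        @Target({ElementType.TYPE, ElementType.FIELD, ElementType.METHOD, ElementType.PARAMETER})
        @interface JPN {}

        interface Greeting {
            String greet();
        }

        @JPN
        class JapaneseGreeting implements Greeting {
            public String greet() { return "Konnichiwa"; }
        }

        class EnglishGreeting implements Greeting {
            public String greet() { return "Hello"; }
        }

        class Greeter {
            // without the qualifier the two Greeting beans would be ambiguous;
            // @JPN selects JapaneseGreeting. An empty WEB-INF/beans.xml (or
            // META-INF/beans.xml) must be present for CDI to manage these beans.
            @Inject @JPN
            private Greeting greeting;

            public String sayHello() { return greeting.greet(); }
        }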

    Read the article

  • SEO Blog Indexing: Dot WordPress Versus a Registered Domain?

    - by rumspringa00
    I've used WordPress for a few of my clients' sites, mostly small businesses and e-commerce sites. I have found through Google Analytics, as well as the All in One Webmaster plugin, that when it comes to social media, using WordPress is a surefire way of getting your site indexed by Google and occasionally Bing and Yahoo. Since I am a heavy WP user, I'd like to contribute by registering a dot-WordPress domain for my portfolio. When using a WP installation concurrently with a WP domain, e.g. myportfolio.wordpress.com, will the site be more or less likely to be indexed than a generic myportfolio.com domain? I've seen mixed opinions: some people seem to favor a WP domain for URL output, while others say it's a moot point and that Google will not favor a WP domain over a dot-com domain as long as your meta tags are updated and your content is keyword optimized. I tend to disagree, and believe a WP domain would be more likely to be indexed and would output more URLs than an individual, laconic domain like myportfolio.com. Am I wrong? Thanks in advance!

    Read the article

  • Requiring a specific order of compilation

    - by Aber Kled
    When designing a compiled programming language, is it a bad idea to require a specific order of compilation of separate units, according to their dependencies? To illustrate what I mean, consider C. C is the opposite of what I'm suggesting. There are multiple .c files that can all depend on each other, but all of these separate units can be compiled on their own, in no particular order - only to be linked together into a final executable later. This is mostly due to header files: they enable separate units to share information with each other, and thus the units are able to be compiled independently. If a language were to dispose of header files and keep only source and object files, then the only option would be to include the unit's meta-information in the unit's object file. However, this would mean that if unit A depends on unit B, then unit B would need to be compiled before unit A, so that unit A could "import" unit B's object file and obtain the information required for its own compilation. Am I missing something here? Is this really the only way to go about removing header files in compiled languages?

    Read the article
