Search Results

Search found 397 results on 16 pages for 'obsolete'.


  • Should I quit using Ifconfig?

    - by Zhen
    On servers fitted with InfiniBand cards, when I use the ifconfig command I get the warning: "Ifconfig uses the ioctl access method to get the full address information, which limits hardware addresses to 8 bytes. Because Infiniband address has 20 bytes, only the first 8 bytes are displayed correctly. Ifconfig is obsolete! For replacement check ip." Should I quit using ifconfig? Is it deprecated in favor of the ip command, or will it be updated in the near future?
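
    For reference, a minimal sketch of the usual ip equivalents for common ifconfig tasks (interface names are placeholders):

        # show link-layer info, including full 20-byte InfiniBand hardware addresses
        ip link show

        # show IPv4/IPv6 addresses (roughly replaces `ifconfig -a`)
        ip addr show

        # bring an interface up or down (replaces `ifconfig ib0 up`)
        ip link set ib0 up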

    Read the article

  • Is there a specific name for the ".\" (dot-slash) shorthand used to log onto a Windows machine?

    - by HopelessN00b
    I guess the title just about says it all. And yes, .\, not that obsolete \. thing. :) For those who don't know, .\ is a shorthand way of saying "this computer" in Windows at a logon screen, which comes in very handy when you don't know or care about the local computer name but need to authenticate against it anyway, such as through RDP or scripting against a set of shared local users and passwords or even locally, if you're unlucky enough to have to physically go to a machine. Anyway, does this have an actual name, and if so, what is it? I feel kind of stupid saying the dot-slash thing, which is how I've been referring to this.

    Read the article

  • Will I have internet connection issues next day, if I unplug router at night?

    - by headskracher77
    I did this regularly a few years ago and used to feel like I was 'being punished' by the Internet service provider for disabling access to my computer, because trying to re-connect the next day became a constant pain. I have the same Linksys router, a Comcast modem, and hi-speed broadband through their LAN. Question: who or what is at fault for lousy internet connections, slow connections, or no connections? (Everybody's tech dept. blames everybody else.) The router? It's 10 years old, maybe obsolete? The modem? It came with the service plan and can connect three devices on a shared connection. The ISP? I read they not only control and completely regulate bandwidth usage, but they also ration it!! (True?) So can I safely 'pull the plug' each night for security or not? thnx

    Read the article

  • How to install an LDAP proxy

    - by Jean-Claude
    I have to install an LDAP proxy on a compute cluster frontend. The idea is to keep the compute nodes from making too many requests against the campus LDAP server. How can I install this so that it works with the school's LDAP? The frontend OS is RHEL 6.2. I found that I have to install the LDAP server and configure it as a proxy, but all I can find are examples of /etc/openldap/slapd.conf configuration, and after testing different configurations I got no results. Furthermore, according to the RHEL 6 Deployment Guide, this config file is obsolete: "OpenLDAP no longer reads its configuration from the /etc/openldap/slapd.conf file. Instead, it uses a configuration database located in the /etc/openldap/slapd.d/ directory." Any help is welcome. Thank you
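
    For what it's worth, a minimal sketch of the classic ldap-backend proxy configuration, written as a flat file and then converted into the slapd.d configuration database that RHEL 6 expects (suffix and host names are placeholders; depending on how the packages were built, a moduleload back_ldap.la line may also be needed):

        # hypothetical proxy config: forward requests to the campus LDAP server
        cat > /tmp/slapd-proxy.conf <<'EOF'
        database    ldap
        suffix      "dc=campus,dc=example"
        uri         "ldap://ldap.campus.example"
        EOF

        # convert the flat file into the cn=config database under slapd.d/
        slaptest -f /tmp/slapd-proxy.conf -F /etc/openldap/slapd.d/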

    Read the article

  • Domain controller in cloud, how do we set up local BDC

    - by brian b
    We have a domain controller (Exchange box) hosted at our hosting provider. We need to set up a local domain controller so we can handle VPN and local authentication tasks. I can make the PDC accept all connections from our office IP. How do I get the office router to correctly allow two-way communication between the PDC (cloud) and the local DC? Is there a list of ports I need to pass through to the local DC? Thanks! ("PDC" and "BDC" used for clarity -- I know that the concept is obsolete.)
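
    There is no single canonical list, but a sketch of the ports conventionally opened between domain controllers follows, written as iptables rules for concreteness (a typical office router UI differs; the PDC address is a placeholder, and the dynamic RPC range shown is the Server 2008+ default):

        # core AD traffic from the cloud PDC to the local DC
        iptables -A FORWARD -s 203.0.113.10 -p tcp -m multiport \
            --dports 53,88,135,389,445,464,636,3268,3269 -j ACCEPT
        iptables -A FORWARD -s 203.0.113.10 -p udp -m multiport \
            --dports 53,88,123,389,464 -j ACCEPT

        # dynamic RPC ports used by directory replication
        iptables -A FORWARD -s 203.0.113.10 -p tcp --dport 49152:65535 -j ACCEPT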

    Read the article

  • How to make Googlebot crawl a page? [closed]

    - by mamadum
    What is the best way of forcing Googlebot to crawl a page? I've set up Google Analytics and registered the site with Google Webmaster Tools. I've done a great deal of SEO work on the website with keywords and titles, and I took care of microdata. I submitted the site anonymously, and I successfully fetched the site and submitted it for indexing a couple of days ago, and still nothing. The last time Googlebot visited the site was almost a month ago, and the indexed content is now obsolete. Am I missing something, or is it just a slow process?
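
    One low-cost thing worth double-checking, sketched below: declare a sitemap and ping Google whenever it changes (URLs are placeholders; this invites a recrawl but cannot force one):

        # declare the sitemap in robots.txt at the site root
        echo "Sitemap: http://www.example.com/sitemap.xml" >> robots.txt

        # notify Google that the sitemap has been updated
        curl "http://www.google.com/ping?sitemap=http://www.example.com/sitemap.xml"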

    Read the article

  • Creating HTML5 Offline Web Applications with ASP.NET

    - by Stephen Walther
    The goal of this blog entry is to describe how you can create HTML5 Offline Web Applications when building ASP.NET web applications. I describe the method that I used to create an offline Web application when building the JavaScript Reference application. You can read about the HTML5 Offline Web Application standard by visiting the following links:

      • Offline Web Applications
      • Firefox Offline Web Applications
      • Safari Offline Web Applications

    Currently, the HTML5 Offline Web Applications feature works with all modern browsers with one important exception. You can use Offline Web Applications with Firefox, Chrome, and Safari (including iPhone Safari). Unfortunately, however, Internet Explorer does not support Offline Web Applications (not even IE 9).

    Why Build an HTML5 Offline Web Application?

    The official reason to build an Offline Web Application is so that you do not need to be connected to the Internet to use it. For example, you can use the JavaScript Reference Application when flying in an airplane, riding a subway, or hiding in a cave in Borneo. The JavaScript Reference Application works great on my iPhone even when I am completely disconnected from any network. The following screenshot shows the JavaScript Reference Application running on my iPhone when airplane mode is enabled (notice the little orange airplane).

    Admittedly, it is becoming increasingly difficult to find locations where you can't get Internet access. A second, and possibly better, reason to create Offline Web Applications is speed. An Offline Web Application must be downloaded only once. After it gets downloaded, all of the files required by your Web application (HTML, CSS, JavaScript, images) are stored persistently on your computer.

    Think of Offline Web Applications as providing you with a super browser cache. Normally, when you cache files in a browser, the files are cached on a file-by-file basis. For each HTML, CSS, image, or JavaScript file, you specify how long the file should remain in the cache by setting cache headers. Unlike the normal browser caching mechanism, the HTML5 Offline Web Application cache is used to specify a caching policy for an entire set of files. You use a manifest file to list the files that you want to cache, and these files are cached until the manifest is changed.

    Another advantage of using the HTML5 offline cache is that the HTML5 standard supports several JavaScript events and methods related to the offline cache. For example, you can be notified in your JavaScript code whenever the offline application has been updated. You can use JavaScript methods, such as the ApplicationCache.update() method, to update the cache programmatically.

    Creating the Manifest File

    The HTML5 Offline Cache uses a manifest file to determine the files that get cached.
    Here's what the manifest file looks like for the JavaScript Reference application:

        CACHE MANIFEST
        # v30
        Default.aspx

        # Standard Script Libraries
        Scripts/jquery-1.4.4.min.js
        Scripts/jquery-ui-1.8.7.custom.min.js
        Scripts/jquery.tmpl.min.js
        Scripts/json2.js

        # App Scripts
        App_Scripts/combine.js
        App_Scripts/combine.debug.js

        # Content (CSS & images)
        Content/default.css
        Content/logo.png
        Content/ui-lightness/jquery-ui-1.8.7.custom.css
        Content/ui-lightness/images/ui-bg_glass_65_ffffff_1x400.png
        Content/ui-lightness/images/ui-bg_glass_100_f6f6f6_1x400.png
        Content/ui-lightness/images/ui-bg_highlight-soft_100_eeeeee_1x100.png
        Content/ui-lightness/images/ui-icons_222222_256x240.png
        Content/ui-lightness/images/ui-bg_glass_100_fdf5ce_1x400.png
        Content/ui-lightness/images/ui-bg_diagonals-thick_20_666666_40x40.png
        Content/ui-lightness/images/ui-bg_gloss-wave_35_f6a828_500x100.png
        Content/ui-lightness/images/ui-icons_ffffff_256x240.png
        Content/ui-lightness/images/ui-icons_ef8c08_256x240.png
        Content/browsers/c8.png
        Content/browsers/es3.png
        Content/browsers/es5.png
        Content/browsers/ff3_6.png
        Content/browsers/ie8.png
        Content/browsers/ie9.png
        Content/browsers/sf5.png

        NETWORK:
        Services/EntryService.svc
        http://superexpert.com/resources/JavaScriptReference/

    A cache manifest file always starts with the line of text CACHE MANIFEST. In the manifest above, all of the CSS, image, and JavaScript files required by the JavaScript Reference application are listed. For example, the Default.aspx ASP.NET page, the jQuery library, the jQuery UI library, and several images are listed.

    Notice that you can add comments to a manifest by starting a line with the hash character (#). I use comments in the manifest above to group JavaScript and image files. Finally, notice that there is a NETWORK: section of the manifest. You list any file that you do not want to cache (any file that requires network access) in this section. In the manifest above, the NETWORK: section includes the URL for a WCF Service named EntryService.svc. This service is called to get the JavaScript entries displayed by the JavaScript Reference.

    There are two important things that you need to be aware of when using a manifest file. First, all relative URLs listed in a manifest are resolved relative to the manifest file. The URLs listed in the manifest above are all resolved relative to the root of the application because the manifest file is located in the application root.

    Second, whenever you make a change to the manifest file, browsers will download all of the files contained in the manifest (all of them). For example, if you add a new file to the manifest, then any browser that supports the Offline Cache standard will detect the change in the manifest and download all of the files listed in the manifest automatically. If you make changes to files in the manifest (for example, modify a JavaScript file), then you need to make a change in the manifest file in order for the new version of the file to be downloaded. The standard way of updating a manifest file is to include a comment with a version number. The manifest above includes a # v30 comment. If you make a change to a file, then you need to modify the comment to be # v31 in order for the new file to be downloaded.

    When Are Updated Files Downloaded?

    When you make changes to a manifest, the changes are not reflected the very next time you open the offline application in your web browser. Your web browser will download the updated files in the background.
    This can be very confusing when you are working with JavaScript files. If you make a change to a JavaScript file, and you have cached the application offline, then the changes to the JavaScript file won't appear when you reload the application.

    The HTML5 standard includes new JavaScript events and methods that you can use to track changes and make changes to the Application Cache. You can use the ApplicationCache.update() method to initiate an update to the application cache, and you can use the ApplicationCache.swapCache() method to switch to the latest version of a cached application. My heartfelt recommendation is that you do not enable your application for offline storage until after you finish writing your application code. Otherwise, debugging the application can become a very confusing experience.

    Offline Web Applications versus Local Storage

    Be careful not to confuse the HTML5 Offline Web Application feature and the HTML5 Local Storage (aka DOM storage) feature. The JavaScript Reference Application uses both features. HTML5 Local Storage enables you to store key/value pairs persistently. Think of Local Storage as a super cookie. I describe how the JavaScript Reference Application uses Local Storage to store the database of JavaScript entries in a separate blog entry. Offline Web Applications enable you to store static files persistently. Think of Offline Web Applications as a super cache.

    Creating a Manifest File in an ASP.NET Application

    A manifest file must be served with the MIME type text/cache-manifest. In order to serve the JavaScript Reference manifest with the proper MIME type, I added two files to the JavaScript Reference Application project:

      • Manifest.txt – This text file contains the actual manifest file.
      • Manifest.ashx – This generic handler sends the Manifest.txt file with the MIME type text/cache-manifest.

    Here's the code for the generic handler:

        using System.Web;

        namespace JavaScriptReference {

            public class Manifest : IHttpHandler {

                public void ProcessRequest(HttpContext context) {
                    context.Response.ContentType = "text/cache-manifest";
                    context.Response.WriteFile(context.Server.MapPath("Manifest.txt"));
                }

                public bool IsReusable {
                    get { return false; }
                }
            }
        }

    The Default.aspx file contains a reference to the manifest. The opening HTML tag in the Default.aspx file looks like this:

        <html manifest="Manifest.ashx">

    Notice that the HTML tag contains a manifest attribute that points to the Manifest.ashx generic handler. Internet Explorer simply ignores this attribute. Every other modern browser will download the manifest when the Default.aspx page is requested.

    Seeing the Offline Web Application in Action

    The experience of using an HTML5 Offline Web Application is different with different browsers. When you first open the JavaScript Reference application with Firefox, you get a warning that provides you with the choice of whether you want to use the application offline or not. Browsers other than Firefox, such as Chrome and Safari, do not provide you with this choice: Chrome and Safari will create an offline cache automatically. If you click the Allow button, then Firefox will download all of the files listed in the manifest. You can view the files contained in the Firefox offline application cache by typing about:cache in the Firefox address bar and clicking the List Cache Entries link.

    The Offline Web Application experience is different in the case of Google Chrome.
    You can view the entries in the offline cache by opening the Developer Tools (hit Shift+CTRL+I), selecting the Storage tab, and selecting Application Cache. This also shows the status of the Application Cache; a status of UNCACHED means that the files listed in the manifest have not been downloaded and cached yet. The different possible values for the status are included in the HTML5 Offline Web Application standard:

      • UNCACHED – The Application Cache has not been initialized.
      • IDLE – The Application Cache is not currently being updated.
      • CHECKING – The Application Cache is being fetched and checked for updates.
      • DOWNLOADING – The files in the Application Cache are being updated.
      • UPDATEREADY – There is a new version of the Application Cache.
      • OBSOLETE – The contents of the Application Cache are obsolete.

    Summary

    In this blog entry, I provided a description of how you can use the HTML5 Offline Web Application feature in the context of an ASP.NET application. I described how this feature is used with the JavaScript Reference Application to store the entire application on a user's computer. By taking advantage of this new feature of the HTML5 standard, you can improve the performance of your ASP.NET web applications by requiring users of your web application to download your application once and only once. Furthermore, you can enable users to take advantage of your applications anywhere -- regardless of whether or not they are connected to the Internet.
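
    As a small illustration of the cache events and methods discussed above, here's a sketch of one way to wire up the update cycle in page JavaScript (the reload-on-update policy is just one reasonable choice):

        var appCache = window.applicationCache;

        // fires when a new version of the cached files has finished downloading
        appCache.addEventListener('updateready', function () {
            appCache.swapCache();       // switch to the freshly downloaded cache
            window.location.reload();   // reload so the user sees the new version
        }, false);

        // re-check the manifest programmatically instead of waiting for a page load
        appCache.update();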

    Read the article

  • Is Perl still a useful, viable language?

    - by Bob
    I know it may have been asked before, but here goes nothing... Is Perl still something that would be considered useful? If someone was a new programmer (either completely new to programming or with just a few months/years of experience), would Perl be considered worthwhile to learn? Is Perl still used with frequency? Is it still popular? Or is Perl dying out compared to languages like Python, Ruby, PHP, ASP, .NET, etc.? Basically it boils down to this: Is it still used/is it still used frequently? If yes, is it dying? If no, will it make a comeback? Is it something that would be worth learning? How does it compare in demand to languages like Python in both popularity and usability/viability? Could languages like Python or Ruby be considered replacements for Perl? Also, will newer versions of Perl really bring a large improvement to the Perl community, and perhaps bring Perl back to center stage compared to other languages? EDIT: Okay, I suppose here's a better, reworded question: Is Perl still growing, or is it "dying"? Is it still a language worth learning and using? What projects does it really "shine" in compared to other languages? What makes Perl a language to choose? Essentially: is Perl growing obsolete compared to other languages, and if so, do you expect that to change, or to continue? And thank you to everyone who has answered so far; the discussion has been really interesting!

    Read the article

  • Does LearnDevNow offer a useful subscription for .NET-related training, with video and text reference material?

    - by user766926
    Does LearnDevNow offer a useful subscription for .NET video and text reference training material, or to learn .NET? I could not find a review of www.learnnowonline.com/learndevnow. I've been doing research about places that offer .NET / ASP.NET / C# / LINQ / SQL / jQuery video training that provide excellent material for developers - kind of like a Lynda.com for back-end development. However, I have not found one place that offers quality material that is easily accessible for the user and that is priced competitively. This is what I'm looking for:

      • Online video tutorials with free future updates (videos that can be accessed on any device, i.e. Android portable devices, Mac/iPad, Linux machines)
      • Printable courseware (i.e. PDFs, so that you can take notes, print if necessary, and read on a tablet in case you don't have internet access)
      • Labs, and easy-to-access code
      • Pre/post assessments/exams for each training course to keep track of your progress and what you have learned/skills (AppDev used to offer this, but it was way too expensive - thousands for each training course, i.e. 1k for each; I paid about five thousand at AppDev, and now I regret that purchase - and after a few years/months the training became obsolete/outdated)

    I looked at learnnowonline.com/learndevnow, but it seems that their courseware / text reference material can only be accessed online, and I don't know if their videos work on all mobile devices and browsers (Safari, Chrome, IE, Opera, Firefox). Also, it seems AppDev no longer exists and is now called LearnNowPlus: www.learnnowonline.com/appdev. That site no longer lists prices. I tried to find reviews but could not find any. Does anyone know, or can anyone share a review of, some of these types of online training service providers? I would appreciate your feedback on this, and if you can share your past experiences with their service or similar/better services, that would be even better. Thanks.

    Read the article

  • Best way to start in C# and Raknet?

    - by cad
    I am trying to learn RakNet from C# and I find it extremely confusing. The RakNet tutorial seems to work easily and nicely in C++, and I have already made some chat server code from the tutorial. But when I look to do something similar in C#, I find a mess:

      • It seems that I need to compile RakNet using SWIG to get something like an interface?
      • Also, I have found a project called raknetdotnet, but it seems abandoned (http://code.google.com/p/raknetdotnet/).

    So my main question is: what is the best way to code in C# using RakNet? As secondary questions: Can anyone recommend a good tutorial on RakNet AND C#? Is there any sample C# code that I can download? I have read lots of pages but didn't get anything clear, so I hope someone who has lived through this before can help me. Thanks. PS: Maybe RakNet is obsolete (I find a lot of code and posts from 2007) and there is a better tool to achieve what I want. (I am interested in making a game with a dedicated server.)

    Read the article

  • Should we always prefer OpenGL ES version 2 over version 1.x

    - by Shivan Dragon
    OpenGL ES version 2 goes a long way toward changing the development paradigm that was established with OpenGL ES 1.x: you have shaders which you can chain together to apply various effects/transforms to your elements, the projection and transformation matrices work completely differently, etc. I've seen a lot of online tutorials and blogs that simply say "ditch version 1.x, use version 2, that's the way to go". Even Android's documentation says to "use version 2 as it may prove faster than 1.x". Now, I've also read a book on OpenGL ES (which was rather good, but I'm not gonna mention it here because I don't want to give the impression that I'm trying to make hidden publicity). The guy there treated only OpenGL ES 1.x for 80% of the book, and then at the end only listed the differences in version 2 and said something like "if OpenGL ES 1 does what you need, there's no need to switch to version 2, as it's only gonna overcomplicate your code. Version 2 was changed a lot to facilitate newer, fancier stuff, but if you don't need it, version 1.x is fine". My question then is: is that last statement right? Should I always use OpenGL ES version 1.x if I don't need version-2-only stuff? I'd sure like to do that, because I find coding in version 1.x A LOT simpler than version 2, but I'm afraid that my apps might become obsolete faster for using an older version.

    Read the article

  • ASP.NET 4.0 - Compatibility Settings for rendering controls

    - by Jalpesh P. Vadgama
    With ASP.NET 4.0, Microsoft has taken a great step forward in rendering controls. There are lots of enhancements for rendering HTML controls in ASP.NET 4.0: all controls, like Menu, ListView, and others, now render much cleaner HTML. But recently I faced a strange problem in rendering controls. I had my site in ASP.NET 3.5 and I wanted to convert it to ASP.NET 4.0. I had applied my styles as per the 3.5 rendering, and some of the items are obsolete in ASP.NET 4.0. Modifying the style sheet would have been a tedious job; this is where the ASP.NET 4.0 compatibility setting comes in to help. The ASP.NET 4.0 compatibility setting provides full backward compatibility in terms of rendering controls. You can assign it in your web.config like the following:

        <system.web>
            <pages controlRenderingCompatibilityVersion="3.5|4.0"/>
        </system.web>

    Here the value of controlRenderingCompatibilityVersion is a string which indicates how controls should render in the browser: if you provide 4.0, then controls render with the cleaner HTML, while if you want to go with the old legacy rendering, you can put 3.5 and controls will render the same way they did in ASP.NET 3.5. Hope this helps you!!!

    Technorati Tags: ASP.NET 4.0, controlRenderingCompatibility

    Read the article

  • Using old code on new version of Visual Studio [migrated]

    - by Tu Tran
    I have a project which was started in the 90s in C/C++. Therefore, it contains many old coding styles such as K&R-style function declarations, obsolete functions, etc. The project works fine in Visual Studio 2008, but now I want to use it in a newer version of Visual Studio (specifically VS 2010) because we have other projects in Visual Studio 2010/2012 and I don't want to have too many versions of Visual Studio on my machine. When I try to compile the old project, Visual Studio throws too many errors. I can fix all of them, but I am scared to edit the source code, and I want other people to be able to open it in the old version of VS too; the project should remain backward compatible with VS. My question is how to use the old code in Visual Studio 2010/2012 without changing the code. Or, if necessary, how do I fix just a few lines of code while making sure it won't cause an error if someone else opens that code in an older version of VS? Is there a way to tell newer Visual Studio versions to use older compiler flags or something like that?
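
    If editing the code is off the table, one option worth trying (a sketch, assuming VS 2008 stays installed side by side) is to open the project in VS 2010/2012 but keep compiling with the old toolchain by setting the platform toolset in the converted .vcxproj; v90 maps to the VS 2008 compiler:

        <!-- inside each configuration's PropertyGroup in the .vcxproj -->
        <PropertyGroup Condition="'$(Configuration)|$(Platform)'=='Debug|Win32'" Label="Configuration">
          <PlatformToolset>v90</PlatformToolset>
        </PropertyGroup>

    Since older VS versions cannot open .vcxproj files, keeping the original .vcproj in the repository alongside the converted project is the usual way to let colleagues stay on the old IDE.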

    Read the article

  • What are the disadvantages of automated testing?

    - by jkohlhepp
    There are a number of questions on this site that give plenty of information about the benefits that can be gained from automated testing. But I didn't see anything that represented the other side of the coin: what are the disadvantages? Everything in life is a tradeoff and there are no silver bullets, so surely there must be some valid reasons not to do automated testing. What are they? Here are a few that I've come up with:

      • Requires more initial developer time for a given feature
      • Requires a higher skill level of team members
      • Increased tooling needs (test runners, frameworks, etc.)
      • Complex analysis required when a failed test is encountered - is this test obsolete due to my change, or is it telling me I made a mistake?

    Edit: I should say that I am a huge proponent of automated testing, and I'm not looking to be convinced to do it. I'm looking to understand what the disadvantages are so that when I go to my company to make a case for it, I don't look like I'm throwing around the next imaginary silver bullet. Also, I'm explicitly not looking for someone to dispute my examples above. I am taking it as true that there must be some disadvantages (everything has trade-offs) and I want to understand what those are.

    Read the article

  • Grid Infrastructure Management Repository (GIMR) database now mandatory in Oracle GI 12.1.0.2

    - by Mike Dietrich
    During the installation of Oracle Grid Infrastructure 12.1.0.1 you had the option to choose YES/NO to install the Grid Infrastructure Management Repository (GIMR) database MGMTDB. With Oracle Grid Infrastructure 12.1.0.2 this choice has become obsolete and the above screen does not appear anymore. The GIMR database has become mandatory. What gets stored in the GIMR? See the documentation here, and see the changes in Oracle Clusterware 12.1.0.2 here:

    Automatic Installation of Grid Infrastructure Management Repository

    The Grid Infrastructure Management Repository is automatically installed with Oracle Grid Infrastructure 12c release 1 (12.1.0.2). The Grid Infrastructure Management Repository enables such features as Cluster Health Monitor, Oracle Database QoS Management, and Rapid Home Provisioning, and provides a historical metric repository that simplifies viewing of past performance and diagnosis of issues. This capability is fully integrated into Oracle Enterprise Manager Cloud Control for seamless management.

    Furthermore, what the doc doesn't say explicitly:

      • The -MGMTDB has now become a single-tenant deployment: a CDB with one PDB
      • This will allow the use of a Utility Cluster that can hold the CDB for a collection of GIMR PDBs
      • If you already had an Oracle 12.1.0.1 GIMR, this database will be destroyed and recreated
      • Preserving the CHM/OS data can be achieved with oclumon by dumping it out into node view
      • The data files associated with it will be created within the same disk group as OCR and VOTING; in a future release there may be an option offered to put it into a separate disk group

    Some important MOS Notes:

      • MOS Note 1568402.1 - FAQ: 12c Grid Infrastructure Management Repository, states there's no supported procedure to enable the Management Database once the GI stack is configured
      • MOS Note 1589394.1 - How to Move GI Management Repository to Different Shared Storage (shows how to delete and recreate the MGMTDB)
      • MOS Note 1631336.1 - Cannot delete Management Database (MGMTDB) in 12.1

    -Mike

    Read the article

  • New META TAGS with positive effects for seo ranking in 2011 and beyond

    - by Sam
    Hi all, I'm trying to make an up-to-date chart of meta tags, for all of us, with their purposes, their use, and their good (or bad) effects on search engines/being found. Also, does anybody know new/promising meta tags? I will add yours to my list, so this chart is the result of live discussion and up to date. Also, it would be creative to invent your own useful meta tag, because we are the ones making the web - or aren't we?

    LEGEND
    P PURPOSE? What does this meta tag do in 2011, if anything
    N NECESSARY? Does every site really need it or not?
    G GOOD? Whether it will have a good effect on your site being found
    I INVENTED meta tag - who knows, it may be accepted in a year!

    META "METANAME" = PURPOSE? - NECESSARY? - GOOD EFFECT?

    #### important
    meta "title" = P concise summary + teaser - N very - G extremely
    meta "description" = P description + teaser - N yes - G very
    meta "robots" = P if needed, to skip default dmoz/Yahoo dir listing - N no - G ?

    #### new & promising! Thanks for input (John)
    meta "original-source" = P url of whoever broke the news gets credit - N ? - G ?
    meta "syndication-source" = P url for syndication of published news - N ? - G ?
    meta "canonical" = P ? - N ? - G ?

    #### seems obsolete
    meta "keywords" = P some keywords - N+G not for Google, but Yahoo likes them
    meta "language" = P overrule guesswork by defining the language - N no - G ?
    meta "page-topic" = P topic/theme - N ? - G ?
    meta "abstract" = P short summary - N ? - G ?
    meta "copyright" = ?

    #### invented by me
    meta "audience" = P filtered audience: "+seniors, +parents, -children, -youth"
    meta "mood" = P specifies textual style: "discussion, informative, commercial, sexual, fictional, scientific, romantic, therapeutic, technical"

    Read the article

  • Google Fonts API JSON Data in WordPress Options-Framework-Theme

    - by Rob
    I'm developing a child theme off of the new Twenty Twelve theme using WordPress 3.4.2 and the development version of the Options Theme Framework by Devin Price. In Devin's tutorial, it shows a way to implement 15 Google Web Fonts into the Theme Options page, but not all of them (roughly 560). I know I can create a "manual list", like in the tutorial, that states each one with fallbacks, but this is time-consuming and unproductive, as Google may add to, update, change, or remove some of these fonts from their list. The list I've created above will ultimately show the user fonts that are no longer available in the drop-down menu, and it won't have any new ones - making the list and some selections obsolete. On the Google Developer API Web Fonts page, it talks briefly about retrieving a "dynamic list" using JSON/JavaScript. I was wondering how I would be able to pull the Google Web Fonts API into my WordPress Theme Options page so that I'm not creating my own list or having to constantly release an update to solve this issue. Could someone please walk me through what I would need to paste into my options.php, functions.php, /inc/options-framework.php file etc., or even into a new one, to implement this? I've also had a look at some screencasts, plugins, and tutorials on how it works, but none of them are specific enough for people just starting out. Please keep in mind I'm not the best coder... Thank you.
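
    As a starting point, the full font list can be fetched as JSON from the Google Web Fonts Developer API; a minimal sketch follows (the API key is a placeholder, and the response would still need to be cached and parsed into the theme's options array):

        # returns a JSON document listing every available font family
        curl "https://www.googleapis.com/webfonts/v1/webfonts?key=YOUR_API_KEY&sort=alpha"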

    Read the article

  • Good resources for learning Rails?

    - by Bobby Tables
    I just finished working through Peter Cooper's "Beginning Ruby". So now I've got a reasonable grounding in the Ruby language and would like to move onto learning Rails. This question's answers give some good pointers, but I'd like to hear some specific reviews of books and online materials. I generally learn best by working through books with good practical/technical examples AND some passive reading content that breaks up the study between practical and reading sessions (this is what made "Beginning Ruby" great for me), but I'm worried that RoR is evolving fast and that any printed book I order might be obsolete by the time I get it and work through it. Is this a fair worry? Or can anyone recommend a good Rails 3 book that should be up to date at least for the next year or so? Also, I had a brief look at some of the online resources from the other questions, and Rails for Zombies seems to get a lot of praise. Has anyone here actually used it as their introductory guide to Rails? Basically I'd like to hear first-hand accounts of people who went through this "Ruby-to-Rails" learning phase recently and which materials were useful to you.

    Read the article

  • Unit testing statically typed functional code

    - by back2dos
    I wanted to ask you people in which cases it makes sense to unit test statically typed functional code, as written in Haskell, Scala, OCaml, Nemerle, F#, or haXe (the last is what I am really interested in, but I wanted to tap into the knowledge of the bigger communities). I ask this because, from my understanding:

      • One aspect of unit tests is to have the specs in runnable form. However, when employing a declarative style that directly maps the formalized specs to language semantics, is it even actually possible to express the specs in runnable form in a separate way that adds value?
      • The more obvious aspect of unit tests is to track down errors that cannot be revealed through static analysis. Type-safe functional code is a good tool for coding extremely close to what your static analyzer understands; however, a simple mistake like using x instead of y (both being coordinates) in your code cannot be covered. Then again, such a mistake could also arise while writing the test code, so I am not sure whether it's worth the effort.
      • Unit tests do introduce redundancy, which means that when requirements change, the code implementing them and the tests covering this code must both be changed. This overhead of course is about constant, so one could argue that it doesn't really matter. In fact, in languages like Ruby it really doesn't compared to the benefits, but given how statically typed functional programming covers a lot of the ground unit tests are intended for, it feels like a constant overhead one could simply reduce without penalty.

    From this I'd deduce that unit tests are somewhat obsolete in this programming style. Of course such a claim can only lead to religious wars, so let me boil this down to a simple question: when you use such a programming style, to what extent do you use unit tests and why (what quality do you hope to gain for your code)? Or the other way round: do you have criteria by which you can qualify a unit of statically typed functional code as covered by the static analyzer, and hence in need of no unit test coverage?

    Read the article

  • Can Near Field Communications (NFC) Benefit your Supply Chain?

    - by Stephen Slade
    Leading firms continue to leverage the latest tools and technologies to drive performance, especially around minimizing transaction costs. With razor-thin margins in manufacturing and distribution, the leading producers are resorting to Near Field Communications to gain efficiencies. In this week's CIO magazine (Apr 1, 2012, pg. 30, see http://www.cio.com), Lauren Brousell talks about the things you need to know to make a more informed decision with NFC. Sandy Shen of Gartner says NFC appeals because "it supports any services that requires data transfer and authentication".

      1. NFC is Cheap and Easy - a short-range transmitting technology connecting smartphones for data transfer.
      2. Adoption Seems Inevitable - more merchants will use NFC for payments in the future. Wallets are becoming obsolete.
      3. It's a Hot Potato for Enterprise - businesses, credit card companies, and cell phone providers are debating who handles the billing process.
      4. It's in Use Overseas - Japan uses FeliCa to pay by smartphone. In the US, billing agreements are causing territorial conflict.
      5. Security Risks Come Standard - as people lose handheld devices, security will be an ongoing concern. Credentials and timeout features can alleviate this to some extent.

    My prediction: in 5 years, we won't have wallets in our pockets. Our secure and all-powerful smartphones will be our electronic portable banks, executing transactions for us based on our preferences and propensities and feeding them electronically into the supply chain.

    Read the article

  • Packages not showing up in created APT repository

    - by David
    I created an APT repository using dpkg-scanpackages, and it seemed to go well. When I did an apt-get update on another server, the Packages.gz file was retrieved, and all seemed well - until I went to search for the packages contained in that repository (all packages are created locally). Several recommendations suggested reprepro; I tried that. Same result - except I had to rebuild the packages with the Priority and Section lines in the control file (nothing says this anywhere). The reprepro utility also generates a complicated directory structure, which required rewriting the repository entry on the requesting server. I then found that the arch directory referenced i386 and not amd64 (which was requested by the requesting server). Is it possible that the AMD64 system isn't seeing packages compiled for i386? Searching the *Packages files in /var/lib/apt/lists shows that the only packages for i386 are those I added (the other files are for the server - Ubuntu 10.04.2 LTS). The server the packages were built on is Ubuntu 10.04.2 LTS i686; the requesting server is x86_64. I found some discussion at the Debian AMD64FAQ but it claims to be obsolete. It makes mention of an extended syntax for repository listings for APT, and a command dpkg-subarchitecture - neither of which works on the local AMD64 server. Do I have to build two different sets of packages?
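
    For reference, a minimal sketch of the flat-repository dpkg-scanpackages flow (paths and URLs are placeholders). Note that, by default, an amd64 client will only install packages whose Architecture field is amd64 or all, so i386 builds from an i686 machine would indeed be invisible to it:

        # on the build server: index all .deb files under the repository root
        cd /var/www/repo
        dpkg-scanpackages . /dev/null | gzip -9c > Packages.gz

        # on the requesting server: flat-repo sources.list entry, then refresh
        echo "deb http://repo.example.com/repo ./" >> /etc/apt/sources.list
        apt-get update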

    Read the article

  • JEE frameworks, a road map to learn? and should I learn them?

    - by vibhor
    Background Information

    I have been programming professionally for the past year; my day-to-day work includes writing BIRT reports and designing and validating forms using JEE (Struts/Spring, Hibernate). I don't have a 4-year comp sci degree (mine is in Electronics), so I have very limited experience in comp sci.

    Question

    JEE frameworks (Struts 1/2, Spring, Hibernate, etc.) are hot nowadays; however, the Java world has a tendency to build A4j, B4J... mayway4J kinds of stuff (and I am tired of it). AFAIK, frameworks are nothing but a bunch of XML config files and hundreds of classes to be crammed (by the developer). And sooner rather than later a new framework comes into the picture claiming to be the best of all. So my questions are:

    1. What will you do to learn a framework (many frameworks), considering that it can be obsolete by the time you master it (learning frameworks can take a significant amount of time)?
    2. Considering it's early in my career, will anyone give a damn how well someone knows a framework (knowing a framework is important, but still...), and why/how should I learn a framework knowing I'll have to (un)learn it in order to learn the next one (plenty of 4Js....)?

    I am just trying to get the big picture: if you were in my place, what would be your learning/cramming strategy (road map)? I don't intend to start a holy war between A versus B (frameworks are more or less essential).

    Read the article

  • developers-designers-testers interaction [closed]

    - by user29124
    Sorry for my bad English - and you may not want to read this and waste your time, because it is just the lament of a layman developer... It seems no one wants to learn anything at my workplace. We have the Mantis bug tracker, but our testers use Google Docs for reports, and only developers and the team lead report bugs in Mantis. We have SVN for version control and use Smarty as the template system, but our designers give us pure HTML (sometimes it's ugly for programmers, but mostly it's OK) in archives, and changes to the design made by programmers go nowhere (I mean designers use their own obsolete HTML and CSS most of the time). We have a testing environment, but designers don't have restricted-account access to it, so we can only ask them where to look for the problem, then investigate the problem ourselves and make changes to the CSS ourselves (which go nowhere most of the time...). I will not mention legacy code without documentation, tests, or any requirements - just the absence of real interaction in the programmers-designers-testers triangle. I'm not talking about using HAML, SASS, continuous integration, or something else, just about the use of basic tools by all participants of the development process. Maybe the absence of communication is not a problem on short projects that finish in 2 months' time, but it is on the types of projects that last for years. Any comments please...

    Read the article

  • What is the correct way to restart udev in Ubuntu?

    - by zerkms
    I've changed the name of my eth1 interface to eth0. How do I ask udev to re-read the config now? service udev restart and udevadm control --reload-rules don't help. So is there any valid way except rebooting? (Yes, reboot helps with this issue.)

    UPD: yes, I know I should prepend the commands with sudo, but either one I posted above changes nothing in the ifconfig -a output: I still see eth1, not eth0.

    UPD 2: I just changed the NAME property of the udev rule line. I don't know any reason for this to be ineffective. There is no error in executing either of the commands I've posted above, but they just don't change the actual interface name in the ifconfig -a output. If I reboot, the interface name changes as expected.

    UPD 3: let me explain the whole case better ;-) For development purposes I'm writing a script that clones virtual machines (VirtualBox-driven) and pre-sets them up in some way. So I perform a command to clone a VM and start it, and since the network interface MAC has changed, udev adds a second rule to the persistent network rules. Right after the machine is booted for the first time there are 2 rules:

      • eth0, which does not exist, since its rule carries the MAC from the original VM image
      • eth1, which exists, but all the configuration in all files refers to eth0, so that is no good for me

    So with sed I delete the line with eth0 (it is obsolete and useless in the cloned image) and rename eth1 to eth0. So currently I have a valid persistent rule, but there is still eth1 in /dev. The issue: I don't want to reboot the machine (it takes extra time, which is not a good thing at the VM-building stage) and just want to have my /dev rebuilt with some command so I have a ready-to-use VM without any reboots.
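
    A sketch of one way to finish the rename without a reboot, using iproute2 directly after fixing the rules file (interface names as in the question):

        # pick up the rewritten persistent-net rules
        sudo udevadm control --reload-rules

        # take the NIC down, rename it, bring it back up
        sudo ip link set eth1 down
        sudo ip link set eth1 name eth0
        sudo ip link set eth0 up

        # alternatively, ask udev to replay "add" events for network devices
        sudo udevadm trigger --subsystem-match=net --action=add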

    Read the article

  • Which .NET REST approach/technology/tool should I use?

    - by SonOfPirate
    I am implementing a RESTful web service and several client applications that are mostly in Silverlight. I am finding a litany of options for developing both the server-side and client-side of the API but am not sure which is the best approach. I'm concerned about stability as well as a platform that will continue to exist a few months from now. We started using the REST Starter Kit with .NET 3.5 but moved to the new WCF Web API when updating to .NET 4.0. All of their documentation indicates that WCF Web API is the replacement for the RSK. However, Web API is only in Preview 4 and does not include support for Silverlight or Windows Phone 7 clients (yet). WCF Web API looks like a wrapper on top of the WCF WebHttp Services stuff provided in the System.ServiceModel.Web library which makes me think that maybe it would be simpler to just go with the built-in stuff but Web API does offer some nice features. I am specifically tied-up trying to determine the best course for the client-side. My main requirement is that I need to support deserializing into my client-side objects quickly and easily. The Web API offers a nice client library but doesn't have a Silverlight version. I'd like to use the latest approach and the toolset that is being actively developed and supported. Is the REST Starter Kit really obsolete? Has anyone had any success implementing the WCF Web API toolkit? Is there merit to using either of these over the built-in WCF WebHttp Services features found in System.ServiceModel.Web? Is there a single solution that works for any client (web, Silverlight, etc.)? What suggestions do you have?

    Read the article
