Search Results

Search found 10293 results on 412 pages for 'history js'.


  • JS file called twice in Wordpress

    - by Dxr Tw
    I'm trying to reduce the number of requests by reducing the number of JS files on a WP site. I successfully combined around 7 JavaScript files into one (site.js). Now I'm using a plugin which has its own JS file (pluginA.js), and I want to include pluginA.js in that site.js. However, if I simply copy the pluginA JS content into site.js and then change the location to /files/site.js, Firebug's Net tab shows that site.js is requested/called twice. I presume this is due to wp_enqueue_script. How can I make it not call site.js a second time, but just look into the already loaded site.js? Maybe there's an alternative to wp_enqueue_script? Plugin's PHP file:

    add_action( 'wp_enqueue_scripts', 'testplugin_scripts');
    function testplugin_scripts() {
        /*global $testplugin_version; */
        $default_selector = 'li:has(ul) > a';
        $default_selector_leaf = 'li li li:not(:has(ul)) > a';
        wp_enqueue_scripts('test-plugin', site_url('/files/site.js', __FILE__), array('jquery'), $testplugin_version);
        $params = array(
            'selector' => apply_filters('testplugin_selector', $default_selector),
            'selector_leaf' => apply_filters('testplugin_selector_leaf', $default_selector_leaf)
        );
        wp_localize_script('test-plugin', 'testplugin_params', $params);
    }

    Read the article

  • Rewriting git history to convert master branch to development branch?

    - by gct
    I'm looking to rewrite my git repo to use a new branching model I came across: http://nvie.com/git-model But right now all my history lives in the master branch. I'd like to rewrite it (possibly using git-filter-branch?) so that all that history is in a branch called development now. Is this possible? It's definitely beyond my limited git skills.

    Read the article

  • How do I host node.js apps with pm2 without running them as root?

    - by jishi
    I have set up pm2 to run a node.js application, and I can successfully start it and it will resurrect upon reboot. However, the pm2 daemon is run as root, which makes me think that all my node scripts also run as root, even though I added them as a regular user in the system. The log files and so on are created in the user's home dir, ~/.pm2/logs, but the logs are owned by root. When I invoke pm2 startup (which handles the installation of the init.d script etc.), it creates /etc/init.d/pm2-init.sh, which looks like this:

    #!/bin/bash
    # chkconfig: 2345 98 02
    #
    # description: PM2 next gen process manager for Node.js
    # processname: pm2
    #
    ### BEGIN INIT INFO
    # Provides: pm2
    # Required-Start:
    # Required-Stop:
    # Should-Start:
    # Should-Stop:
    # Default-Start: 2 3 4 5
    # Default-Stop: 0 1 6
    # Short-Description: PM2 init script
    # Description: PM2 is the next gen process manager for Node.js
    ### END INIT INFO

    NAME=pm2
    PM2=/usr/local/lib/node_modules/pm2/bin/pm2
    NODE=/usr/local/bin/node
    export HOME="/root"

    start() {
        echo "Starting $NAME"
        $NODE $PM2 stopAll
        $NODE $PM2 resurrect
    }

    stop() {
        $NODE $PM2 dump
        $NODE $PM2 stopAll
    }

    restart() {
        echo "Restarting $NAME"
        stop
        start
    }

    status() {
        echo "Status for $NAME:"
        $NODE $PM2 list
        RETVAL=$?
    }

    case "$1" in
        start)
            start
            ;;
        stop)
            stop
            ;;
        status)
            status
            ;;
        restart)
            restart
            ;;
        *)
            echo "Usage: {start|stop|status|restart}"
            exit 1
            ;;
    esac
    exit $RETVAL

    When I dump the processes (which is what it will use when resurrecting the processes), I see mentions of user "USER":"pi", but I don't think that it's actually run as user pi. Any thoughts?

    Read the article

  • WebLogic history an interview with Laurie Pitman by Qualogy

    - by JuergenKress
    In all the years that I have been working with WebLogic, the BEA and Oracle eras are the best known, with WebLogic evolving into a worldwide enterprise platform for Java applications used by multinationals around the globe. But how did it all begin? Apart from the sparse info you find on some Internet pages, I was eager to hear it in person from one of the founders of WebLogic back in 1995, before the BEA era: Laurie Pitman. Four young people, Carl Resnikoff, Paul Ambrose, Bob Pasker, and Laurie Pitman, became friends and colleagues around the time of the first release of Java in 1995. Between the four of them, they had an MA in American history, an MA in piano, an MS in library systems, a BS in chemistry, and a BS in computer science. They had come together kind of serendipitously, interested in building some web tools exclusively in Java for the emerging Internet web application market. They found many things to like about each other, some overlap in their interests, but also a lot of well-placed differences which made a partnership particularly interesting. They made it formal in January 1996 by incorporating. Read the complete article here. WebLogic Partner Community: for regular information, become a member of the WebLogic Partner Community; please visit http://www.oracle.com/partners/goto/wls-emea (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum Wiki Technorati Tags: WebLogic history, Qualogy, WebLogic, WebLogic Community, Oracle, OPN, Jürgen Kress

    Read the article

  • Learn How Ancestry.com Helps Families Uncover Their History with Oracle WebCenter

    - by Christie Flanagan
    Delivering Exceptional Online Customer Experiences. Ancestry.com is the world’s largest online family history resource, providing an engaging and interactive customer experience to more than 1.7 million members. With smart search technology, a wealth of learning resources, and a worldwide community of family history enthusiasts, Ancestry.com helps people discover their roots and tell their unique family stories. Key to Ancestry.com’s success has been the delivery of an online customer experience that converts site visitors into paying subscribers and keeps them coming back. To help achieve this goal, Ancestry.com turned to Oracle’s Web experience management solution, Oracle WebCenter Sites. Join us as executives from Ancestry.com and Oracle discuss how Oracle’s Web experience management solution is helping them deliver engaging online experiences. Learn how: Ancestry.com selected Oracle WebCenter Sites to meet their demanding Web experience management requirements; the company was able to get up and running quickly despite a complex technology stack and challenging integration requirements with legacy systems; and Ancestry.com empowered business users to manage the online experience and significantly reduce time to market for their online campaigns and initiatives. Register now for the Webcast. REGISTER NOW Thursday, June 28, 2012, 10 a.m. PT / 1 p.m. ET Presented by: Blane Nelson, Chief Architect – Applications, Ancestry.com; Christie Flanagan, Director of Product Marketing, Oracle WebCenter Sites, Oracle

    Read the article

  • Concatenate & Minify JS on the fly OR at build time - ASP.NET MVC

    - by Charlino
    As an extension to this question here, Linking JavaScript Libraries in User Controls, I was after some examples of how people are concatenating & minifying JavaScript on the fly OR at build time. I would also like to see how it then works into your master pages. I don't mind page-specific files being minified and linked individually as they currently are (see below), but all the js files on the main master page (I have about 5 or 6) I would like concatenated and minified. Bonus points for anyone who also incorporates CSS concatenation & minification! :-) Current master page with the common js files that I would like concatenated & minified:

    <%@ Master Language="C#" Inherits="System.Web.Mvc.ViewMasterPage" %>
    <head runat="server">
        ... BLAH ...
        <asp:ContentPlaceHolder ID="AdditionalHead" runat="server" />
        ... BLAH ...
        <%= Html.CSSBlock("/styles/site.css") %>
        <%= Html.CSSBlock("/styles/jquery-ui-1.7.1.css") %>
        <%= Html.CSSBlock("/styles/jquery.lightbox-0.5.css") %>
        <%= Html.CSSBlock("/styles/ie6.css", 6) %>
        <%= Html.CSSBlock("/styles/ie7.css", 7) %>
        <asp:ContentPlaceHolder ID="AdditionalCSS" runat="server" />
    </head>
    <body>
        ... BLAH ...
        <%= Html.JSBlock("/scripts/jquery-1.3.2.js", "/scripts/jquery-1.3.2.min.js") %>
        <%= Html.JSBlock("/scripts/jquery-ui-1.7.1.js", "/scripts/jquery-ui-1.7.1.min.js") %>
        <%= Html.JSBlock("/scripts/jquery.validate.js", "/scripts/jquery.validate.min.js") %>
        <%= Html.JSBlock("/scripts/jquery.lightbox-0.5.js", "/scripts/jquery.lightbox-0.5.min.js") %>
        <%= Html.JSBlock("/scripts/global.js", "/scripts/global.min.js") %>
        <asp:ContentPlaceHolder ID="AdditionalJS" runat="server" />
    </body>

    Used in a page like this (which I'm happy with):

    <asp:Content ID="signUpContent" ContentPlaceHolderID="AdditionalJS" runat="server">
        <%= Html.JSBlock("/scripts/pages/account.signup.js", "/scripts/pages/account.signup.min.js") %>
    </asp:Content>

    EDIT: What I'm using now. Since asking this question, Microsoft have released their own JS & CSS compression library called Microsoft AJAX Minifier; I'd definitely recommend checking it out. It includes MSBuild tasks, which are the duck's nuts.

    Read the article

  • NodeJS and node-mongodb-native

    - by w1nk
    Just getting started with node, and trying to get the mongo driver to work. I've got my connection set up, and oddly I can insert things just fine, however calling find on a collection produces craziness. var db = new mongo.Db('things', new mongo.Server('192.168.2.6',mongo.Connection.DEFAULT_PORT, {}), {}); db.open(function(err, db) { db.collection('things', function(err, collection) { // collection.insert(row); collection.find({}, null, function(err, cursor) { cursor.each(function(err, doc) { sys.puts(sys.inspect(doc,true)); }); }); }); }); If I uncomment the insert and comment out the find, it works a treat. The inverse unfortunately doesn't hold, I receive this error: collection.find({}, null, function(err, cursor) { ^ TypeError: Cannot call method 'find' of null I'm sure I'm doing something silly, but for the life of me I can't find it...
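
    A minimal sketch of the same flow with an error check at each callback (the collection name and connection details follow the question; the key assumption is that when a callback's err argument is set, the collection/cursor argument is null, so checking err first surfaces the real failure instead of the "Cannot call method 'find' of null" TypeError):

    var sys = require('sys'); // legacy util module, as used in the question
    var mongo = require('mongodb');

    var db = new mongo.Db('things',
        new mongo.Server('192.168.2.6', mongo.Connection.DEFAULT_PORT, {}), {});

    db.open(function(err, db) {
      if (err) { return sys.puts('open failed: ' + err); }
      db.collection('things', function(err, collection) {
        // if err is set here, collection is null -- calling find() on it
        // is exactly what produces the TypeError from the question
        if (err) { return sys.puts('collection failed: ' + err); }
        collection.find({}, null, function(err, cursor) {
          if (err) { return sys.puts('find failed: ' + err); }
          cursor.each(function(err, doc) {
            if (doc) { sys.puts(sys.inspect(doc, true)); }
          });
        });
      });
    });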

    Read the article

  • When do I need to use, or not use, .datum when appending an svg element

    - by Bobby Gifford
    svg = d3.select("#viz").append("svg").datum(data) //I often see .datum when an area chart is used. Are there any rules of thumb for when .datum is needed? var area = d3.svg.area() .x(function(d) { return x(d.x); }) .y0(height) .y1(function(d) { return y(d.y); }); var svg = d3.select("body").append("svg") .attr("width", width) .attr("height", height); svg.append("path") .datum(data) .attr("d", area);
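
    A rough rule of thumb (a sketch, not from the question itself): use .datum(data) when one element should be driven by the whole array — for example a single path whose "d" attribute is computed from every point by an area or line generator — and use .data(data) with a join when each element should get exactly one datum. Assuming the same x/y scales and area generator as above:

    // one <path> for the whole series: bind the entire array with .datum
    svg.append("path")
        .datum(data)
        .attr("d", area);

    // one <circle> per point: join the array with .data + enter
    svg.selectAll("circle")
        .data(data)
      .enter().append("circle")
        .attr("cx", function(d) { return x(d.x); })
        .attr("cy", function(d) { return y(d.y); })
        .attr("r", 2);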

    Read the article

  • custom helpers inside each block

    - by Unspecified
    myArray = [{name: "name1", age: 20}, {name: "name2", age:22}]; {{#each person in myArray}} {{#myHelper person}} Do something {{/myHelper}} {{/each}} Handlebars.registerHelper(function(context, options){ if(context.age > 18){ return options.fn(this); }else{ return options.inverse(this); } }) In the above code, when I tried to debug my custom helper, it shows the context as "person" while I want the context to be the person object. What's wrong with my code? I found a similar question here but did not get it either...
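
    For plain Handlebars (outside any framework's binding layer), a block helper has to be registered with a name, and the first argument it receives is whatever the template passes to it. A minimal sketch under that assumption — the helper name "ifAdult" is made up for illustration; if this template is actually compiled by Ember's Handlebars, a plain helper would receive the unresolved path string ("person") rather than the object, which would match the symptom described:

    // template: {{#each myArray}} {{#ifAdult this}} Do something {{/ifAdult}} {{/each}}
    Handlebars.registerHelper('ifAdult', function(context, options) {
      // context is the object passed in from the template, e.g. {name: "name1", age: 20}
      if (context.age > 18) {
        return options.fn(this);      // render the block body
      }
      return options.inverse(this);   // render the {{else}} block, if any
    });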

    Read the article

  • Backbone: Easiest way to maintain reference to 'this' for a Model inside callbacks

    - by Garrett
    var JavascriptHelper = Backbone.Model.extend("JavascriptHelper", {}, // never initialized as an instance { myFn: function() { $('.selector').live('click', function() { this.anotherFn(); // FAIL! }); }, anotherFn: function() { alert('This is never called from myFn()'); } } ); The usual _.bindAll(this, ...) approach won't work here because I am never initializing this model as an instance. Any ideas? Thanks.
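
    One common sketch is to capture the outer this in a local variable (or wrap the handler with _.bind) so the class-level object is still reachable inside the click callback — this keeps the static/class-level shape from the question and only changes the handler wiring:

    var JavascriptHelper = Backbone.Model.extend("JavascriptHelper", {},
      {
        myFn: function() {
          var self = this; // keep a reference to the class-level object
          $('.selector').live('click', function() {
            self.anotherFn(); // 'self' still points at JavascriptHelper's statics
          });
          // or equivalently: $('.selector').live('click', _.bind(this.anotherFn, this));
        },
        anotherFn: function() {
          alert('Called from myFn()');
        }
      }
    );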

    Read the article

  • What are the differences between these three patterns of "class" definitions in JavaScript?

    - by user1889765
    Are there any important/subtle/significant differences under the hood when choosing to use one of these three patterns over the others? And, are there any differences between the three when "instantiated" via Object.create() vs the new operator? The pattern that CoffeeScript uses when translating "class" definitions: Animal = (function() { function Animal(name) { this.name = name; } Animal.prototype.move = function(meters) { return alert(this.name + (" moved " + meters + "m.")); }; return Animal; })(); and The pattern that Knockout seems to promote: var DifferentAnimal = function(name){ var self = this; self.name = name; self.move = function(meters){ return alert(this.name + (" moved " + meters + "m.")); }; return {name:self.name, move:self.move}; } and The pattern that Backbone promotes: var OneMoreAnimal= ClassThatAlreadyExists.extend({ name:'', move:function(){} });
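
    One concrete difference worth spelling out (a sketch, not from the question): the first pattern puts move on Animal.prototype, so every instance shares one function, while the second builds a fresh move closure per object; and Object.create only sets up the prototype chain — it never runs the constructor body, so fields like name stay undefined unless the constructor is called separately:

    var a1 = new Animal("Rex");
    var a2 = new Animal("Fido");
    console.log(a1.move === a2.move);         // true  -- one shared prototype method

    var d1 = new DifferentAnimal("Rex");
    var d2 = new DifferentAnimal("Fido");
    console.log(d1.move === d2.move);         // false -- each object gets its own closure

    var a3 = Object.create(Animal.prototype); // constructor body never runs
    console.log(a3.name);                     // undefined -- no name was assigned
    a3.move(5);                               // alerts "undefined moved 5m."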

    Read the article

  • Interpolation on Cubism graphs

    - by Abe Stanway
    Cubism was designed, by mbostock's own words, for maximum information density - which means it generally wants to display one datapoint per pixel. While this is useful in many cases, it doesn't help when your data itself is not that dense. In these cases, you get ugly, staccato-style graphs like so: Is there a way to interpolate my data/graph within Cubism to show a nice, smoothed graph? EDIT: After adding keepLastValue to the metric, I get this: Here is the same data as shown in Graphite: I would like to smooth the Cubism view to look more like Graphite (with the added awesomeness of the horizon overplotting)

    Read the article

  • Unable to call views

    - by Scott
    I'm using Sails, and I'm having trouble when I attempt to call the login action of my UsersController. I know my routing is working, because the console.log successfully logs both the loginpassword and the loginname. However, res.view() doesn't work. Neither does returning res.view().

    module.exports = {
        create: function(req, res) { },
        destroy: function(req, res) { },
        login: function(req, res) {
            var loginname = req.param("loginname");
            var loginpassword = req.param("loginpassword");
            console.log(loginname + ' ' + loginpassword);
            res.view();
        },
        logout: function(req, res) { },
        _config: {}
    };

    I have a /views/user/login.ejs and all it currently contains is a header block with some test text, but I can't get that to render at all. Any thoughts?
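
    If Sails is resolving the default view from the controller identity, UsersController's login action would map to views/users/login.ejs (plural), which wouldn't match the views/user/login.ejs file described. One way to take the guesswork out — a sketch, assuming the file stays at views/user/login.ejs — is to hand res.view an explicit path:

    login: function(req, res) {
      var loginname = req.param("loginname");
      var loginpassword = req.param("loginpassword");
      console.log(loginname + ' ' + loginpassword);

      // point res.view at the template explicitly instead of relying on
      // the default controller/action lookup
      return res.view('user/login', { loginname: loginname });
    }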

    Read the article

  • SQL SERVER – SSMS: Database Consistency History Report

    - by Pinal Dave
    Doctor and Database The place I least like to visit is a hospital. With the monsoon season starting and intermittent rains, it has become sort of a routine to get a cycle of fever every other year (seriously, I hate it). So when I visit my doctor, it is always interesting the way he quizzes me. The routine questions – “How many days have you had this?”, “Is there any pattern?”, “Did you get drenched in the rain?”, “Do you have any other symptoms?” – and so on. The idea here is that the doctor wants to find an anomaly or a pattern that will guide him to a viral or bacterial type. Most of the time they get it based on experience, and sometimes after a battery of tests. So if there is consistent behavior to your problem, there is always a solution out there. SQL Server has its own way to find out whether the server data / files are in a consistent state, using the DBCC commands. Back to SQL Server In real life, a database consistency check is one of the critical operations a DBA generally doesn’t give much priority to. Many readers of my blogs have asked many times: how do we know if the database is consistent? How do I read the output of DBCC CHECKDB and find out if everything is right or not? My common answer to all of them is – look at the bottom of the checkdb (or checktable) output and look for the line below. CHECKDB found 0 allocation errors and 0 consistency errors in database ‘DatabaseName’. The above is a “good sign” because we are seeing zero allocation and zero consistency errors. If you are seeing non-zero errors then there is some problem with the database. Sample output is shown below: CHECKDB found 0 allocation errors and 2 consistency errors in database ‘DatabaseName’. repair_allow_data_loss is the minimum repair level for the errors found by DBCC CHECKDB (DatabaseName). If we see non-zero errors then most of the time (not always) we get repair options depending on the level of corruption. There is risk involved with the above option (repair_allow_data_loss), that is – we would lose the data. Sometimes the option would be repair_rebuild, which is a little safer. Though these options are available, it is important to find the root cause of the problem. Among the standard reports there is a report which can show the history of CHECKDB executions for the selected database. Since this is a database-level report, we need to right-click on the database, click Reports, click Standard Reports and then choose the “Database Consistency History” report. The information in this report is picked from the default trace. If the default trace is disabled, or there has been no checkdb run, or the information is no longer in the default trace (because it has rolled over), we would get a report like the one below. As we can see, the report says it very clearly: Currently, no execution history of CHECKDB is available or default trace is not enabled. To demonstrate, I caused corruption in one of the databases and did the steps below: run CheckDB so that errors are reported; fix the corruption by losing the data, using the repair option; run CheckDB again to check that the corruption is cleared. After that I launched the report, and below is what we would see. If you are lazy like me and don’t want to run the report manually for each database, then the query below is handy for producing the same report for all databases. This is the query that runs behind the scenes for the report. All I have done is remove the filter for the database name (at the end – highlighted).
    DECLARE @curr_tracefilename VARCHAR(500);
    DECLARE @base_tracefilename VARCHAR(500);
    DECLARE @indx INT;

    SELECT @curr_tracefilename = path FROM sys.traces WHERE is_default = 1;
    SET @curr_tracefilename = REVERSE(@curr_tracefilename);
    SELECT @indx = PATINDEX('%\%', @curr_tracefilename);
    SET @curr_tracefilename = REVERSE(@curr_tracefilename);
    SET @base_tracefilename = LEFT(@curr_tracefilename, LEN(@curr_tracefilename) - @indx) + '\log.trc';

    SELECT SUBSTRING(CONVERT(NVARCHAR(MAX), TEXTData), 36, PATINDEX('%executed%', TEXTData) - 36) AS command,
           LoginName,
           StartTime,
           CONVERT(INT, SUBSTRING(CONVERT(NVARCHAR(MAX), TEXTData), PATINDEX('%found%', TEXTData) + 6, PATINDEX('%errors %', TEXTData) - PATINDEX('%found%', TEXTData) - 6)) AS errors,
           CONVERT(INT, SUBSTRING(CONVERT(NVARCHAR(MAX), TEXTData), PATINDEX('%repaired%', TEXTData) + 9, PATINDEX('%errors.%', TEXTData) - PATINDEX('%repaired%', TEXTData) - 9)) AS repaired,
           SUBSTRING(CONVERT(NVARCHAR(MAX), TEXTData), PATINDEX('%time:%', TEXTData) + 6, PATINDEX('%hours%', TEXTData) - PATINDEX('%time:%', TEXTData) - 6) + ':'
             + SUBSTRING(CONVERT(NVARCHAR(MAX), TEXTData), PATINDEX('%hours%', TEXTData) + 6, PATINDEX('%minutes%', TEXTData) - PATINDEX('%hours%', TEXTData) - 6) + ':'
             + SUBSTRING(CONVERT(NVARCHAR(MAX), TEXTData), PATINDEX('%minutes%', TEXTData) + 8, PATINDEX('%seconds.%', TEXTData) - PATINDEX('%minutes%', TEXTData) - 8) AS time
    FROM ::fn_trace_gettable(@base_tracefilename, DEFAULT)
    WHERE EventClass = 22
      AND SUBSTRING(TEXTData, 36, 12) = 'DBCC CHECKDB'
      -- AND DatabaseName = @DatabaseName;

    Don’t get worried about the logic above. All it is doing is reading the trace files, parsing the entry below, and pulling out the information for the underlined words. DBCC CHECKDB (CorruptedDatabase) executed by sa found 2 errors and repaired 0 errors. Elapsed time: 0 hours 0 minutes 0 seconds. Internal database snapshot has split point LSN = 00000029:00000030:0001 and first LSN = 00000029:00000020:0001. Hopefully from now on you will run CHECKDB and understand the importance of it. As responsible DBAs I am sure you are already doing it; let me know how often you actually run it on your production environment. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL Tagged: SQL Reports

    Read the article

  • Tutorial: Getting Started with the NoSQL JavaScript / Node.js API for MySQL Cluster

    - by Mat Keep
    Tutorial authored by Craig Russell and JD Duncan  The MySQL Cluster team are working on a new NoSQL JavaScript connector for MySQL. The objectives are simplicity and high performance for JavaScript users: - allows end-to-end JavaScript development, from the browser to the server and now to the world's most popular open source database - native "NoSQL" access to the storage layer without going first through SQL transformations and parsing. Node.js is a complete web platform built around JavaScript designed to deliver millions of client connections on commodity hardware. With the MySQL NoSQL Connector for JavaScript, Node.js users can easily add data access and persistence to their web, cloud, social and mobile applications. While the initial implementation is designed to plug and play with Node.js, the actual implementation doesn't depend heavily on Node, potentially enabling wider platform support in the future. Implementation The architecture and user interface of this connector are very different from other MySQL connectors in a major way: it is an asynchronous interface that follows the event model built into Node.js. To make it as easy as possible, we decided to use a domain object model to store the data. This allows for users to query data from the database and have a fully-instantiated object to work with, instead of having to deal with rows and columns of the database. The domain object model can have any user behavior that is desired, with the NoSQL connector providing the data from the database. To make it as fast as possible, we use a direct connection from the user's address space to the database. This approach means that no SQL (pun intended) is needed to get to the data, and no SQL server is between the user and the data. The connector is being developed to be extensible to multiple underlying database technologies, including direct, native access to both the MySQL Cluster "ndb" and InnoDB storage engines. The connector integrates the MySQL Cluster native API library directly within the Node.js platform itself, enabling developers to seamlessly couple their high performance, distributed applications with a high performance, distributed, persistence layer delivering 99.999% availability. The following sections take you through how to connect to MySQL, query the data and how to get started. Connecting to the database A Session is the main user access path to the database. You can get a Session object directly from the connector using the openSession function: var nosql = require("mysql-js"); var dbProperties = {     "implementation" : "ndb",     "database" : "test" }; nosql.openSession(dbProperties, null, onSession); The openSession function calls back into the application upon creating a Session. The Session is then used to create, delete, update, and read objects. Reading data The Session can read data from the database in a number of ways. If you simply want the data from the database, you provide a table name and the key of the row that you want. For example, consider this schema: create table employee (   id int not null primary key,   name varchar(32),   salary float ) ENGINE=ndbcluster; Since the primary key is a number, you can provide the key as a number to the find function. function onSession = function(err, session) {   if (err) {     console.log(err);     ... error handling   }   session.find('employee', 0, onData); }; function onData = function(err, data) {   if (err) {     console.log(err);     ... 
error handling   }   console.log('Found: ', JSON.stringify(data));   ... use data in application }; If you want to have the data stored in your own domain model, you tell the connector which table your domain model uses, by specifying an annotation, and pass your domain model to the find function. var annotations = new nosql.Annotations(); function Employee = function(id, name, salary) {   this.id = id;   this.name = name;   this.salary = salary;   this.giveRaise = function(percent) {     this.salary *= percent;   } }; annotations.mapClass(Employee, {'table' : 'employee'}); function onSession = function(err, session) {   if (err) {     console.log(err);     ... error handling   }   session.find(Employee, 0, onData); }; Updating data You can update the emp instance in memory, but to make the raise persistent, you need to write it back to the database, using the update function. function onData = function(err, emp) {   if (err) {     console.log(err);     ... error handling   }   console.log('Found: ', JSON.stringify(emp));   emp.giveRaise(0.12); // gee, thanks!   session.update(emp); // oops, session is out of scope here }; Using JavaScript can be tricky because it does not have the concept of block scope for variables. You can create a closure to handle these variables, or use a feature of the connector to remember your variables. The connector api takes a fixed number of parameters and returns a fixed number of result parameters to the callback function. But the connector will keep track of variables for you and return them to the callback. So in the above example, change the onSession function to remember the session variable, and you can refer to it in the onData function: function onSession = function(err, session) {   if (err) {     console.log(err);     ... error handling   }   session.find(Employee, 0, onData, session); }; function onData = function(err, emp, session) {   if (err) {     console.log(err);     ... error handling   }   console.log('Found: ', JSON.stringify(emp));   emp.giveRaise(0.12); // gee, thanks!   session.update(emp, onUpdate); // session is now in scope }; function onUpdate = function(err, emp) {   if (err) {     console.log(err);     ... error handling   } Inserting data Inserting data requires a mapped JavaScript user function (constructor) and a session. Create a variable and persist it: function onSession = function(err, session) {   var data = new Employee(999, 'Mat Keep', 20000000);   session.persist(data, onInsert);   } }; Deleting data To remove data from the database, use the session remove function. You use an instance of the domain object to identify the row you want to remove. Only the key field is relevant. function onSession = function(err, session) {   var key = new Employee(999);   session.remove(Employee, onDelete);   } }; More extensive queries We are working on the implementation of more extensive queries along the lines of the criteria query api. Stay tuned. How to evaluate The MySQL Connector for JavaScript is available for download from labs.mysql.com. Select the build: MySQL-Cluster-NoSQL-Connector-for-Node-js You can also clone the project on GitHub Since it is still early in development, feedback is especially valuable (so don't hesitate to leave comments on this blog, or head to the MySQL Cluster forum). Try it out and see how easy (and fast) it is to integrate MySQL Cluster into your Node.js platforms. You can learn more about other previewed functionality of MySQL Cluster 7.3 here
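
    The snippets above use a "function onSession = function(...)" form that won't parse as written; a compact sketch of the same find-then-update flow with the callbacks declared as ordinary variables (same assumed mysql-js API from labs.mysql.com, and with the raise multiplier adjusted so a 12% raise actually increases the salary) might look like this:

    var nosql = require("mysql-js");

    var annotations = new nosql.Annotations();
    function Employee(id, name, salary) {
      this.id = id;
      this.name = name;
      this.salary = salary;
      this.giveRaise = function(percent) { this.salary *= (1 + percent); };
    }
    annotations.mapClass(Employee, {'table' : 'employee'});

    var onUpdate = function(err) {
      if (err) { return console.log('update failed:', err); }
      console.log('raise persisted');
    };

    var onData = function(err, emp, session) {
      if (err) { return console.log('find failed:', err); }
      emp.giveRaise(0.12);
      session.update(emp, onUpdate);   // session arrives as the extra callback parameter
    };

    var onSession = function(err, session) {
      if (err) { return console.log('openSession failed:', err); }
      session.find(Employee, 0, onData, session); // pass session through to onData
    };

    nosql.openSession({ "implementation": "ndb", "database": "test" }, null, onSession);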

    Read the article

  • The Evolution of Computer Keyboards

    - by Jason Fitzpatrick
    While the basic shape of keyboards has remained largely unchanged over the last thirty years, the guts have undergone several transformations. Read on to explore the history of the computer keyboard. ComputerWorld delves into the history of the modern keyboard, including the heavy influence of IBM’s extensive keyboard research on early keyboards: As far as direct influences on the modern computer keyboard, IBM’s Selectric typewriter was one of the biggest. IBM released the first model of its iconic electromechanical typewriter in 1961, a time when being able to type fast and accurately was a highly sought-after skill. Dag Spicer, senior curator at the Computer History Museum, notes that as the Selectric models rose to prominence, admins grew to love the feel of the keyboard because of IBM’s dogged focus on making the ergonomics comfortable. “IBM’s probably done more than anyone to find [keyboard] ergonomics that work for everyone,” Spicer says. So when the PC hit the scene a decade or two later, the Selectric was largely viewed as the baseline to design keyboards for those newfangled computers you could put in your office or home. Hit up the link below to continue reading about how the Selectric influenced keyboards throughout the 1980s and what replaced the crisp clacking of early IBM-styled models.

    Read the article

  • How does VS 2005 provide history across all TFS Team Projects when tf.exe cannot?

    - by AakashM
    In Visual Studio 2005, in the TFS Source Control Explorer, there is a top-level node for the TFS Server itself, with a child node for each Team Project. Right-clicking either the server node or the node for a Team Project gives a context menu on which there is a View History item. Selecting this gives you a History window showing the last 200 or so changesets, either for the specific Team Project chosen, or across all Team Projects. It is this history across all Team Projects that I am wondering about. The command-line tf.exe history command provides (as I understand it) basically the same functionality as is provided by the VS TFS Source Control plug-in. But I cannot work out how to get tf.exe history to provide this across-all-Team-Projects history. At a command line, supposing I have C:\ mapped as the root of my workspace, and Foo, Bar, and Baz as Team Projects, I can do C:\> tf history Foo /recursive /stopafter:200 to get the last 200 changesets that affected Team Project Foo; or from within a Team Project folder C:\Bar> tf history *.* /recursive /stopafter:200 which does the same thing for Team Project Bar - note that the wildcard *.* is allowed here. However, none of these work (each gives the error message shown): C:\> tf history /recursive /stopafter:200 The history command takes exactly one item C:\> tf history *.* /recursive /stopafter:200 Unable to determine the source control server C:\> tf history *.* /server:servername /recursive /stopafter:200 Unable to determine the workspace I don't see an option in the docs for tf for specifying a workspace; it seems to only want to determine it from the current folder. So what is VS 2005 doing? Is it internally doing a history on each Team Project in turn and then sticking the results together? Note also that I have tried with Power Tools; tfpt history from the command line gives exactly the same error messages seen here.

    Read the article

  • batch file to merge .js files from subfolders into one combined file

    - by Andrew Johns
    I'm struggling to get this to work. There are plenty of examples on the web, but they all do something just slightly different to what I'm aiming for, and every time I think I can solve it, I get hit by an error that means nothing to me. After giving up on the JSLint.VS plugin, I'm attempting to create a batch file that I can call from a Visual Studio build event, or perhaps from CruiseControl, which will generate JSLint warnings for a project. The final goal is to get a combined js file that I can pass to jslint, using: cscript jslint.js < tmp.js which would validate that my scripts are ready to be combined into one file for use in a js minifier, or output a bunch of errors using standard output. But the js files that would make up tmp.js are likely to be in multiple subfolders in the project, e.g.: D:\_projects\trunk\web\projectname\js\somefile.debug.js D:\_projects\trunk\web\projectname\js\jquery\plugins\jquery.plugin.js The ideal solution would be to be able to call a batch file along the lines of: jslint.bat %ProjectPath% and this would then combine all the js files within the project into one temp js file. This way I would have flexibility in which project was being passed to the batch file. I've been trying to make this work with copy, xcopy, type, and echo, using a for/do loop with dir /s etc. to make it do what I want, but whatever I try I get an error.
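
    Since the end result is already being fed to cscript, one alternative sketch (not from the question) is to do the merging with a small Windows Script Host JScript file instead of batch commands; the project path and the output file are passed as arguments, and any .js file found in any subfolder is appended to the combined file:

    // concat.js -- run as: cscript //nologo concat.js "D:\_projects\trunk\web\projectname" tmp.js
    var fso  = new ActiveXObject("Scripting.FileSystemObject");
    var root = WScript.Arguments(0);
    var out  = fso.CreateTextFile(WScript.Arguments(1), true);

    function collect(folder) {
      var files = new Enumerator(folder.Files);
      for (; !files.atEnd(); files.moveNext()) {
        var f = files.item();
        if (/\.js$/i.test(f.Name)) {
          out.WriteLine("/* ---- " + f.Path + " ---- */");
          var stream = fso.OpenTextFile(f.Path, 1); // 1 = ForReading
          if (!stream.AtEndOfStream) { out.Write(stream.ReadAll()); }
          stream.Close();
          out.WriteLine("");
        }
      }
      var subs = new Enumerator(folder.SubFolders);
      for (; !subs.atEnd(); subs.moveNext()) {
        collect(subs.item()); // recurse into subfolders
      }
    }

    collect(fso.GetFolder(root));
    out.Close();

    The resulting tmp.js can then go straight into cscript jslint.js < tmp.js as planned.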

    Read the article

  • Are all of the default scripts loaded by Magento really needed?

    - by pxl
    Here's a listing of all the scripts loaded by Magento by default:

    ../js/prototype/prototype.js      // prototype library
    ../js/prototype/validation.js     // don't know what this does
    ../js/scriptaculous/builder.js    // don't know what this does
    ../js/scriptaculous/effects.js    // base scriptaculous effects library?
    ../js/scriptaculous/dragdrop.js   // component of scriptaculous effects
    ../js/scriptaculous/controls.js   // not sure?
    ../js/scriptaculous/slider.js     // more scriptaculous effects
    ../js/varien/js.js                // don't know what this is
    ../js/varien/form.js              // form validation scripts?
    ../js/varien/menu.js              // menu/drop-down menu scripts
    ../js/mage/translate.js           // don't know what this does
    ../js/mage/cookies.js             // don't know what this does

    These scripts total 316.8K of JavaScript... all in various states of being minified (for example, prototype.js isn't minified). So my questions: 1) aside from prototype.js, are all of the others really needed? And 2) what is the "correct" way to remove these scripts? Layout updates? Or hardcoded in templates? I want to make the loading of my Magento site as lightweight as possible. Thanks!

    Read the article

  • Book Review (Book 10) - The Information: A History, a Theory, a Flood

    - by BuckWoody
    This is a continuation of the books I challenged myself to read to help my career - one a month, for a year. You can read my first book review here, and the entire list is here. The book I chose for March 2012 was: The Information: A History, a Theory, a Flood by James Gleick. I was traveling at the end of last month so I'm a bit late posting this review here. Why I chose this book: My personal belief about computing is this: all computing technology is simply re-arranging data. We take data in, we manipulate it, and we send it back out. That's computing. I had heard from some folks about this book and its treatment of data. I heard that it dealt with the basics of data - and the semantics of data, information and so on. It also deals with the earliest forms of the history of information, which fascinates me. It's similar, I was told, to GEB, which is a favorite book of mine as well, so that was a bonus. Some folks I talked to liked it, some didn't - so I thought I would check it out. What I learned: I liked the book. It was longer than I thought - it took quite a while to read, even though I tend to read quickly. This is the kind of book you take your time with. It does in fact deal with the earliest forms of human interaction and the basics of data. I learned, for instance, that the genesis of the binary communication system is based in the invention of telegraph (far-writing) codes, and that the earliest forms of communication were expensive. In fact, many ciphers were invented not to hide military secrets, but to compress information. A sort of early "lol-speak" to keep the cost of transmitting data low! I think the comparison with GEB is a bit over-reaching. GEB is far more specific, fanciful and so on. In fact, this book felt more like something from Richard Dawkins, and tended to wander around the subject quite a bit. I imagine the author doing his research and writing each chapter as a book that followed on from the last one. This is what possibly bothered those who tended not to like it, I think. Towards the middle of the book, I think the author tended to be a bit too fragmented even for me. He began to delve into memes, biology and more - I think he might have been better off breaking that off into another work. The existentialism just seemed jarring. All in all, I liked the book. I recommend it to any technical professional, especially ones involved with data technology. And isn't that all of us? :)

    Read the article

  • How to get the revision history of a branch with bzrlib

    - by David Planella
    I'm trying to get a list of committers to a bzr branch. I know I can get it through the command line with something along these lines: bzr log -n0 | grep committer | sed -e 's/^[[:space:]]*committer: //' | uniq However, I'd like to get that list programmatically with bzrlib. After having looked at the bzrlib documentation, I can't manage to find out how I would even get the full list of revisions from my branch. Any hints on how to get the full history of revisions from a branch with bzrlib, or ultimately, the list of committers?

    Read the article

  • Modifying Contiguous Time Periods in a History Table

    Alex Kuznetsov is credited with a clever technique for creating a history table for SQL that is designed to store contiguous time periods and check that these time periods really are contiguous, using nothing but constraints. This is now increasingly useful with the DATE data type in SQL Server. The modification of data in this type of table isn't always entirely intuitive so Alex is on hand to give a brief explanation of how to do it.

    Read the article
