Search Results

Search found 19061 results on 763 pages for 'load factor'.


  • Loader.php trying to load Doctrine classes, but we use Propel!

    - by kewpiedoll99
    We are finding cases where we get the following 500 error: File xyz.php does not exist or class "xyz" was not found in the file at () in SF_ROOT_DIR/lib/vendor/Zend/Loader.php line 107 ... where xyz == Memcache (when trying to use symfony cc on the command line) or sfDoctrineAdminGenerator (when using an old-ish AdminGenerator-generated CMS page). We use Propel, but Loader.php is trying to load classes used only for Doctrine. Currently I am using a filthy hack where I have edited Loader.php to check whether the file is either of these two cases and, if so, simply return rather than trying to load it. Obviously, this is unacceptable in the longer term. Has anybody encountered this, and how did you solve it? Edited to add: We have: class ProjectConfiguration extends sfProjectConfiguration { public function setup() { // for compatibility / remove and enable only the plugins you want $this->enableAllPluginsExcept(array('sfDoctrinePlugin')); } } And we have a propel.ini file in our top-level config directory. This has only started in the past four weeks or so, and we've had a stable build for over a year now. I'm pretty sure Doctrine is totally disabled.
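
    One direction worth trying, sketched below: whitelist the plugins you actually use with enablePlugins(), which is standard symfony 1.x, so that nothing Doctrine-related gets registered in the first place. Whether this stops the stray Zend Loader lookups in this particular build is an assumption, not a verified fix.

      // Hypothetical sketch: enable only the Propel plugin explicitly,
      // instead of enabling everything except sfDoctrinePlugin.
      class ProjectConfiguration extends sfProjectConfiguration
      {
        public function setup()
        {
          // No enableAllPluginsExcept(): plugins not listed here are never
          // initialised, so their classes should never be autoloaded.
          $this->enablePlugins(array('sfPropelPlugin'));
        }
      }

    Clearing the cache directory by hand afterwards would be worth doing in any case; a stale autoload cache could explain why the behaviour appeared without an obvious code change.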

    Read the article

  • jQuery: how can I load the Google Maps API via Ajax?

    - by Haroldo
    Before you reply: this is not as straightforward as you'd expect! I have a 'show on map' button which, when clicked, opens a dialogbox/lightbox with the Google map in it. I don't want to load the Maps API on pageload, just when a map has been requested. This is the PHP file the "show on map" button puts into the dialog box: <div id="map_canvas"></div> <script type="text/javascript"> $(function() { //google maps stuff var latlng = new google.maps.LatLng(<?php echo $coords ?>); var options = { zoom: 14, center: latlng, mapTypeControl: false, mapTypeId: google.maps.MapTypeId.ROADMAP }; var map = new google.maps.Map(document.getElementById('map_canvas'), options); var marker = new google.maps.Marker({ position: new google.maps.LatLng(<?php echo $coords ?>), map: map }); }) </script> I've been trying to load the API before ajaxing in the dialog like this: $('img.map').click(function(){ var rel = $(this).attr('rel'); $.getScript('http://maps.google.com/maps/api/js?sensor=false', function(){ $.fn.colorbox({ href:rel }) }); }) This doesn't seem to work :( I've also tried: adding <script type="text/javascript" src="http://maps.google.com/maps/api/js?sensor=false"></script> to the ajax file, and running $.getScript('http://maps.google.com/maps/api/js?sensor=false'); on doc.ready. The problem: the browser seems to be redirected to the api.js file - you see a white screen.
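
    One approach that usually avoids the white screen, sketched below. The blank page happens because, when the Maps script is injected after page load, it falls back to document.write(), which wipes the page; passing a callback= name on the script URL makes the v3 API initialise asynchronously and call you back instead. The colorbox call and the rel attribute are carried over from the question; the pendingRel variable and the rest are an illustrative sketch, not verified against this exact setup.

      var mapsReady = false, pendingRel = null;
      // Called by the Maps script itself once it has loaded; the name
      // given in callback= must be reachable as a global.
      window.mapsLoaded = function () {
          mapsReady = true;
          if (pendingRel) { $.fn.colorbox({ href: pendingRel }); }
      };
      $('img.map').click(function () {
          var rel = $(this).attr('rel');
          if (mapsReady) { $.fn.colorbox({ href: rel }); return; }
          pendingRel = rel;
          var s = document.createElement('script');
          s.type = 'text/javascript';
          s.src = 'http://maps.google.com/maps/api/js?sensor=false&callback=mapsLoaded';
          document.body.appendChild(s);
      });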

    Read the article

  • Sitting Pretty

    - by Phil Factor
    Guest Editorial for the Simple-Talk IT Pro newsletter: 'DBAs and SysAdmins generally prefer an expression of calmness under adversity. It is a subtle trick, and requires practice in front of a mirror to get it just right. Too much adversity and they think you're not coping; too much calmness and they think you're under-employed.' I dislike the term 'avatar', when used to describe a portrait photograph. An avatar, in the sense of a picture, is merely the depiction of one's role-play alter-ego, often a ridiculous bronze-age deity. However, professional image is important. The choice and creation of online photos has an effect on the way your message is received and it is important to get that right. It is fine to use that photo of you after ten lagers on holiday in an Ibiza nightclub, but what works on Facebook looks hilarious on LinkedIn. My splendid photograph that I use online was done by a professional photographer at great expense and I've never had the slightest twinge of regret when I remember how much I paid for it. It is me, but a more pensive and dignified edition, oozing trust and wisdom. One gasps at the magical skill that a professional photographer can conjure up, without digital manipulation, to make the best of a derisory noggin (ed: slang for a head). Even if he had offered to depict me as a semi-naked, muscle-bound, sword-wielding hero, I'd have demurred. No, any professional person needs a carefully cultivated image that looks right. I'd never thought of using that profile shot, though I couldn't help noticing the photographer flinch slightly when he first caught sight of my face. There is a problem with using an avatar. The use of a single image doesn't express the appropriate emotion. At the moment, it is weird to see someone with a laughing portrait writing something solemn. A neutral cast to the face, somewhat like a passport photo, is probably the best compromise. Actually, the same is true of a working life in IT. One of the first skills I learned was not to laugh at managers, but, instead, to develop a facial expression that promoted a sense of keenness, energy and respect. Every profession has its own preferred facial cast. A neighbour of mine has the natural gift of a face that displays barely repressed grief. Though he is characteristically cheerful, he earns a remarkable income as a pallbearer. DBAs and SysAdmins generally prefer an expression of calmness under adversity. It is a subtle trick, and requires practice in front of a mirror to get it just right. Too much adversity and they think you're not coping; too much calmness and they think you're under-employed. With an appropriate avatar, you could do away with a lot of the need for 'smilies' to give clues as to the meaning of what you've written on forums and blogs. If you had a set of avatars showing the full gamut of human emotions expressible in writing (rage, fear, reproach, joy, ebullience, apprehension, exasperation, dissembling, irony, pathos, euphoria, remorse and so on), it would be quite a drop-down list on forums, but given the vast prairies of space on the average hard drive, who cares? It would cut down on the number of spats in forums, just as long as one picks the right avatar. As an unreconstructed geek, I find it hard to admit to the value of image in the workplace, but it is true. Just as we use professionals to tidy up and order our CVs and job applications, we should employ experts to enhance our professional image. After all, you don't perform surgery or dentistry on yourself, do you?

    Read the article

  • Musings on the launch of SQL Monitor

    - by Phil Factor
    For several years, I was responsible for the smooth running of a large number of enterprise database servers. We ran a network monitoring tool that was primitive by today’s standards but which performed the useful function of polling every system, including all the Servers in my charge. It ran a configurable script for each service that you needed to monitor, which was merely required to return one of a number of integer values. These integer values represented the pain level of the service, from 10 (“hurtin’ real bad”) to 1 (“Things is great”). Not only could you program the visual appearance of each server on the network diagram according to the value of the integer, but you could even opt to run a sound file. Very soon, we had a large TFT Screen, high on the wall of the server room, with every server represented by an icon, and a speaker next to it that would give out a series of grunts, groans, snores, shrieks and funeral marches, depending on the problem. One glance at the display, and you could dive in with iSQL/QA/SSMS and check what was going on with your favourite diagnostic tools. If you saw a server icon burst into flames on the screen or droop like a jelly, you dropped your mug of coffee to do it.  It was real fun, but I remember it more for the huge difference it made to have that real-time visibility into how your servers are performing. The management soon stopped making jokes about the real reason we wanted the TFT screen. (It rendered DVDs beautifully, they said; particularly flesh-tints). If you are instantly alerted when things start to go wrong, then there is a good chance you can fix it before being alerted to the problem by the users of the system.  There is a world of difference between this sort of tool, one that gives whoever is ‘on watch’ in the server room the first warning of a potential problem on one of any number of servers, and the breed of tool that attempts to provide some sort of prosthetic DBA Brain. I like to get the early warning, to get the right information to help to diagnose a problem: No auto-fix, but just the information. I prefer to leave the task of ascertaining the exact cause of a problem to my own routines, custom code, intuition and forensic instincts. A simulated aircraft cockpit doesn’t do anything for me, especially before I know where I should be flying.  Time has moved on, and that TFT screen is now, with SQL Monitor, an iPad or any other mobile or static device that can support a browser. Rather than trying to reproduce the conceptual topology of the servers, it lists them in their groups so as to give a display that scales with the increasing number of databases you monitor.  It gives the history of the major events and trends for the servers. It gives the icons and colours that you can spot out of the corner of your eye, but goes on to give you just enough information in drill-down to give you a much clearer idea of where to look with your DBA tools and routines. It doesn't swamp you with information.  Whereas a few server and database-level problems are pretty easily fixed, others depend on judgement and experience to sort out.  Although the idea of an application that automates the bulk of a DBA’s skills is attractive to many, I can’t see it happening soon. SQL Server’s complexity increases faster than the panaceas can be created. 
In the meantime, I believe that the best way of helping  DBAs  is to make the monitoring process as simple and effective as possible,  and provide the right sort of detail and ‘evidence’ to allow them to decide on the fix. In the end, it is still down to the skill of the DBA.
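
    For the curious, the check scripts were no more sophisticated than the sort of thing sketched below. This is a modern T-SQL illustration only: the original system predates DMVs, and the thresholds here are invented.

      -- Map a crude health probe onto the monitor's 1-10 'pain level'.
      DECLARE @blocked INT;
      SELECT @blocked = COUNT(*)
        FROM sys.dm_exec_requests
        WHERE blocking_session_id <> 0;   -- requests stuck behind a blocker
      SELECT CASE
               WHEN @blocked = 0 THEN 1   -- 'Things is great'
               WHEN @blocked < 5 THEN 5   -- worth a glance at the screen
               ELSE 10                    -- 'hurtin' real bad'
             END AS PainLevel;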

    Read the article

  • Hadoop, NOSQL, and the Relational Model

    - by Phil Factor
    (Guest Editorial for the IT Pro/SysAdmin Newsletter) Whereas Relational Databases fit the world of commerce like a glove, it is useless to pretend that they are a perfect fit for all human endeavours. Although, with SQL Server, we’ve made great strides with indexing text, in processing spatial data and processing markup, there is still a problem in dealing efficiently with large volumes of ephemeral semi-structured data. Key-value stores such as Cassandra, Project Voldemort, and Riak are of great value for ephemeral data, and seem of equal value as a data-feed that provides aggregations to an RDBMS. However, the Document databases such as MongoDB and CouchDB are ideal for semi-structured data for which no fixed schema exists; analytics and logging are obvious examples. NoSQL products, such as MongoDB, tackle the semi-structured data problem with panache. MongoDB is designed with a simple document-oriented data model that scales horizontally across multiple servers. It doesn’t impose a schema, and relies on the application to enforce the data structure. This is another take on the old ‘EAV’ problem (where you don’t know in advance all the attributes of a particular entity). It uses a clever replica set design that allows automatic failover, and uses journaling for data durability. It allows indexing and ad-hoc querying. However, for SQL Server users, the obvious choice for handling semi-structured data is Apache Hadoop. There will soon be an ODBC Driver for Apache Hive, and an Add-in for Excel. Additionally, there are now two Hadoop-based connectors for SQL Server; the Apache Hadoop connector for SQL Server 2008 R2, and the SQL Server Parallel Data Warehouse (PDW) connector. We can connect to Hadoop, process the semi-structured data, and then store it in SQL Server. For one steeped in the culture of Relational SQL Databases, I might be expected to throw up my hands in the air in a gesture of contempt for a technology that was, judging by the overblown journalism on the subject, about to make my own profession as archaic as the saggar maker's bottom knocker (a potter’s assistant who helped the saggar maker to make the bottom of the saggar by placing clay in a metal hoop and bashing it). However, on the contrary, I find that I'm delighted with the advances made by the NoSQL databases in the past few years. The flow of ideas from the NoSQL providers will knock any trace of complacency out of the providers of Relational Databases and inspire them into back-fitting some features, such as horizontal scaling with sharding and automatic failover, into SQL-based RDBMSs. It will do the breed a power of good to benefit from all this lateral thinking.

    Read the article

  • C# / XNA - Loading objects into memory - how does it work?

    - by carl
    Hello, I'm starting with C# and XNA. In the "Update" method of the "Game" class I have this code: t = Texture2D.FromFile( [...] ); //t is a 'Texture2D t;' which loads a small image. The "Update" method works like a loop, so this code is called many times in a second. Now, when I run my game, it takes 95MB of RAM and goes slowly to about 130MB (due to the code I've posted; without this code it remains at 95MB), then goes immediately to about 100MB (garbage collection?) and again goes slowly to 130MB, then immediately to 100MB and so on. So my first question: Can you explain why (and how) it works like that? I've found that if I change the code to: t.Dispose() t = Texture2D.FromFile( [...] ); it works like this: first it takes 95MB and then goes slowly to about 101MB (due to the code) and remains at this level. I don't understand why it takes this 6MB (101-95)... ? I want to make it work like this: load image, release from memory, load image, release from memory and so on, so the program should always take 95MB (it takes 95MB when the image is loaded only once, as in the previous method). What instructions should I use? If it is important, the size of the image is about 10KB. Thank you!
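
    The usual XNA pattern, sketched below on the assumption of XNA 3.x (where Texture2D.FromFile(GraphicsDevice, path) exists; the file name is illustrative, and t is the field from the question), is to load once in LoadContent and dispose once in UnloadContent, never per frame:

      protected override void LoadContent()
      {
          // Load once: no per-frame allocation, so no 95MB-130MB sawtooth.
          t = Texture2D.FromFile(GraphicsDevice, "image.png");
      }

      protected override void UnloadContent()
      {
          t.Dispose(); // releases the texture's unmanaged memory deterministically
      }

      protected override void Update(GameTime gameTime)
      {
          // Just use t here; never call Texture2D.FromFile inside Update.
          base.Update(gameTime);
      }

    The sawtooth you saw is the garbage collector reclaiming the abandoned texture objects; the residual few MB that remain after adding Dispose() are most likely heap growth the runtime keeps for reuse rather than a leak, though that last point is an inference, not something the question confirms.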

    Read the article

  • A Knight's Tale

    - by Phil Factor
    There are so many lessons to be learned from the story of Knight Capital losing nearly half a billion dollars as a result of a deployment gone wrong. The Knight Capital Group (KCG N) was an American global financial services firm engaging in market making, electronic execution, and institutional sales and trading. According to the recent order (File No.3.15570) against Knight Capital by the U.S. Securities and Exchange Commission, Knight had, for many years, used software which broke up incoming “parent” orders into smaller “child” orders that were then transmitted to various exchanges or trading venues for execution. A tracking ‘cumulative quantity’ function counted the number of ‘child’ orders and stopped the process once the total of child orders matched the ‘parent’ and so the parent order had been completed. Back in the mists of time, some code had been added to it which was executed if a particular flag was set. It was called ‘power peg’ and seems to have had a similar design and purpose, but, one guesses, would have shared the same tracking function. This code had been abandoned in 2003, but never deleted. In 2005, the tracking function was moved to an earlier point in the main process. It would seem from the account that, from that point, had that flag ever been set, the old ‘Power Peg’ would have been executed like Godzilla bursting from the ice, making child orders without limit and without any tracking function. It wasn’t, presumably because the software that set the flag was removed. In 2012, nearly a decade after ‘Power Peg’ was abandoned, Knight prepared a new module for their software to cope with the imminent Retail Liquidity Program (RLP) for the New York Stock Exchange. By this time, the flag had remained unused and someone made the fateful decision to reuse it, and replace the old ‘power peg’ code with this new RLP code. Had the two actions been done together in a single automated deployment, and the new deployment tested, all would have been well. It wasn’t. To quote… “Beginning on July 27, 2012, Knight deployed the new RLP code in SMARS in stages by placing it on a limited number of servers in SMARS on successive days. During the deployment of the new code, however, one of Knight’s technicians did not copy the new code to one of the eight SMARS computer servers. Knight did not have a second technician review this deployment and no one at Knight realized that the Power Peg code had not been removed from the eighth server, nor the new RLP code added. Knight had no written procedures that required such a review.” (para 15) “On August 1, Knight received orders from broker-dealers whose customers were eligible to participate in the RLP. The seven servers that received the new code processed these orders correctly. However, orders sent with the repurposed flag to the eighth server triggered the defective Power Peg code still present on that server. As a result, this server began sending child orders to certain trading centers for execution. Because the cumulative quantity function had been moved, this server continuously sent child orders, in rapid sequence, for each incoming parent order without regard to the number of share executions Knight had already received from trading centers. 
Although one part of Knight’s order handling system recognized that the parent orders had been filled, this information was not communicated to SMARS.” (para 16) SMARS routed millions of orders into the market over a 45-minute period, and obtained over 4 million executions in 154 stocks for more than 397 million shares. By the time that Knight stopped sending the orders, Knight had assumed a net long position in 80 stocks of approximately $3.5 billion and a net short position in 74 stocks of approximately $3.15 billion. Knight’s shares dropped more than 20% after traders saw extreme volume spikes in a number of stocks, including preferred shares of Wells Fargo (JWF) and semiconductor company Spansion (CODE). Both stocks, which see roughly 100,000 trades per day, had changed hands more than 4 million times by late morning. Ultimately, Knight lost over $460 million from this wild 45 minutes of trading. Obviously, I’m interested in all this because, at one time, I used to write trading systems for the City of London. Obviously, the US SEC is in a far better position than any of us to work out the failings of Knight’s IT department, and the report makes for painful reading. I can’t help observing, though, that even with the breathtaking mistakes all along the way, a robust automated deployment process that was ‘all-or-nothing’, and tested from soup to nuts, would have prevented the disaster. The report reads like a Greek Tragedy. All the way along one wants to shout ‘No! not that way!’ and ‘Aargh! Don’t do it!’. As the tragedy unfolds, the audience weeps for the players, trapped by a cruel fate. All application development and deployment requires defense in depth. All IT goes wrong occasionally, but if there is a culture of defensive programming throughout, the consequences are usually containable. For financial systems, these defenses are required by statute, and ignored only by the foolish. Knight’s mistakes weren’t made by just one hapless sysadmin, but were progressive errors by an IT culture spanning at least ten years. One can spell these out, but I think they’re obvious. One can only hope that the industry studies what happened in detail, learns from the mistakes, and draws the right conclusions.

    Read the article

  • Database Migration Scripts: Getting from place A to place B

    - by Phil Factor
    We’ll be looking at a typical database ‘migration’ script which uses an unusual technique to migrate existing ‘de-normalised’ data into a more correct form. So, the book-distribution business that uses the PUBS database has gradually grown organically, and has slipped into ‘de-normalisation’ habits. What’s this? A new column with a list of tags or ‘types’ assigned to books. Because books aren’t really in just one category, someone has ‘cured’ the mismatch between the database and the business requirements. This is fine, but it is now proving difficult for their new website that allows searches by tags. Any request for a history book really has to look in the entire list of associated tags rather than the ‘Type’ field that only keeps the primary tag. We have other problems. The TypeList column has duplicates in there which will be affecting the reporting, and there is the danger of mis-spellings getting in there. The reporting system can’t be persuaded to do reports based on the tags and the database developers are complaining about the unCoddly things going on in their database. In your version of PUBS, this extra column doesn’t exist, so we’ve added it and put in 10,000 titles using SQL Data Generator. /* So how do we refactor this database? firstly, we create a table of all the tags. */IF  OBJECT_ID('TagName') IS NULL OR OBJECT_ID('TagTitle') IS NULL  BEGIN  CREATE TABLE  TagName (TagName_ID INT IDENTITY(1,1) PRIMARY KEY ,     Tag VARCHAR(20) NOT NULL UNIQUE)  /* ...and we insert into it all the tags from the list (remembering to take out any leading spaces) */  INSERT INTO TagName (Tag)     SELECT DISTINCT LTRIM(x.y.value('.', 'Varchar(80)')) AS [Tag]     FROM     (SELECT  Title_ID,          CONVERT(XML, '<list><i>' + REPLACE(TypeList, ',', '</i><i>') + '</i></list>')          AS XMLkeywords          FROM   dbo.titles)g    CROSS APPLY XMLkeywords.nodes('/list/i/text()') AS x ( y )  /* we can then use this table to provide a table that relates tags to articles */  CREATE TABLE TagTitle   (TagTitle_ID INT IDENTITY(1, 1),   [title_id] [dbo].[tid] NOT NULL REFERENCES titles (Title_ID),   TagName_ID INT NOT NULL REFERENCES TagName (Tagname_ID)   CONSTRAINT [PK_TagTitle]       PRIMARY KEY CLUSTERED ([title_id] ASC, TagName_ID)       ON [PRIMARY])        CREATE NONCLUSTERED INDEX idxTagName_ID  ON  TagTitle (TagName_ID)  INCLUDE (TagTitle_ID,title_id)        /* ...and it is easy to fill this with the tags for each title ... */        INSERT INTO TagTitle (Title_ID, TagName_ID)    SELECT DISTINCT Title_ID, TagName_ID      FROM        (SELECT  Title_ID,          CONVERT(XML, '<list><i>' + REPLACE(TypeList, ',', '</i><i>') + '</i></list>')          AS XMLkeywords          FROM   dbo.titles)g    CROSS APPLY XMLkeywords.nodes('/list/i/text()') AS x ( y )    INNER JOIN TagName ON TagName.Tag=LTRIM(x.y.value('.', 'Varchar(80)'))    END    /* That's all there was to it. 
Now we can select all titles that have the military tag, just to try things out */SELECT Title FROM titles  INNER JOIN TagTitle ON titles.title_ID=TagTitle.Title_ID  INNER JOIN Tagname ON Tagname.TagName_ID=TagTitle.TagName_ID  WHERE tagname.tag='Military'/* and see the top ten most popular tags for titles */SELECT Tag, COUNT(*) FROM titles  INNER JOIN TagTitle ON titles.title_ID=TagTitle.Title_ID  INNER JOIN Tagname ON Tagname.TagName_ID=TagTitle.TagName_ID  GROUP BY Tag ORDER BY COUNT(*) DESC/* and if you still want your list of tags for each title, then here they are */SELECT title_ID, title, STUFF(  (SELECT ','+tagname.tag FROM titles thisTitle    INNER JOIN TagTitle ON titles.title_ID=TagTitle.Title_ID    INNER JOIN Tagname ON Tagname.TagName_ID=TagTitle.TagName_ID  WHERE ThisTitle.title_id=titles.title_ID  FOR XML PATH(''), TYPE).value('.', 'varchar(max)')  ,1,1,'')    FROM titles  ORDER BY title_ID So we’ve refactored our PUBS database without pain. We’ve even put in a check to prevent it being re-run once the new tables are created. Here is the diagram of the new tag relationship. We’ve done both the DDL to create the tables and their associated components, and the DML to put the data in them. I could have also included the script to remove the de-normalised TypeList column, but I’d do a whole lot of tests first before doing that. Yes, I’ve left out the assertion tests too, which should check the edge cases and make sure the result is what you’d expect. One thing I can’t quite figure out is how to deal with an ordered list using this simple XML-based technique. We can ensure that, if we have to produce a list of tags, we can get the primary ‘type’ to be first in the list, but what if the entire order is significant? Thank goodness it isn’t in this case. If it were, we might have to revisit a string-splitter function that returns the ordinal position of each component in the sequence. You’ll see immediately that we can create a synchronisation script for deployment from a comparison tool such as SQL Compare, to change the schema (DDL). On the other hand, no tool could do the DML to stuff the data into the new table, since there is no way that any tool will be able to work out where the data should go. We used some pretty hairy code to deal with a slightly untypical problem. We would have to do this migration by hand, and it has to go into source control as a batch. If most of your database changes are to be deployed by an automated process, then there must be a way of over-riding this part of the data synchronisation process: taking the part of the script that fills the tables, checking that the tables have not already been filled, and executing it as part of the transaction. Of course, you might prefer the approach I’ve taken with the script of creating the tables in the same batch as the data conversion process, and then using the presence of the tables to prevent the script from being re-run. The problem with scripting a refactoring change to a database is that it has to work both ways. If we install the new system and then have to roll back the changes, several books may have been added, or had their tags changed, in the meantime. Yes, you have to script any rollback! These have to be mercilessly tested, and put in source control just in case of the rollback of a deployment after it has been in place for any length of time. I’ve shown you how to do this with the part of the script .. 
/* and if you still want your list of tags for each title, then here they are */SELECT title_ID, title, STUFF(  (SELECT ','+tagname.tag FROM titles thisTitle    INNER JOIN TagTitle ON titles.title_ID=TagTitle.Title_ID    INNER JOIN Tagname ON Tagname.TagName_ID=TagTitle.TagName_ID  WHERE ThisTitle.title_id=titles.title_ID  FOR XML PATH(''), TYPE).value('.', 'varchar(max)')  ,1,1,'')    FROM titles  ORDER BY title_ID …which would be turned into an UPDATE … FROM script. UPDATE titles SET  typelist= ThisTaglistFROM     (SELECT title_ID, title, STUFF(    (SELECT ','+tagname.tag FROM titles thisTitle      INNER JOIN TagTitle ON titles.title_ID=TagTitle.Title_ID      INNER JOIN Tagname ON Tagname.TagName_ID=TagTitle.TagName_ID    WHERE ThisTitle.title_id=titles.title_ID    ORDER BY CASE WHEN tagname.tag=titles.[type] THEN 1 ELSE 0  END DESC    FOR XML PATH(''), TYPE).value('.', 'varchar(max)')    ,1,1,'')  AS ThisTagList  FROM titles)fINNER JOIN Titles ON f.title_ID=Titles.title_ID You’ll notice that it isn’t quite a round trip because the tags are in a different order, though we’ve managed to make sure that the primary tag is the first one as originally. So, we’ve improved the database for the poor book distributors using PUBS. It is not a major deal but you’ve got to be prepared to provide a migration script that will go both forwards and backwards. Ideally, database refactoring scripts should be able to go from any version to any other. Schema synchronization scripts can do this pretty easily, but no data synchronisation scripts can deal with serious refactoring jobs without the developers being able to specify how to deal with cases like this.

    Read the article

  • Software Tuned to Humanity

    - by Phil Factor
    I learned a great deal from a cynical old programmer who once told me that the ideal length of time for a compiler to do its work was the same time it took to roll a cigarette. For development work, this is oh so true. After intently looking at the editing window for an hour or so, it was a relief to look up, stretch, focus the eyes on something else, and roll the possibly-metaphorical cigarette. This was software tuned to humanity. Likewise, a user’s perception of the “ideal” time that an application will take to move from frame to frame, to retrieve information, or to process their input has remained remarkably static for about thirty years, at around 200 ms. Anything else appears, and always has, to be either fast or slow. This could explain why commercial applications, unlike games, simulations and communications, aren’t noticeably faster now than they were when I started programming in the Seventies. Sure, they do a great deal more, but the SLAs that I negotiated in the 1980s for application performance are very similar to what they are nowadays. To prove to myself that this wasn’t just some rose-tinted misperception on my part, I cranked up a Z80-based Jonos CP/M machine (1985) in the roof-space. Within 20 seconds from cold, it had loaded Wordstar and I was ready to write. OK, I got it wrong: some things were faster 30 years ago. Sure, I’d now have had all sorts of animations, wizzy graphics, and other comforting features, but it seems a pity that we have used all that extra CPU and memory to increase the scope of what we develop, and the graphical prettiness, but not to speed the processes needed to complete a business procedure. Never mind the weight, the response time’s great! To achieve 200 ms response times on a Z80, or similar, performance considerations influenced everything one did as a developer. If it meant writing an entire application in assembly code, applying every smart algorithm and shortcut imaginable to get the application to perform to spec, then so be it. As a result, I’m a dyed-in-the-wool performance freak and find it difficult to change my habits. Conversely, many developers now seem to feel quite differently. While all will acknowledge that performance is important, it’s no longer the virtue it once was, and other factors such as user-experience now take precedence. Am I wrong? If not, then perhaps we need a new school of development technique to rival Agile, dedicated once again to producing applications that smoke the rear wheels rather than pootle elegantly to the shops; that forgo skeuomorphism, cute animation, or architectural elegance in favor of the smell of hot rubber. I struggle to name an application I use that is truly notable for its blistering performance, and would dearly love one to do my everyday work – just as long as it doesn’t go faster than my brain.

    Read the article

  • Documentation and Test Assertions in Databases

    - by Phil Factor
    When I first worked with Sybase/SQL Server, we thought our databases were impressively large but they were, by today’s standards, pathetically small. We had one script to build the whole database. Every script I ever read was richly annotated; it was more like reading a document. Every table had a comment block, and every line would be commented too. At the end of each routine (e.g. procedure) was a quick integration test, or series of test assertions, to check that nothing in the build was broken. We simply ran the build script, stored in the Version Control System, and it pulled everything together in a logical sequence that not only created the database objects but pulled in the static data. This worked fine at the scale we had. The advantage was that one could, by reading the source code, reach a rapid understanding of how the database worked and how one could interface with it. The problem was that it was a system that meant that only one developer at a time could work on the database. It was very easy for a developer to accidentally execute the entire build script rather than the selected section on which he or she was working, thereby cleansing the database of everyone else’s work-in-progress and data. It soon became the fashion to work at the object level, so that programmers could check out individual views, tables, functions, constraints and rules and work on them independently. It was then that I noticed the trend to generate the source for the VCS retrospectively from the development server. Tables were worst affected. You can, of course, add or delete a table’s columns and constraints retrospectively, which means that the existing source no longer represents the current object. If, after your development work, you generate the source from the live table, then you get no block or line comments, and the source script is sprinkled with silly square-brackets and other confetti, thereby rendering it visually indigestible. Routines, too, were affected. In our system, every routine had a directly attached string of unit-tests. A retro-generated routine has no unit-tests or test assertions. Yes, one can still commit our test code to the VCS, but it’s a separate module and teams end up running the whole suite of tests for every individual change, rather than just the tests for that routine, which doesn’t scale for database testing. With extended properties, one can get the best of both worlds, and even use them to put blame, praise or annotations into your VCS. It requires a lot of work, though, particularly the script to generate the table. The problem is that there are no conventional names beyond ‘MS_Description’ for the special use of extended properties. This makes it difficult to do splendid things such as ensuring the integrity of the build by running a suite of tests that are actually stored in extended properties within the database and therefore the VCS. We have lost the readability of database source code over the years, and largely jettisoned the use of test assertions as part of the database build. This is not unexpected in view of the increasing complexity of the structure of databases and number of programmers working on them. There must, surely, be a way of getting them back, but I sometimes wonder if I’m one of very few who miss them.
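
    As a sketch of what that might look like in practice (the routine and property values here are illustrative, not from the article; MS_Description is, as noted, the only conventional property name):

      -- Attach documentation to a routine via an extended property...
      EXEC sys.sp_addextendedproperty
           @name = N'MS_Description',
           @value = N'Returns the tag list for a given title',
           @level0type = N'SCHEMA',    @level0name = N'dbo',
           @level1type = N'PROCEDURE', @level1name = N'GetTagList';
      GO
      -- ...and end the routine's build script with a quick test assertion,
      -- so a broken build fails noisily rather than silently.
      IF OBJECT_ID(N'dbo.GetTagList', N'P') IS NULL
         RAISERROR ('Build check failed: dbo.GetTagList was not created', 16, 1);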

    Read the article

  • Separating text strings into a table of individual words in SQL via XML.

    - by Phil Factor
    Nearly nine years ago, Mike Rorke of the SQL Server 2005 XML team blogged ‘Querying Over Constructed XML Using Sub-queries’. I remember reading it at the time without being able to think of a use for what he was demonstrating. Just a few weeks ago, whilst preparing my article on searching strings, I got out my trusty function for splitting strings into words and something reminded me of the old blog. I’d been trying to think of a way of using XML to split strings reliably into words. The routine I devised turned out to be slightly slower than the iterative word chop I’ve always used in the past, so I didn’t publish it. It was then I suddenly remembered the old routine. Here is my version of it. I’ve unwrapped it from its obvious home in a function or procedure just so it is easy to appreciate. What it does is to chop a text string into individual words using XQuery and the good old nodes() method. I’ve benchmarked it and it is quicker than any of the SQL ways of doing it that I know about. Obviously, you can’t use the trick I described here to do it, because it is awkward to use REPLACE() on 1…n characters of whitespace. I’ll carry on using my iterative function since it is able to tell me the location of each word as a character-offset from the start, and also because this method leaves punctuation in (removing it takes time!). However, I can see other uses for this in passing lists as input or output parameters, or as return values.   if exists (Select * from sys.xml_schema_collections where name like 'WordList')   drop XML SCHEMA COLLECTION WordList go create xml schema collection WordList as ' <xs:schema xmlns:xs="http://www.w3.org/2001/XMLSchema"> <xs:element name="words">        <xs:simpleType>               <xs:list itemType="xs:string" />        </xs:simpleType> </xs:element> </xs:schema>'   go   DECLARE @string VARCHAR(MAX) --we'll get some sample data from the great Ogden Nash Select @String='This is a song to celebrate banks, Because they are full of money and you go into them and all you hear is clinks and clanks, Or maybe a sound like the wind in the trees on the hills, Which is the rustling of the thousand dollar bills. Most bankers dwell in marble halls, Which they get to dwell in because they encourage deposits and discourage withdrawals, And particularly because they all observe one rule which woe betides the banker who fails to heed it, Which is you must never lend any money to anybody unless they don''t need it. I know you, you cautious conservative banks! If people are worried about their rent it is your duty to deny them the loan of one nickel, yes, even one copper engraving of the martyred son of the late Nancy Hanks; Yes, if they request fifty dollars to pay for a baby you must look at them like Tarzan looking at an uppity ape in the jungle, And tell them what do they think a bank is, anyhow, they had better go get the money from their wife''s aunt or ungle. 
But suppose people come in and they have a million and they want another million to pile on top of it, Why, you brim with the milk of human kindness and you urge them to accept every drop of it, And you lend them the million so then they have two million and this gives them the idea that they would be better off with four, So they already have two million as security so you have no hesitation in lending them two more, And all the vice-presidents nod their heads in rhythm, And the only question asked is do the borrowers want the money sent or do they want to take it withm. Because I think they deserve our appreciation and thanks, the jackasses who go around saying that health and happiness are everything and money isn''t essential, Because as soon as they have to borrow some unimportant money to maintain their health and happiness they starve to death so they can''t go around any more sneering at good old money, which is nothing short of providential. '   --we now turn it into XML declare @xml_data xml(WordList)  set @xml_data='<words>'+ replace(@string,'&', '&amp;')+'</words>'    select T.ref.value('.', 'nvarchar(100)')  from (Select @xml_data.query('                      for $i in data(/words) return                      element li { $i }               '))  A(list) cross apply A.List.nodes('/li') T(ref)     …which gives (truncated, of course)…

    Read the article

  • At times, you need to hire a professional.

    - by Phil Factor
    After months of increasingly demanding toil, the development team I belonged to was told that the project was to be canned and the whole team would be fired.  I’d been brought into the team as an expert in the data implications of a business re-engineering of a major financial institution. Nowadays, you’d call me a data architect, I suppose.  I’d spent a happy year being paid consultancy fees solving a succession of interesting problems until the point when the company lost its nerve, and closed the entire initiative. The IT industry was in one of its characteristic mood-swings downwards.  After the announcement, we met in the canteen. A few developers had already scented the smell of death around the project and had been applying unsuccessfully for jobs. There was a sense of doom in the mass of dishevelled and bleary-eyed developers. After giving vent to anger and despair, talk turned to getting new employment. It was then that I perked up. I’m not an obvious choice to give advice on getting, or passing, IT interviews. I reckon I’ve failed most of the job interviews I’ve ever attended. I once even failed an interview for a job I’d already been doing perfectly well for a year. The jobs I’ve got have mostly been from personal recommendation. Paradoxically though, from years as a manager trying to recruit good staff, I know a lot about what IT managers are looking for.  I gave an impassioned speech outlining the important factors in getting to an interview.  The most important thing, certainly in my time at work, is the quality of the résumé or CV. I can’t even guess the huge number of CVs (résumés) I’ve read through, scanning for candidates worth interviewing.  Many IT Developers find it impossible to describe their career succinctly on two sides of paper.  They leave chunks of their life out (were they in prison?), get immersed in detail, put in irrelevancies, describe what was going on at work rather than what they themselves did, exaggerate their importance, criticize their previous employers, aren’t aware of the important aspects of a role to a potential employer, suffer from shyness and modesty, and lack any sort of organized perspective of their work. There are many ways of failing to write a decent CV. Many developers suffer from the delusion that their worth can be recognized purely from the code that they write, and shy away from anything that seems like self-aggrandizement. No.  A resume must make a good impression, which means presenting the facts about yourself in a clear and positive way. You can’t do it yourself. Why not have your resume professionally written? A good professional CV Writer will know the qualities being looked for in a CV and interrogate you to winkle them out. Their job is to make order and sense out of a confused career, to summarize in one page a mass of detail that presents to any recruiter the information that’s wanted. To stand back and describe an accurate summary of your skills, and work-experiences dispassionately, without rancor, pity or modesty. You are no more capable of producing an objective documentation of your career than you are of taking your own appendix out.  My next recommendation was more controversial. This is to have a professional image overhaul, or makeover, followed by a professionally-taken photo portrait. I discovered this by accident. It is normal for IT professionals to face impossible deadlines and long working hours by looking more and more like something that had recently blocked a sink. 
Whilst working in IT, and in a state of personal dishevelment, I’d been offered the role in a high-powered amateur production of an old ex-Broadway show, purely for my singing voice. I was supposed to be the presentable star. When the production team saw me, the air was thick with tension and despair. I was dragged kicking and protesting through a succession of desperate grooming, scrubbing, dressing, dieting. I emerged feeling like “That jewelled mass of millinery, That oiled and curled Assyrian bull, Smelling of musk and of insolence.” (Tennyson, Maud; A Monodrama (1855), section vi, stanza 6) I was then photographed by a professional stage photographer.  When the photographs were delivered, I was amazed. It wasn’t me, but it looked somehow respectable, confident, trustworthy.   A while later, when the show had ended, I took the photos, and used them for work. They went with the CV to job applications. It did the trick better than I could ever imagine.  My views went down big with the developers. Old rivalries were put immediately to one side. We voted, with a show of hands, to devote our energies for the entire notice period to getting employable. We had a team sourcing the CV Writer, a team organising the make-overs and photographer, and a third team arranging mock interviews. A fourth team determined the best websites and agencies for recruitment, with the help of friends in the trade.  Because there were around thirty developers, we were in a good negotiating position.  Of the three CV Writers we found who lived locally, one proved exceptional. She was an ex-journalist with an eye to detail, and years of experience in manipulating language. We tried her skills out on a developer who seemed a hopeless case, and he was called to interview within a week.  I was surprised, too, how many companies were experts at image makeovers. Within the month, we all looked like those weird slick people in the ‘Office-tagged’ stock photographs who stare keenly and interestedly at PowerPoint slides in sleek chromium-plated high-rise offices. The portraits we used still adorn the entries of many of my ex-colleagues in LinkedIn. After a months’ worth of mock interviews, and technical Q&A, our stutters, hesitations, evasions and periphrastic circumlocutions were all gone.  There is little more to relate. With the résumés or CVs, mugshots, and schooling in how to pass interviews, we’d all got new and better-paid jobs well before our month’s notice was ended. Whilst normally, an IT team under the axe is a sad and depressed place to belong to, this wonderful group of people had proved the power of organized group action in turning the experience to advantage. It left us feeling slightly guilty that we were somehow cheating, but I guess we were merely leveling the playing-field.

    Read the article

  • How to load an external div that has dynamic content using Ajax and JSP/servlets?

    - by A.S al-shammari
    I need to use Ajax to load an external div element (an external JSP file) into the current page. That JSP page contains dynamic content - e.g. content that is based on values received from the current session. I solved this problem, but I'm in doubt because I think my solution may be bad, or maybe there is a better solution, since I'm not an expert. I have three files: A JavaScript function that is triggered when an element is clicked; it requests HTML data from a servlet: $("#inboxtable tbody tr").click(function(){ var trID = $(this).attr('id'); $.post("event?view=read",{id:trID}, function(data){ $("#eventContent").html(data); // load external file },"html"); // type }); The servlet "event" loads the data and generates HTML content using the include method: String id = request.getParameter("id"); if (id != null) { v.add("Test"); v.add(id); session.setAttribute("readMessageVector", v); request.getRequestDispatcher("readMessage.jsp").include(request, response); } The readMessage.jsp file looks like this: <p> Text: ${readMessageVector[0]} </p> <p> ID: ${readMessageVector[1]} </p> My questions: Is this solution good enough to solve this problem (loading an external JSP that has dynamic content)? Is there a better solution?

    Read the article

  • Disaster Recovery Example

    Previously, I used to work for a small internet company that sells dental plans online. Our primary focus concerning disaster prevention and recovery was on our corporate website and private intranet site. We had a multiphase disaster recovery plan that included data redundancy, load balancing, and off-site monitoring. Data redundancy is a key aspect of our disaster recovery plan. The first phase of this is to replicate our data to multiple database servers and schedule daily backups of the databases that are stored off site. The next phase is the file replication of data amongst our web servers, which are also backed up daily at our colocation facility. In addition to the files located on the server, files are also stored locally on development machines, and again backed up using version control software. Load balancing is another key aspect of our disaster recovery plan. Load balancing offers many benefits for our system: better performance, load distribution and increased availability. With our servers behind a load balancer, our system can accept multiple requests simultaneously because the load is split between multiple servers. Plus, if one server is slow or experiencing a failure, traffic is diverted amongst the other servers connected to the load balancer, allowing the failed server to be brought back online. The final key to our disaster recovery plan is off-site monitoring that notifies all IT staff of any outages or errors on the main website encountered by the monitor. Messages are sent by email, voicemail, and SMS. According to Disasterrecovery.org, disaster recovery planning is the way companies successfully manage crises with minimal cost and effort and maximum speed, compared to others that are forced to make decisions out of desperation when disasters occur. In addition, SunGard stated in 2009 that the first step in disaster recovery planning is to analyze company risks and factor in fixed costs for things like hardware, software, staffing and utilities, as well as indirect costs, such as floor space, power protection, physical and information security, and management. Availability requirements also need to be determined per application and system, as well as the strategies for recovery.

    Read the article

  • Word forms with too many ActiveX checkboxes load slowly.

    - by Luke
    Hi there, my company's software product has a feature that allows users to generate forms from Word templates. The program auto-fills some fields from the SQL database and the user can fill in other data that they desire. So we have a .dotx template that holds the design of the form, and then the user gets the .docx file to fill out when they call it from our program. The problem we're having is that some of our users have been finding that the forms take an exceptionally long time to open up and then, once open, are so slow to respond (scroll around, etc) that they're unusable. So in my investigations so far, I've found out that the problem systems are ones with lower-powered CPUs (unfortunately it happens for systems above our system requirements) and the Word forms that cause the problems are ones with a large amount of ActiveX-style checkboxes on them. I verified that reducing the ActiveX checkboxes fixes the form loading problems. So I have the following questions about solutions (we're using Word 2007): 1) Is there any way to configure Word, or some other settings, so that there won't be such a strain opening a Word form with lots of ActiveX checkboxes? Any way of speeding up Word's opening? 2) Using Legacy-style checkboxes instead of the ActiveX ones makes the forms load fine, but it looks like the user has to double-click the checkbox and change Default Value-Checked. Is there a way to configure it so that they can simply click on the checkbox to tick it? "Legacy Forms" checkbox as a name kind of worries me (Legacy…); does that mean a future version of Word at some point wouldn't load the checkboxes because they're "legacy"? 3) Yes, it became clear to me after a little bit of research into solutions that Word is not the tool for the job for forms like I'm describing. InfoPath seems to be exactly what we should have been using all along, but unfortunately I wasn't involved in the decision making or development of these forms, just tasked with coming up with a solution. I'd appreciate answers to any of these, or if anyone has any other ideas for solutions to this problem. Thanks

    Read the article

  • When things go awry

    - by Phil Factor
    The moment the Entrepreneur opened his mouth on prime-time national TV, spelled out the URL and waxed big on how exciting ‘his’ new website was, I knew I was in for a busy night. I’d designed and built it. All at once, half a million people tried to log into the website. Although all my stress-testing paid off, I have to admit that the network locked up tight long before there was any danger of a database or website problem. Soon afterwards, the Entrepreneur and the Big Boss were there in the autopsy meeting. We picked through all our systems in detail to see how they’d borne the unexpected strain. Mercifully, in view of the sour mood of the Big Boss, it turned out that the only thing we could have done better was buy a bigger pipe to and from the internet. We’d specified that ‘big pipe’ when designing the system. The Big Boss had then railed at the cost and so we’d subsequently compromised. I felt that my design decisions were vindicated. The Big Boss brooded for a while. Then he made the significant comment: “What really ****** me off is the fact that, for ten minutes, we couldn’t take people’s money.” At that point I stopped feeling smug. Had the internet connection been better, the system would have reached its limit and failed rather precipitously, and that wasn’t what he wanted. Then it occurred to me that what had gummed up the connection was all those images on the site, that had made it so impressive for the visitors. If there had been a way to automatically pare down the site to the bare essentials under stress… Hmm. I began to consider disaster-recovery in the broadest sense – maintaining a service in spite of unusual or unexpected events. What he said makes a lot of sense: sacrifice whatever isn’t essential to keep the core service running when we approach the capacity limits. Maybe in IT we should borrow (or revive) the business concept of the ‘Skeleton service’, maintaining only the priority parts under stress, using a process that is well-prepared and carefully rehearsed. How might this work? Whatever the event we have to prepare for, it is all about understanding the priorities; knowing what one can dispense with when the going gets tough. In the event of database disaster, it’s much faster to deploy a skeletal system with only the essential data than to restore the entire system, though there would have to be a reconciliation process to update the revived database retrospectively, once the emergency was over. It isn’t just the database that could be designed for resilience. One could prepare for unusually high traffic in a website by designing a system that degraded gradually to a ‘skeletal’ site, one that maintained the commercial essentials without fat images, JavaScript libraries and razzmatazz. This is all what the Big Boss scathingly called ‘a mere technicality’. It seems to me that what is needed first is a culture of application and database design which acknowledges that we live in a very imperfect world, and react accordingly when things go awry.

    Read the article

  • Rake db:migrate returns "rake aborted! no such file to load -- spec"

    - by Isaac Yerushalmi
    For some reason, out of nowhere, Rails began giving me an error on "rake db:migrate", and I can no longer run migrations. It returns the error "no such file to load -- spec /home/ti/rails_apps/appname/Rakefile:10" I've spent two hours searching Google for answers, trying to figure this out, but to no avail. What could be the problem? Here is the trace: -jailshell-3.2$ rake db:migrate --trace (in /home/ti/rails_apps/teamisrael) rake aborted! no such file to load -- spec /usr/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `gem_original_require' /usr/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `require' /usr/lib/ruby/gems/1.8/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:156:in `require' /usr/lib/ruby/gems/1.8/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:521:in `new_constants_in' /usr/lib/ruby/gems/1.8/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:156:in `require' /home/ti/rails_apps/teamisrael/vendor/plugins/google-geocoder/tasks/rspec.rake:5 /usr/lib/ruby/gems/1.8/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:145:in `load_without_new_constant_marking' /usr/lib/ruby/gems/1.8/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:145:in `load' /usr/lib/ruby/gems/1.8/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:521:in `new_constants_in' /usr/lib/ruby/gems/1.8/gems/activesupport-2.3.5/lib/active_support/dependencies.rb:145:in `load' /usr/lib/ruby/gems/1.8/gems/rails-2.3.5/lib/tasks/rails.rb:7 /usr/lib/ruby/gems/1.8/gems/rails-2.3.5/lib/tasks/rails.rb:7:in `each' /usr/lib/ruby/gems/1.8/gems/rails-2.3.5/lib/tasks/rails.rb:7 /usr/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `gem_original_require' /usr/lib/ruby/site_ruby/1.8/rubygems/custom_require.rb:31:in `require' /home/ti/rails_apps/teamisrael/Rakefile:10 /usr/lib/ruby/gems/1.8/gems/rake-0.8.3/lib/rake.rb:2349:in `load' /usr/lib/ruby/gems/1.8/gems/rake-0.8.3/lib/rake.rb:2349:in `raw_load_rakefile' /usr/lib/ruby/gems/1.8/gems/rake-0.8.3/lib/rake.rb:1985:in `load_rakefile' /usr/lib/ruby/gems/1.8/gems/rake-0.8.3/lib/rake.rb:2036:in `standard_exception_handling' /usr/lib/ruby/gems/1.8/gems/rake-0.8.3/lib/rake.rb:1984:in `load_rakefile' /usr/lib/ruby/gems/1.8/gems/rake-0.8.3/lib/rake.rb:1969:in `run' /usr/lib/ruby/gems/1.8/gems/rake-0.8.3/lib/rake.rb:2036:in `standard_exception_handling' /usr/lib/ruby/gems/1.8/gems/rake-0.8.3/lib/rake.rb:1967:in `run' /usr/lib/ruby/gems/1.8/gems/rake-0.8.3/bin/rake:31 /usr/local/bin/rake:19:in `load' /usr/local/bin/rake:19
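
    The trace shows the failure coming from vendor/plugins/google-geocoder/tasks/rspec.rake:5, which requires 'spec' (RSpec 1.x) whether or not the app uses it. Two common ways out, sketched here as assumptions rather than a verified fix for this app: install the rspec gem, or make the plugin's rake file tolerate its absence:

      # In vendor/plugins/google-geocoder/tasks/rspec.rake (illustrative):
      # swallow the LoadError so a missing rspec gem doesn't abort rake.
      begin
        require 'spec'
      rescue LoadError
        # rspec isn't installed in this environment; skip the spec tasks
      end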

    Read the article

  • Input/output (read) errors in Bacula while setting up a Tape Drive + Autochanger

    - by Kyle Brandt
    When running the label barcode command in Bacula I am getting Input/output errors. I am just getting started in trying to set this up:

        Connecting to Storage daemon TapeDevice at ny-back01.ny.stackoverflow.com:9103 ...
        Sending label command for Volume "ACJ332" Slot 1 ...
        3307 Issuing autochanger "unload slot 8, drive 0" command.
        3304 Issuing autochanger "load slot 1, drive 0" command.
        3305 Autochanger "load slot 1, drive 0", status is OK.
        block.c:1010 Read error on fd=5 at file:blk 0:0 on device "ULTRIUM-HH4" (/dev/st0). ERR=Input/output error.
        3000 OK label. VolBytes=64512 DVD=0 Volume="ACJ332" Device="ULTRIUM-HH4" (/dev/st0)
        Catalog record for Volume "ACJ332", Slot 1 successfully created.
        Sending label command for Volume "ACJ331" Slot 2 ...
        3307 Issuing autochanger "unload slot 1, drive 0" command.
        3304 Issuing autochanger "load slot 2, drive 0" command.
        3305 Autochanger "load slot 2, drive 0", status is OK.
        block.c:1010 Read error on fd=5 at file:blk 0:0 on device "ULTRIUM-HH4" (/dev/st0). ERR=Input/output error.
        3000 OK label. VolBytes=64512 DVD=0 Volume="ACJ331" Device="ULTRIUM-HH4" (/dev/st0)
        Catalog record for Volume "ACJ331", Slot 2 successfully created.
        Sending label command for Volume "ACJ328" Slot 3 ...
        3307 Issuing autochanger "unload slot 2, drive 0" command.
        3304 Issuing autochanger "load slot 3, drive 0" command.
        3305 Autochanger "load slot 3, drive 0", status is OK.
        block.c:1010 Read error on fd=5 at file:blk 0:0 on device "ULTRIUM-HH4" (/dev/st0). ERR=Input/output error.
        3000 OK label. VolBytes=64512 DVD=0 Volume="ACJ328" Device="ULTRIUM-HH4" (/dev/st0)
        Catalog record for Volume "ACJ328", Slot 3 successfully created.
        Sending label command for Volume "ACJ329" Slot 4 ...
        3307 Issuing autochanger "unload slot 3, drive 0" command.
        3304 Issuing autochanger "load slot 4, drive 0" command.
        3305 Autochanger "load slot 4, drive 0", status is OK.
        block.c:1010 Read error on fd=5 at file:blk 0:0 on device "ULTRIUM-HH4" (/dev/st0). ERR=Input/output error.
        3000 OK label. VolBytes=64512 DVD=0 Volume="ACJ329" Device="ULTRIUM-HH4" (/dev/st0)
        Catalog record for Volume "ACJ329", Slot 4 successfully created.
        Sending label command for Volume "ACJ335" Slot 5 ...
        3307 Issuing autochanger "unload slot 4, drive 0" command.
        3304 Issuing autochanger "load slot 5, drive 0" command.
        3305 Autochanger "load slot 5, drive 0", status is OK.
        block.c:1010 Read error on fd=5 at file:blk 0:0 on device "ULTRIUM-HH4" (/dev/st0). ERR=Input/output error.
        3000 OK label. VolBytes=64512 DVD=0 Volume="ACJ335" Device="ULTRIUM-HH4" (/dev/st0)
        Catalog record for Volume "ACJ335", Slot 5 successfully created.
        Sending label command for Volume "ACJ334" Slot 6 ...
        3307 Issuing autochanger "unload slot 5, drive 0" command.
        3304 Issuing autochanger "load slot 6, drive 0" command.
        3305 Autochanger "load slot 6, drive 0", status is OK.
        block.c:1010 Read error on fd=5 at file:blk 0:0 on device "ULTRIUM-HH4" (/dev/st0). ERR=Input/output error.
        3000 OK label. VolBytes=64512 DVD=0 Volume="ACJ334" Device="ULTRIUM-HH4" (/dev/st0)
        Catalog record for Volume "ACJ334", Slot 6 successfully created.
        Sending label command for Volume "ACJ333" Slot 7 ...
        3307 Issuing autochanger "unload slot 6, drive 0" command.
        3304 Issuing autochanger "load slot 7, drive 0" command.
        3305 Autochanger "load slot 7, drive 0", status is OK.
        block.c:1010 Read error on fd=5 at file:blk 0:0 on device "ULTRIUM-HH4" (/dev/st0). ERR=Input/output error.
        3000 OK label. VolBytes=64512 DVD=0 Volume="ACJ333" Device="ULTRIUM-HH4" (/dev/st0)
        Catalog record for Volume "ACJ333", Slot 7 successfully created.
        Sending label command for Volume "ACJ330" Slot 8 ...
        3307 Issuing autochanger "unload slot 7, drive 0" command.

    Bacula-dir:

        # Definition of file storage device
        Storage {
          Name = TapeDevice
          # Do not use "localhost" here
          Address = ny-back01....   # N.B. Use a fully qualified name here
          SDPort = 9103
          Password = "..."
          Device = ULTRIUM-HH4
          Media Type = LTO-4
          Media Type = File
          Autochanger = Yes
        }

    Bacula-sd:

        Autochanger {
          Name = StorageLoader1U
          Device = ULTRIUM-HH4
          Changer Command = "/etc/bacula/scripts/mtx-changer %c %o %S %a %d"
          Changer Device = /dev/sg5
        }

        Device {
          Name = ULTRIUM-HH4
          Media Type = LTO-4
          Archive Device = /dev/st0
          AutomaticMount = yes;
          AlwaysOpen = yes;
          RemovableMedia = yes;
          RandomAccess = no;
          AutoChanger = yes;
          RandomAccess = no;
        }

    Does anyone know what this means, or why I am getting it?
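    Two things may be worth checking before digging further. First, the bacula-dir Storage resource above contains two Media Type lines ('LTO-4' and 'File'); a Storage resource normally carries a single Media Type, and it has to match the Media Type of the matching Device resource in bacula-sd. Second, the drive can be exercised outside the director with Bacula's btape utility; a typical first diagnostic looks something like this (paths assume a stock install):

        # Sanity-check the drive from the OS first
        mt -f /dev/st0 status

        # Then let Bacula exercise the device directly; at the btape prompt, run 'test'
        btape -c /etc/bacula/bacula-sd.conf /dev/st0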

    Read the article

  • What's the most efficient way to load data from a file to a collection on-demand?

    - by Dan
    I'm working on a Java project that will allow users to parse multiple files with potentially thousands of lines. The parsed information will be stored in different objects, which will then be added to a collection. Since the GUI won't need to load ALL these objects at once and keep them in memory, I'm looking for an efficient way to load/unload data from files, so that data is only loaded into the collection when a user requests it. I'm just evaluating options right now.

    I've also thought about the case where, after loading a subset of the data into the collection and presenting it on the GUI, I need to reload previously viewed data. What is the best way: re-run the parser and repopulate the collection and the GUI, find a way to keep the collection in memory, or serialize/deserialize the collection itself?

    I know that loading/unloading subsets of data can get tricky if some sort of data filtering is performed. Let's say that I filter on ID, so my new subset will contain data from two previously analysed subsets. This would be no problem if I kept a master copy of the whole data in memory.

    I've read that google-collections are good and efficient when handling big amounts of data, and offer methods that simplify lots of things, so this might offer an alternative that would allow me to keep the collection in memory. This is just general talk; the question of which collection to use is a separate and complex thing. Do you know what the general recommendation is for this type of task? I'd like to hear what you've done with similar scenarios. I can provide more specifics if needed.
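    One way to picture the 'load on demand, evict under pressure' approach is a small page cache over the parser: keep the most recently used chunks of parsed records in memory, and re-parse a chunk when it has been evicted. The sketch below is only an illustration; the types, the page notion and the eviction limit are invented for the example, not taken from the question.

        import java.util.LinkedHashMap;
        import java.util.List;
        import java.util.Map;

        // Illustrative types standing in for the project's parsed objects.
        class ParsedRecord {
            final String line;
            ParsedRecord(String line) { this.line = line; }
        }

        interface PageParser {
            List<ParsedRecord> parsePage(int pageIndex); // re-reads the file for that page
        }

        // Hypothetical page cache: keeps the most recently used parsed pages in memory
        // and transparently re-parses a page after it has been evicted.
        class RecordPageCache {
            private static final int MAX_PAGES = 16; // illustrative limit

            private final PageParser parser;
            private final Map<Integer, List<ParsedRecord>> pages =
                new LinkedHashMap<Integer, List<ParsedRecord>>(MAX_PAGES, 0.75f, true) {
                    @Override
                    protected boolean removeEldestEntry(Map.Entry<Integer, List<ParsedRecord>> eldest) {
                        return size() > MAX_PAGES; // evict the least recently used page
                    }
                };

            RecordPageCache(PageParser parser) { this.parser = parser; }

            List<ParsedRecord> getPage(int pageIndex) {
                List<ParsedRecord> page = pages.get(pageIndex);
                if (page == null) {            // cache miss: parse just that page again
                    page = parser.parsePage(pageIndex);
                    pages.put(pageIndex, page);
                }
                return page;
            }
        }

    The access-ordered LinkedHashMap gives LRU eviction for free; the filtering problem is then a matter of which pages a filter has to touch, not of keeping everything resident.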

    Read the article

  • IsNumeric() Broken? Only up to a point.

    - by Phil Factor
    In SQL Server, probably the best-known 'broken' function is poor ISNUMERIC(). The documentation says: 'ISNUMERIC returns 1 when the input expression evaluates to a valid numeric data type; otherwise it returns 0. ISNUMERIC returns 1 for some characters that are not numbers, such as plus (+), minus (-), and valid currency symbols such as the dollar sign ($).'

    Although it will take numeric data types (no, I don't understand why either), its main use is supposed to be to test strings to make sure that you can convert them to whatever numeric datatype you are using (int, numeric, bigint, money, smallint, smallmoney, tinyint, float, decimal, or real). It wouldn't actually be of much use anyway, since each datatype has different rules. You actually need a RegEx to do a reasonably safe check. The other snag is that the IsNumeric() function is a bit broken.

        SELECT ISNUMERIC(',')

    This cheerfully returns 1, since it believes that a comma is a currency symbol (not a thousands-separator) and you meant to say 0, in this strange currency. However,

        SELECT ISNUMERIC(N'€')

    isn't recognized as currency. '+' and '-' are seen to be numeric, which is stretching it a bit. You'll see that what it allows isn't really broken, except that it doesn't recognize Unicode currency symbols: it just tells you that one numeric type is likely to accept the string if you do an explicit conversion using it. Both of these work fine, so poor IsNumeric has to follow suit:

        SELECT  CAST('0E0' AS FLOAT)
        SELECT  CAST(',' AS MONEY)

    but it is harder to predict which data type will accept a '+' sign.

        SELECT  CAST ('+' AS money)   --0.00
        SELECT  CAST ('+' AS INT)     --0
        SELECT  CAST ('+' AS numeric)
        /* Msg 8115, Level 16, State 6, Line 4
           Arithmetic overflow error converting varchar to data type numeric. */
        SELECT  CAST ('+' AS FLOAT)
        /* Msg 8114, Level 16, State 5, Line 5
           Error converting data type varchar to float. */

    So we can begin to say that maybe IsNumeric isn't really broken, but is answering a silly question: 'Is there some numeric datatype to which I can convert this string?' Almost, but not quite. The bug is that it doesn't understand Unicode currency characters such as the euro or the franc, which are actually valid when used in the CAST function (perhaps they're delaying fixing the euro bug just in case it isn't necessary).

        SELECT ISNUMERIC (N'€23.67')      --0
        SELECT  CAST (N'€23.67' AS money) --23.67
        SELECT ISNUMERIC (N'£100.20')     --1
        SELECT  CAST (N'£100.20' AS money) --100.20

    Also, the CAST function itself is quirky, in that it cannot convert perfectly reasonable string representations of integers into integers:

        SELECT ISNUMERIC('200,000')       --1
        SELECT  CAST ('200,000' AS INT)
        /* Msg 245, Level 16, State 1, Line 2
           Conversion failed when converting the varchar value '200,000' to data type int. */

    A more sensible question is 'Is this an integer or decimal number?'. This cuts out a lot of the apparent quirkiness. We do this by the 'E+00' trick. If we want to include floats in the check, we'll need to make it a bit more complicated. Here is a small test-rig:

        SELECT  PossibleNumber,
                ISNUMERIC(CAST(PossibleNumber AS NVARCHAR(20)) + 'E+00') AS Hack,
                ISNUMERIC(PossibleNumber + CASE WHEN PossibleNumber LIKE '%E%'
                                                THEN '' ELSE 'E+00' END) AS Hackier,
                ISNUMERIC(PossibleNumber) AS RawIsNumeric
        FROM    (SELECT CAST(',' AS NVARCHAR(10)) AS PossibleNumber
                 UNION SELECT '£' UNION SELECT '.'
                 UNION SELECT '56' UNION SELECT '456.67890'
                 UNION SELECT '0E0' UNION SELECT '-'
                 UNION SELECT '-' UNION SELECT '.'
                 UNION SELECT N'€' UNION SELECT N'¢'
                 UNION SELECT N'₣' UNION SELECT N'€34.56'
                 UNION SELECT '-345' UNION SELECT '3.332228E+09') AS examples

    Which gives the result...

        PossibleNumber Hack        Hackier     RawIsNumeric
        -------------- ----------- ----------- ------------
        ₣              0           0           0
        -              0           0           1
        ,              0           0           1
        .              0           0           1
        ¢              0           0           1
        £              0           0           1
        €              0           0           0
        €34.56         0           0           0
        0E0            0           1           1
        3.332228E+09   0           1           1
        -345           1           1           1
        456.67890      1           1           1
        56             1           1           1

    I suspect that this is as far as you'll get before you abandon IsNumeric in favour of a regex. You can only get part of the way with the LIKE wildcards, because you cannot specify quantifiers. You'll need full-blown Regex strings like these...

        [-+]?\b[0-9]+(\.[0-9]+)?\b  #INT or REAL
        [-+]?\b[0-9]{1,3}\b         #TINYINT
        [-+]?\b[0-9]{1,5}\b         #SMALLINT

    ...but even these will fail to catch numbers out of range. So is IsNumeric() an out-and-out rogue function? Not really, I'd say, but then it would need a damned good lawyer.
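    To illustrate the LIKE point above, here is a quick integer-ish check along the same lines as the test-rig. The expression and the column name are mine, a sketch rather than anything from the article, and note that, much like CAST, it lets a bare '+' or '-' through:

        SELECT  PossibleNumber,
                CASE WHEN PossibleNumber LIKE '[-+0-9]%'        -- optional sign or digit first
                      AND PossibleNumber NOT LIKE '_%[^0-9]%'   -- and nothing but digits after that
                     THEN 1 ELSE 0 END AS LooksLikeAnInt
        FROM    (SELECT '56' AS PossibleNumber
                 UNION SELECT '-345' UNION SELECT '+'
                 UNION SELECT '200,000' UNION SELECT '456.67890') AS examples

    No quantifiers means no length or range checking, which is exactly where the regex takes over.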

    Read the article

  • Been asked a dozen times, but no luck from what I've read. Prevent Anchor Jumping on page load

    - by jasenmp
    I'm currently working with a WP theme that can be found here: sanjay.dmediastudios.com. I'm using 'smooth scroll' on my page, and I'm attempting to have the page scroll smoothly to the requested section when coming from an external link (for instance, coming from the blog page takes you to sanjay.dmediastudios.com/#portfolio). From there I want the page to start at the top and THEN scroll to the portfolio section. What's happening is that it briefly displays the portfolio section (the anchor jump) and THEN resets to the top and scrolls down. It's driving me nuts :(. Here is the code I'm using:

    Click function for smooth scroll:

        var $root = $('html, body'); // assumed: defined somewhere like this in the theme, since it is used below

        $(function() {
          $('.menu li a').click(function() {
            if (location.pathname.replace(/^\//, '') == this.pathname.replace(/^\//, '')
                && location.hostname == this.hostname) {
              var target = $(this.hash);
              target = target.length ? target : $('[name=' + this.hash.slice(1) + ']');
              if (target.length) {
                $root.animate({
                  scrollTop: target.offset().top - 75
                }, 800, 'swing');
                return false;
              }
            }
          }); // end of click function
        });

    The page load function:

        $(window).on("load", function() {
          if (location.hash) { // do the test straight away
            window.scrollTo(0, 0); // execute it straight away
            setTimeout(function() {
              window.scrollTo(0, 0); // run it a bit later also for browser compatibility
            }, 1);
          }
          var urlHash = window.location.href.split("#")[1];
          if (urlHash && $('#' + urlHash).length) {
            $('html,body').animate({
              scrollTop: $('#' + urlHash).offset().top - 75
            }, 800, 'swing');
          }
        });

    Any help would be MUCH appreciated.
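    One pattern that is often suggested for this (a sketch, untested against the theme): strip the hash before the browser can act on it, remember it, and animate once the page has loaded. It assumes history.replaceState is available (IE10 and later) and that the script runs as early as possible, e.g. in the <head>:

        // Run this before the browser performs the native anchor jump.
        if (window.location.hash && window.history.replaceState) {
          var savedHash = window.location.hash;
          // Remove the hash so no native jump occurs, but keep it for later.
          window.history.replaceState(null, '', window.location.pathname + window.location.search);
          $(window).on('load', function() {
            var $target = $(savedHash);
            if ($target.length) {
              $('html, body').animate({
                scrollTop: $target.offset().top - 75
              }, 800, 'swing');
            }
          });
        }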

    Read the article

  • Peculiar JRE behaviour running RMI server under load, should I worry?

    - by darri
    I've been developing a minimalistic Java rich-client CRUD application framework for the past few years, mostly as a hobby but also actively using it to write applications for my current employer. The framework provides database access to clients either via a local JDBC-based connection or a lightweight RMI server.

    Last night I started a load-testing application, which ran 100 headless clients bombarding the server with requests, each client waiting only 1 - 2 seconds between running simple use cases, consisting of selecting records along with associated detail records from a simple e-store database (Chinook). This morning, when I looked at the telemetry results from the server profiling session, I noticed something that seemed strange to me, and it made me keep the setup running for the remainder of the day; I don't really know what conclusions to draw from it. Here are the results (four telemetry screenshots): Memory, GC activity, Threads, CPU load. Interesting, right?

    So the question is: is this normal or erratic? Is this simply the JRE (1.6.0_03 on Windows XP) doing its thing (perhaps related to the JRE configuration), or is my framework design somehow causing it? Running the server against MySQL as opposed to an embedded H2 database does not affect the pattern. I am leaving out the details of my server design, but I'll be happy to elaborate if this behaviour is deemed erratic.

    Read the article

  • Getting the hang of CodeIgniter - Templating / loading views

    - by kilrizzy
    Attempting to learn CI and going through the docs to get a better understanding. Without getting a separate library, I could make a template by including a list of views, like so:

        $this->load->view('header');
        $this->load->view('navigation');
        $this->load->view('sidenav_open');
        $this->load->view('blocks/userinfo');
        $this->load->view('blocks/stats');
        $this->load->view('sidenav_close');
        $this->load->view('content', $data);
        $this->load->view('footer');

    This makes sense, but would I actually have that in each of my controllers (pages)? I'm not sure if there is a way to include this in the initial controller (welcome) and then somehow reference it from the others. Or perhaps there is something I am missing completely.
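    A common CodeIgniter pattern for this, sketched below, is to keep one master 'template' view that pulls in the shared partials and renders whichever content view the controller names. The view names other than those from the question ('template' and the $content_view variable) are illustrative assumptions:

        // In any controller method:
        $data['content_view'] = 'content'; // the page-specific view for this controller
        $this->load->view('template', $data);

        // application/views/template.php
        <?php
        $this->load->view('header');
        $this->load->view('navigation');
        $this->load->view('sidenav_open');
        $this->load->view('blocks/userinfo');
        $this->load->view('blocks/stats');
        $this->load->view('sidenav_close');
        $this->load->view($content_view); // data passed to the template is visible here too
        $this->load->view('footer');

    Each controller then names only its content view, and the shared layout lives in one place.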

    Read the article

  • How to load src in html5 Video tag from JavaScript?

    - by luis_laurent
    Here is the thing: I'm working with ASP.NET and I'm using this particular SignalR library to broadcast a video to my clients, but as far as I know I cannot stream video, because SignalR is a messaging system and is not intended to stream video, files, or things like that. Now what I am trying to do is split the video into buffers that are Base64-encoded into strings; then, on the client, I decode the string and try to load it into the source of a video tag. Here is what I am doing on the client:

    HTML code:

        <video id="myVideo">
            <source id="video_source">
        </video>

    JavaScript code:

        // here, somehow, I am getting the string with the base64-encoded video
        function playVideo(message) {
            var myVideo = document.getElementById("myVideo");
            var mySource = document.getElementById("video_source");
            mySource.setAttribute("src", getEncodedVideoString("avi", message));
            myVideo.load();
            myVideo.play();
        };

        // here I am formatting and concatenating the string for my source attribute
        function getEncodedVideoString(type, file) {
            return 'data:video/' + type + ';base64,' + $.base64.decode(file);
        }

    Well, as you can see, at the moment I'm facing a scenario that is a little weird, but I already have the video encoded on the client; now I just need to find a way to play it. And here is where my question comes up: has anyone done something like this before, or do you have any idea or suggestion for doing it? P.S. I am using this jquery-base64 library to decode my string.
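    Two observations, hedged as suggestions rather than answers. A data: URI that declares ';base64,' expects the payload to still be base64-encoded, so decoding it first works against you; and browsers generally cannot play AVI at all, whatever the delivery mechanism. For feeding chunks into a <video> element as they arrive, the usual tool is the Media Source Extensions API. The sketch below is illustrative only: it assumes the chunks are bytes of a fragmented MP4, the codec string is an example, and onVideoChunk is a hypothetical hook you would call from the SignalR message handler.

        // A sketch using Media Source Extensions (supported in recent browsers).
        var video = document.getElementById('myVideo');
        var mediaSource = new MediaSource();
        video.src = URL.createObjectURL(mediaSource);

        mediaSource.addEventListener('sourceopen', function () {
          // The codec string must match the actual encoding; this one is illustrative.
          var sourceBuffer = mediaSource.addSourceBuffer('video/mp4; codecs="avc1.42E01E, mp4a.40.2"');

          // Hypothetical hook: call this with each base64 chunk received over SignalR.
          window.onVideoChunk = function (base64Chunk) {
            var binary = atob(base64Chunk); // base64 -> binary string
            var bytes = new Uint8Array(binary.length);
            for (var i = 0; i < binary.length; i++) {
              bytes[i] = binary.charCodeAt(i);
            }
            sourceBuffer.appendBuffer(bytes); // hand the bytes to the <video> element
          };

          video.play();
        });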

    Read the article

  • Mandatory profile on Terminal server fails to load. Userenv.log debug

    - by Datapimp23
    Hi, we're having a lot of corrupted profiles lately on our profile share. At the moment I have no clue why, but I decided to switch to one mandatory profile, since the users can all use the same one and there is no need to have separate profiles for each user. Here's what I did.

    I logged into the Terminal Server with a new user and configured some stuff (imported certificates and a few files). Then I logged out. Later, as admin, I copied the profile to another server and renamed it to bsilo. I made sure the user hive settings were adjusted; everyone had access to the hive. I shared the bsilo folder with full access for everyone. I set the NTFS permissions to read, read & execute, and list folder contents for domain users. I also renamed NTUSER.DAT to NTUSER.MAN. Then I set an environment variable %manprofile% on the Terminal Server that points to \\server\bsilo\ntuser.man, and set that variable as the Terminal Services profile path for a test user. When I log in as the user, I get the following output: "The system cannot find the path specified." Can somebody point me in the right direction? Thanks.

        USERENV(1774.d18) 15:52:39:724 InitializePolicyProcessing: Initialised Machine Mutex/Events
        USERENV(1774.d18) 15:52:39:724 InitializePolicyProcessing: Initialised User Mutex/Events
        USERENV(1774.d18) 15:52:39:724 LibMain: Process Name: \??\C:\WINDOWS\system32\winlogon.exe
        USERENV(1774.d18) 15:52:48:005 LoadUserProfile: Yes, we can impersonate the user. Running as self
        USERENV(1774.d18) 15:52:48:005 =========================================================
        USERENV(1774.d18) 15:52:48:005 LoadUserProfile: Entering, hToken = <0x340, lpProfileInfo = 0x6e5d8
        USERENV(1774.d18) 15:52:48:005 LoadUserProfile: lpProfileInfo-dwFlags = <0x0
        USERENV(1774.d18) 15:52:48:005 LoadUserProfile: lpProfileInfo-lpUserName =
        USERENV(1774.d18) 15:52:48:005 LoadUserProfile: lpProfileInfo-lpProfilePath = <\server\bsilo\ntuser.man
        USERENV(1774.d18) 15:52:48:005 LoadUserProfile: lpProfileInfo-lpDefaultPath = <\BDPINF5\netlogon\Default User
        USERENV(1774.d18) 15:52:48:005 LoadUserProfile: NULL server name
        USERENV(1774.d18) 15:52:48:005 LoadUserProfile: no thread token found, impersonating self.
        USERENV(1774.d18) 15:52:48:005 GetInterface: Returning rpc binding handle
        USERENV(218.2f94) 15:52:48:005 IProfileSecurityCallBack: client authenticated.
        USERENV(218.2f94) 15:52:48:005 DropClientContext: Got client token 000009B4, sid = S-1-5-18
        USERENV(218.2f94) 15:52:48:005 MIDL_user_allocate enter
        USERENV(218.2f94) 15:52:48:005 DropClientContext: load profile object successfully made
        USERENV(218.2f94) 15:52:48:005 DropClientContext: Returning 0
        USERENV(1774.d18) 15:52:48:005 LoadUserProfile: Calling DropClientToken (as self) succeeded
        USERENV(1774.d18) 15:52:48:005 CProfileDialog::Initialize : Cookie generated
        USERENV(1774.d18) 15:52:48:005 CProfileDialog::Initialize : Endpoint generated
        USERENV(218.1f38) 15:52:48:005 IProfileSecurityCallBack: client authenticated.
        USERENV(218.1f38) 15:52:48:020 LoadUserProfileI: RPC end point IProfileDialog_9D36D6DD48F0578A2A41B23D7A982E63
        USERENV(218.1f38) 15:52:48:020 In LoadUserProfileP
        USERENV(218.1f38) 15:52:48:020 LoadUserProfile: Running as client, sid = S-1-5-18
        USERENV(218.1f38) 15:52:48:020 =========================================================
        USERENV(218.1f38) 15:52:48:020 LoadUserProfile: Entering, hToken = <0x98c, lpProfileInfo = 0x9c940
        USERENV(218.1f38) 15:52:48:020 LoadUserProfile: lpProfileInfo-dwFlags = <0x0
        USERENV(218.1f38) 15:52:48:020 LoadUserProfile: lpProfileInfo-lpUserName =
        USERENV(218.1f38) 15:52:48:020 LoadUserProfile: lpProfileInfo-lpProfilePath = <\server\bsilo\ntuser.man
        USERENV(218.1f38) 15:52:48:020 LoadUserProfile: lpProfileInfo-lpDefaultPath = <\BDPINF5\netlogon\Default User
        USERENV(218.1f38) 15:52:48:020 LoadUserProfile: NULL server name
        USERENV(218.1f38) 15:52:48:020 LoadUserProfile: User sid: S-1-5-21-807756564-1922302612-1565739477-22627
        USERENV(218.1f38) 15:52:48:020 CSyncManager::EnterLock
        USERENV(218.1f38) 15:52:48:020 CSyncManager::EnterLock: No existing entry found
        USERENV(218.1f38) 15:52:48:020 CSyncManager::EnterLock: New entry created
        USERENV(218.1f38) 15:52:48:020 CHashTable::HashAdd: S-1-5-21-807756564-1922302612-1565739477-22627 added in bucket 11
        USERENV(218.1f38) 15:52:48:020 LoadUserProfile: Wait succeeded. In critical section.
        USERENV(218.1f38) 15:52:48:864 GetOldSidString: Failed to open profile profile guid key with error 2
        USERENV(218.1f38) 15:52:48:864 GetProfileSid: No Guid - Sid Mapping available
        USERENV(218.1f38) 15:52:48:864 TestIfUserProfileLoaded: return with error 2.
        USERENV(218.1f38) 15:52:48:864 GetOldSidString: Failed to open profile profile guid key with error 2
        USERENV(218.1f38) 15:52:48:864 GetProfileSid: No Guid - Sid Mapping available
        USERENV(218.1f38) 15:52:48:864 LoadUserProfile: Expanded profile path is \server\bsilo\ntuser.man
        USERENV(218.1f38) 15:52:48:880 ParseProfilePath: Entering, lpProfilePath = <\server\bsilo\ntuser.man
        USERENV(218.1f38) 15:52:48:880 CheckXForestLogon: checking x-forest logon, user handle = 2444
        USERENV(218.1f38) 15:52:48:880 CheckXForestLogon: policy set to disable XForest check
        USERENV(218.1f38) 15:52:48:880 ParseProfilePath: Mandatory profile (.man extension)
        USERENV(218.1f38) 15:52:49:239 AbleToBypassCSC: Try to bypass CSC
        USERENV(218.1f38) 15:52:49:239 AbleToBypassCSC: tried NPAddConnection3ForCSCAgent. Error 2109
        USERENV(218.1f38) 15:52:49:239 AbleToBypassCSC: Share \server\bsilo mapped to drive E. Returned Path E:\ntuser.man
        USERENV(218.1f38) 15:52:49:239 ParseProfilePath: CSC bypassed. Profile path E:\ntuser.man
        USERENV(218.1f38) 15:52:49:255 ParseProfilePath: Tick Count = 0
        USERENV(218.1f38) 15:52:49:255 ParseProfilePath: GetFileAttributes found something with attributes <0x2022
        USERENV(218.1f38) 15:52:49:255 ParseProfilePath: Found a file
        USERENV(218.1f38) 15:52:49:255 ReportError: Impersonating user.
        USERENV(218.1f38) 15:52:49:255 ReportError: Logging Error DETAIL - The system cannot find the path specified.
        USERENV(218.1f38) 15:52:49:255 GetInterface: Returning rpc binding handle
        USERENV(218.1f38) 15:52:49:255 ReportError: RPC End point IProfileDialog_9D36D6DD48F0578A2A41B23D7A982E63
        USERENV(218.1f38) 15:52:49:255 ReportError: waiting on rpc async event
        USERENV(1774.2398) 15:52:49:255 ErrorDialogEx: Calling DialogBoxParam
        USERENV(1774.2398) 15:52:49:270 ErrorDlgProc:: DialogBoxParam
        USERENV(218.1f38) 15:52:52:177 RpcAsyncCompleteCall finished, status = 0
        USERENV(218.1f38) 15:52:52:177 ReleaseInterface: Releasing rpc binding handle
        USERENV(218.1f38) 15:52:52:177 LoadUserProfile: ParseProfilePath returned FALSE
        USERENV(218.1f38) 15:52:52:177 CancelCSCBypassedConnection: Cancelling connection of E:
        USERENV(218.1f38) 15:52:52:177 CancelCSCBypassedConnection: Connection deleted.
        USERENV(218.1f38) 15:52:52:177 CSyncManager::LeaveLock
        USERENV(218.1f38) 15:52:52:192 CSyncManager::LeaveLock: Lock released
        USERENV(218.1f38) 15:52:52:192 CHashTable::HashDelete: S-1-5-21-807756564-1922302612-1565739477-22627 deleted
        USERENV(218.1f38) 15:52:52:192 CSyncManager::LeaveLock: Lock deleted
        USERENV(218.1f38) 15:52:52:192 LoadUserProfile: 003 About Reverted back to user <00000000
        USERENV(218.1f38) 15:52:52:192 LoadUserProfile: Leaving with a value of 0.
        USERENV(218.1f38) 15:52:52:192 =========================================================
        USERENV(218.1f38) 15:52:52:192 LoadUserProfileI: LoadUserProfileP failed with 3
        USERENV(218.1f38) 15:52:52:192 LoadUserProfileI: returning 3
        USERENV(1774.d18) 15:52:52:192 LoadUserProfile: Running as self
        USERENV(1774.d18) 15:52:52:192 LoadUserProfile: Calling LoadUserProfileI failed. err = 3
        USERENV(218.200c) 15:52:52:192 IProfileSecurityCallBack: client authenticated.
        USERENV(218.200c) 15:52:52:192 ReleaseClientContext: Releasing context
        USERENV(218.200c) 15:52:52:192 ReleaseClientContext_s: Releasing context
        USERENV(218.200c) 15:52:52:192 MIDL_user_free enter
        USERENV(1774.d18) 15:52:52:192 ReleaseInterface: Releasing rpc binding handle
        USERENV(1774.d18) 15:52:52:192 LoadUserProfile: Returning FALSE. Error = 3

    Read the article

< Previous Page | 109 110 111 112 113 114 115 116 117 118 119 120  | Next Page >