Search Results



  • PHP W3 Validator API, Is this good? [closed]

    - by Josh Purcell
    I was trying to find a way to check whether my site's code was valid, but I kept going over to the W3 Validator by hand, so I decided to make an "API" (though it really isn't one!). I just wanted to know if anybody can find a better solution than the one I have made. This is what I currently use, called with ?uri=http://www.mydomain.com :

        <?php
        if (!$_GET['uri']) {
            echo "No URI!";
        } else {
            $CheckURI = "http://validator.w3.org/check?uri=" . $_GET['uri'];
            $URL   = file_get_contents($CheckURI);
            $Start = strpos($URL, "<title>") + 7;
            $End   = strpos($URL, "</title>");
            $Title = substr($URL, $Start, $End - $Start);
            if (preg_match('[Invalid]', $Title)) {
                // Code is INVALID
                echo "<a href='$CheckURI' title='This is not good!' target='_BLANK'>INVALID Source</a>";
            } elseif (preg_match('[Valid]', $Title)) {
                // Code is VALID
                echo "<a href='$CheckURI' title='Check It Yourself!' target='_BLANK'>Valid Source</a>";
            } else {
                // It went WRONG
                echo "";
            }
        }
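
    A side note on the scraping: the test for Invalid has to come before the test for Valid, since "Valid" is a substring of "Invalid", and the code above gets that right. That said, the validator also reports its verdict in an X-W3C-Validator-Status response header (Valid, Invalid or Abort), which saves parsing the title entirely. A minimal sketch of that check, written in Python for brevity (the URL handling mirrors the PHP above; verify the header name against the service you actually hit):

        import urllib.parse
        import urllib.request

        def check_markup(uri: str) -> str:
            """Return the validator's verdict for one page.

            Relies on validator.w3.org sending the X-W3C-Validator-Status
            header (Valid / Invalid / Abort); falls back to 'Unknown'.
            """
            url = "http://validator.w3.org/check?uri=" + urllib.parse.quote(uri, safe="")
            req = urllib.request.Request(url, headers={"User-Agent": "markup-check-sketch"})
            with urllib.request.urlopen(req, timeout=10) as resp:
                return resp.headers.get("X-W3C-Validator-Status", "Unknown")

        if __name__ == "__main__":
            print(check_markup("http://www.mydomain.com"))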


  • Reaping the Benefits of the Image Packaging System

    - by rickramsey
    One of the promises made about Oracle Solaris 11 was easier installation. Remember? Do you also remember how involved installing Oracle Solaris Cluster used to be? It was so involved, in fact, that we (when we were Sun Microsystems) wouldn't even let you do it yourself. How times have changed. New - How to Automate The Installation of Oracle Solaris Cluster 4.0 Thanks to the new image packaging architecture in Oracle Solaris 11, you can now automate the installation of Oracle Solaris Cluster 4.0. Why is that such a big deal? As Lucia Lai explains it: "Without the AI, you would have to manually install the cluster components on the cluster nodes, and then run the scinstall tool to add the nodes to the cluster. If, instead, you use the AI, both the Oracle Solaris 11 and the Oracle Solaris Cluster 4.0 packages are installed onto the cluster nodes directly from Image Packaging System (IPS) repositories, and the nodes are booted into a new cluster with minimum user intervention." Lucia goes on to explain how to set up and configure the AI server, how to plan your cluster configuration for the automated installation, how to use the scinstall utility, how to set up the DHCP server, and more. A thorough, well-written article. - Rick


  • Click buttons on the mouse stopped working in 12.10

    - by Kushal
    Everything was great for a couple of weeks after the upgrade, and then all of a sudden the click buttons on my trackpad (as well as on any other USB mouse I would hook up) stopped working. The pointer moves fine, but the clicks don't work. Sometimes the left click doesn't work but right click does, and sometimes neither works. I noticed this would begin when I would accidentally drag some text in a web browser (you know, when you try to move the pointer with the trackpad but accidentally tap down, and it starts to drag whatever text you've selected in the window), and then you're done: the clicks won't work after that. They would work upon rebooting or logging off and back on, but then after a few minutes of usage things would go back to being broken again. It happened a LOT when I was trying to play Scrabble on Facebook. I've raised a bug for this, but I haven't heard back anything on it. Here's the bug report: https://bugs.launchpad.net/bugs/1077805 Since the system was unusable this way, I had to remove it and install another OS based on 12.04. Has anyone else faced this issue, or does someone know what to do to fix it? I'd go back to vanilla Ubuntu in a heartbeat if this issue can be fixed.
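
    If it comes back, one way to narrow the fault down is to check whether the kernel still sees the button presses while the desktop ignores them; if it does, the breakage is in the input stack above the kernel, which would fit the "logging off fixes it" pattern. A rough Python sketch of that check (run as root; the event node below is hypothetical, look yours up in /proc/bus/input/devices):

        import struct

        # struct input_event on 64-bit Linux: timeval (2 longs), type,
        # code (unsigned shorts), value (int). This layout is an assumption
        # about the platform, not something read from the machine.
        EVENT_FORMAT = "llHHi"
        EVENT_SIZE = struct.calcsize(EVENT_FORMAT)
        EV_KEY, BTN_LEFT, BTN_RIGHT = 0x01, 0x110, 0x111

        with open("/dev/input/event4", "rb") as dev:   # hypothetical device node
            while True:
                _, _, etype, code, value = struct.unpack(EVENT_FORMAT, dev.read(EVENT_SIZE))
                if etype == EV_KEY and code in (BTN_LEFT, BTN_RIGHT):
                    print("left" if code == BTN_LEFT else "right",
                          "down" if value else "up")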


  • Suggestions for live chat software on websites for customer support?

    - by Munish Goyal
    Recommendations needed. We want to get in touch with customers via live chat. Requirements: the chat window should be customisable to blend with the website theme (colours etc.); preferably the window should live within the webpage, not only as a pop-up; it must be easy for the customer to use and minimally intrusive; and it should have triggers/alerts to the backend side. For example, if a user is unable to fill in the signup form or something, we should be able to offer help, with the chat window automatically shown to the user. And what is the cost? UPDATE: After R&D we narrowed it down to Comm100 and LivePerson. LivePerson is the best commercial solution, but Comm100 is a good free one, so we will go with Comm100 as a starting point. It does, however, sometimes show exception alerts in the dashboard. Also, can it be configured for chat invitations triggered on certain conditions? Currently only a time-based trigger is available. Any other major differences between these two?


  • Does the term "Learning Curve" include knowing the gotchas?

    - by voroninp
    When you learn a new technology you spend time understanding its concepts and tools. But when the technology meets real life, strange and unpleasant things happen. Requirements are often far from ideal and differ from the 'classic' scenario, and soon I find myself bending the technology to my real needs. At this point I begin to discover bugs in the system, or that it is not as flexible as it seemed at the very beginning. And this 'fighting' with the technology consumes a great part of development time. What is more depressing is that such gotchas and workarounds are not collected in any one place (book, site, etc.), and before you actually hit one you cannot even ask the right question, because you do not suspect the reason for the problem (an unknown-unknown). So my question consists of three parts: 1) Do you really manage (and how) to predict possible future problems? 2) How much time do you spend finding the workaround/fix/solution before you leave it and switch to other problems? 3) What are your criteria for considering yourself experienced in a technology? Do you take these gotchas into account?


  • Storing User-uploaded Images

    - by Nyxynyx
    What is the usual practice for handling user-uploaded photos and storing them in the database and on the server?
    For a user profile image: after receiving the image file from the user, rename the file to <image_id>_<username>; move the image to /images/userprofile; add the image filename to a users table containing profile details like first_name, last_name, age, gender, birthday.
    For an image attached to a review: after receiving the image file from the user, rename the file to <image_id>_<review_id>; move the image to /images/reviews; add the image filename to a reviews table containing details like review_id, review_content, user_id, score.
    Question 1: How should I go about storing the image filenames if the user can upload multiple photos for a particular review? Serialize?
    Question 2: Or have another table review_images with columns review_id, image_id, image_filename just for tracking images? Will doing a JOIN when retrieving the image_filename from this table slow down performance noticeably?
    Question 3: Should all the images be stored in a single folder? Will there be a problem when we have 100K photos in the same folder? Is there a more efficient way to go about doing this?
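
    On Questions 1 and 2: a separate review_images table is the conventional relational answer, and a JOIN on an indexed review_id is not a noticeable cost at this scale; serializing filenames into one column is what will hurt later. On Question 3: most filesystems become slow to list and manage directories with very many entries, so uploads are commonly sharded into nested subdirectories derived from a hash of the name. A small Python sketch of that layout (the base paths and scheme are illustrative only):

        import hashlib
        import os

        def sharded_path(base_dir: str, filename: str) -> str:
            """Spread files across <base>/ab/cd/ subdirectories using a hash
            prefix, so no single directory accumulates 100K+ entries."""
            digest = hashlib.sha1(filename.encode("utf-8")).hexdigest()
            subdir = os.path.join(base_dir, digest[:2], digest[2:4])
            os.makedirs(subdir, exist_ok=True)   # creates the shard dirs on demand
            return os.path.join(subdir, filename)

        # e.g. images/reviews/3a/7f/42_17.jpg
        print(sharded_path("images/reviews", "42_17.jpg"))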


  • Database Migration Scripts: Getting from place A to place B

    - by Phil Factor
    We’ll be looking at a typical database ‘migration’ script which uses an unusual technique to migrate existing ‘de-normalised’ data into a more correct form. So, the book-distribution business that uses the PUBS database has gradually grown organically, and has slipped into ‘de-normalisation’ habits. What’s this? A new column with a list of tags or ‘types’ assigned to books. Because books aren’t really in just one category, someone has ‘cured’ the mismatch between the database and the business requirements. This is fine, but it is now proving difficult for their new website that allows searches by tags. Any request for a history book really has to look in the entire list of associated tags rather than the ‘Type’ field that only keeps the primary tag. We have other problems. The TypeList column has duplicates in it which will be affecting the reporting, and there is the danger of mis-spellings getting in there. The reporting system can’t be persuaded to do reports based on the tags, and the database developers are complaining about the unCoddly things going on in their database. In your version of PUBS this extra column doesn’t exist, so we’ve added it and put in 10,000 titles using SQL Data Generator.

        /* So how do we refactor this database? Firstly, we create a table of all the tags. */
        IF OBJECT_ID('TagName') IS NULL OR OBJECT_ID('TagTitle') IS NULL
        BEGIN
          CREATE TABLE TagName
            (TagName_ID INT IDENTITY(1,1) PRIMARY KEY,
             Tag VARCHAR(20) NOT NULL UNIQUE)

          /* ...and we insert into it all the tags from the list (remembering to take out any leading spaces) */
          INSERT INTO TagName (Tag)
            SELECT DISTINCT LTRIM(x.y.value('.', 'Varchar(80)')) AS [Tag]
            FROM (SELECT Title_ID,
                         CONVERT(XML, '<list><i>' + REPLACE(TypeList, ',', '</i><i>') + '</i></list>') AS XMLkeywords
                  FROM dbo.titles) g
            CROSS APPLY XMLkeywords.nodes('/list/i/text()') AS x ( y )

          /* we can then use this table to provide a table that relates tags to articles */
          CREATE TABLE TagTitle
            (TagTitle_ID INT IDENTITY(1, 1),
             [title_id] [dbo].[tid] NOT NULL REFERENCES titles (Title_ID),
             TagName_ID INT NOT NULL REFERENCES TagName (Tagname_ID)
             CONSTRAINT [PK_TagTitle]
               PRIMARY KEY CLUSTERED ([title_id] ASC, TagName_ID)
               ON [PRIMARY])

          CREATE NONCLUSTERED INDEX idxTagName_ID ON TagTitle (TagName_ID)
            INCLUDE (TagTitle_ID, title_id)

          /* ...and it is easy to fill this with the tags for each title ... */
          INSERT INTO TagTitle (Title_ID, TagName_ID)
            SELECT DISTINCT Title_ID, TagName_ID
            FROM (SELECT Title_ID,
                         CONVERT(XML, '<list><i>' + REPLACE(TypeList, ',', '</i><i>') + '</i></list>') AS XMLkeywords
                  FROM dbo.titles) g
            CROSS APPLY XMLkeywords.nodes('/list/i/text()') AS x ( y )
            INNER JOIN TagName ON TagName.Tag = LTRIM(x.y.value('.', 'Varchar(80)'))
        END

        /* That's all there was to it. Now we can select all titles that have the military tag, just to try things out */
        SELECT Title FROM titles
          INNER JOIN TagTitle ON titles.title_ID = TagTitle.Title_ID
          INNER JOIN Tagname ON Tagname.TagName_ID = TagTitle.TagName_ID
          WHERE tagname.tag = 'Military'

        /* and see the top ten most popular tags for titles */
        SELECT Tag, COUNT(*) FROM titles
          INNER JOIN TagTitle ON titles.title_ID = TagTitle.Title_ID
          INNER JOIN Tagname ON Tagname.TagName_ID = TagTitle.TagName_ID
          GROUP BY Tag ORDER BY COUNT(*) DESC

        /* and if you still want your list of tags for each title, then here they are */
        SELECT title_ID, title, STUFF(
          (SELECT ',' + tagname.tag FROM titles thisTitle
             INNER JOIN TagTitle ON titles.title_ID = TagTitle.Title_ID
             INNER JOIN Tagname ON Tagname.TagName_ID = TagTitle.TagName_ID
           WHERE ThisTitle.title_id = titles.title_ID
           FOR XML PATH(''), TYPE).value('.', 'varchar(max)'), 1, 1, '')
        FROM titles ORDER BY title_ID

    So we’ve refactored our PUBS database without pain. We’ve even put in a check to prevent it being re-run once the new tables are created. Here is the diagram of the new tag relationship. We’ve done both the DDL to create the tables and their associated components, and the DML to put the data in them. I could have also included the script to remove the de-normalised TypeList column, but I’d do a whole lot of tests first before doing that. Yes, I’ve left out the assertion tests too, which should check the edge cases and make sure the result is what you’d expect. One thing I can’t quite figure out is how to deal with an ordered list using this simple XML-based technique. We can ensure that, if we have to produce a list of tags, we can get the primary ‘type’ to be first in the list, but what if the entire order is significant? Thank goodness it isn’t in this case. If it were, we might have to revisit a string-splitter function that returns the ordinal position of each component in the sequence.

    You’ll see immediately that we can create a synchronisation script for deployment from a comparison tool such as SQL Compare, to change the schema (DDL). On the other hand, no tool could do the DML to stuff the data into the new table, since there is no way that any tool can work out where the data should go. We used some pretty hairy code to deal with a slightly untypical problem. We would have to do this migration by hand, and it has to go into source control as a batch. If most of your database changes are to be deployed by an automated process, then there must be a way of overriding this part of the data synchronisation process: taking the part of the script that fills the tables, checking that the tables have not already been filled, and executing it as part of the transaction. Of course, you might prefer the approach I’ve taken with this script of creating the tables in the same batch as the data conversion process, and then using the presence of the tables to prevent the script from being re-run.

    The problem with scripting a refactoring change to a database is that it has to work both ways. If we install the new system and then have to roll back the changes, several books may have been added, or had their tags changed, in the meantime. Yes, you have to script any rollback! These have to be mercilessly tested, and put in source control, just in case of the rollback of a deployment after it has been in place for any length of time. I’ve shown you how to do this with the part of the script…

        /* and if you still want your list of tags for each title, then here they are */
        SELECT title_ID, title, STUFF(
          (SELECT ',' + tagname.tag FROM titles thisTitle
             INNER JOIN TagTitle ON titles.title_ID = TagTitle.Title_ID
             INNER JOIN Tagname ON Tagname.TagName_ID = TagTitle.TagName_ID
           WHERE ThisTitle.title_id = titles.title_ID
           FOR XML PATH(''), TYPE).value('.', 'varchar(max)'), 1, 1, '')
        FROM titles ORDER BY title_ID

    …which would be turned into an UPDATE … FROM script:

        UPDATE titles SET typelist = ThisTagList
        FROM (SELECT title_ID, title, STUFF(
                (SELECT ',' + tagname.tag FROM titles thisTitle
                   INNER JOIN TagTitle ON titles.title_ID = TagTitle.Title_ID
                   INNER JOIN Tagname ON Tagname.TagName_ID = TagTitle.TagName_ID
                 WHERE ThisTitle.title_id = titles.title_ID
                 ORDER BY CASE WHEN tagname.tag = titles.[type] THEN 1 ELSE 0 END DESC
                 FOR XML PATH(''), TYPE).value('.', 'varchar(max)'), 1, 1, '') AS ThisTagList
              FROM titles) f
        INNER JOIN Titles ON f.title_ID = Titles.title_ID

    You’ll notice that it isn’t quite a round trip, because the tags come back in a different order, though we’ve managed to make sure that the primary tag is the first one, as originally. So, we’ve improved the database for the poor book distributors using PUBS. It is not a major deal, but you’ve got to be prepared to provide a migration script that will go both forwards and backwards. Ideally, database refactoring scripts should be able to go from any version to any other. Schema synchronisation scripts can do this pretty easily, but no data synchronisation script can deal with serious refactoring jobs without the developers being able to specify how to deal with cases like this.


  • Sponsored Giveaway: Free Copies of WinX DVD Copy Pro for All How-To Geek Readers

    - by The Geek
    Have you ever wanted to make a backup of a DVD, or even rip it to an ISO file to use on your computer without the original optical disc? You can use WinX DVD Copy Pro to make this happen, and we’ve got a giveaway for all HTG readers. To get your free copy, just click through the following link to download and get the license code, as long as you download it by December 20th. In addition, an iPhone / iPad Video Software Pack will be presented as the second round gift from December 21st to January 2nd, 2013. For Windows users: http://www.winxdvd.com/giveaway/ WinX DVD Copy Pro has many features, including this list, which we copied straight from their site: Supports latest released DVDs. Protect your DVD disc from damage. Copy DVD to DVD, ISO image, etc. 9 advanced DVD backup schemes. Support Disney’s Fake, scratched DVDs and Sony ARccOS bad sector.


  • Tumblr custom domain not redirecting properly

    - by Manic
    I decided to host my blog at Tumblr, using their custom domain setup (http://blog.smokingfishgames.com/ instead of http://smokingfishgames.tumblr.com). However, it's been 72 hours and I'm still getting spotty redirection. It works some of the time: I go and see the page and blog, and it's all fine. However, it occasionally just stops working and redirects back to my web host, which is a directory with nothing but a single file called BUGGER.html (which I stuck in to make sure that it was my web host and not some empty Tumblr directory). Clearing the Chrome DNS cache makes the problem go away, for a while. After a few minutes, or an hour, or however long, I'll start seeing BUGGER.html again. I clear the cache, and poof, the blog shows up. The thing that's curious to me is that when I clear the cache and get BUGGER.html again (which happens occasionally), I can look at my Chrome DNS cache and see:

        assets.tumblr.com UNSPECIFIED
        blog.smokingfishgames.com UNSPECIFIED
        www.tumblr.com UNSPECIFIED

    (IP addresses and expiration times omitted for brevity's sake; if they're important I'm sure I can replicate the issue.) This implies, to me anyway, that my browser is reaching Tumblr but getting bounced back to my web host. Any reason why this would be happening, or is this a normal symptom of DNS propagation? If it is a problem, should I be bothering Tumblr or my host with it, or is this something I can fix myself?
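
    One way to tell whether this is DNS propagation rather than anything Tumblr is doing is to watch what the name actually resolves to over time; if the answer flips between your old host's IP and Tumblr's, some nameserver is still handing out the stale record and the fix is on the DNS side. A quick Python sketch of that check (hostname taken from the question; everything else is illustrative):

        import socket
        import time

        # Poll the hostname and print each new answer we see; a result set
        # that flaps between two IPs points at stale DNS, not at Tumblr.
        HOST = "blog.smokingfishgames.com"
        seen = set()
        for _ in range(10):
            ip = socket.gethostbyname(HOST)
            if ip not in seen:
                print(time.strftime("%H:%M:%S"), ip)
                seen.add(ip)
            time.sleep(30)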


  • Strange resizing of partition after reinstalling Ubuntu 14.04 64bit

    - by Mike
    I started with Windows 7 on a 120GB SSD and Ubuntu 14.04 32-bit installed on a 60GB partition on a separate 1TB HDD. I just did a fresh reinstall of 14.04 64-bit on the 1TB HDD. In the installation setup process, I selected the second choice, "delete Ubuntu 14.04 and all its files, documents, photos etc. and reinstall", expecting it to reinstall the 64-bit OS on the already existing 60GB partition. Instead, it reinstalled Ubuntu on 43.5GB and created a separate 15.8GB partition. So now my disk space for Ubuntu (in Settings > Details) reads 43.5GB instead of the 60GB my old 32-bit install had. The upside is I can now access my 1TB HDD from my toolbar (and all the files located on it); before, I could only access that through Windows. (I can also access the SSD too, but that was always the case.) Both drives are mounted now. My initial reaction was to go into Windows 7 Disk Management, delete the strange new 15GB partition, and extend the 43.5GB into the unallocated space. But I'm not sure if this is necessary or would even work. My question is: why did it create a 15GB partition, shrinking my Ubuntu disk space, and is it useful? I don't want wasted space, so before I go through all my setup of Ubuntu, should I change this? At this time my HDD reads as a 43.5GB partition, a 15.8GB partition, and 874GB exfat32 (939GB total).


  • How do you support your code after your employment ends?

    - by James
    What is the process for leaving a company (or even a group/division) in terms of code support? Is it best to handle all questions? Do you give the remaining developers access to yourself as a future resource? If so, is there a way to not give full access? I've experienced first hand how answers about the general software architecture from the initial developer can be invaluable. I understand that if serious assistance is needed, then it becomes a typical case of employment negotiation as a support contract. However, should serious assistance be required, what steps can you take to ease that process of contacting you? I was thinking of doing something like making a (YOUR_NAME)_codesupport @ (YOUR_FAVORITE_EMAIL_CLIENT).com address. My situation specifics: I'm a co-op student, and as such bounce around companies on 4-month stints. This means introducing myself to a lot of new code bases, as well as leaving a fair share of orphaned code behind when I leave a company. I feel bad if I leave junk code around.


  • Almost working 2D Collisions

    - by TheGag96
    I'm terribly sorry I'm asking this question YET AGAIN, but I can almost guarantee that this will be the last time I'll have to ask. I'm currently on the verge of FINALLY getting these collisions to work for my game, made with libGDX in Java. My collisions use the same method as (and are basically copied and modified code from) the XNA Platformer example (here) where the direction of the collision is based on the rectangle where two objects are overlapping. The collisions themselves almost work perfectly, but for some reason, holding down/up and left and colliding with the floor/ceiling while doing so doesn't seem to work well. I'm not at all sure why. Instead of vaguely giving my problem and snippets of code, I've decided to instead provide a binary and the source of the game I have so far so you can see for yourself what my problem is. Link. (Note: make sure you unzip everything into a folder somewhere or it will not work) You'll find the collision code in the method workingCollisions() in Link.java. Please excuse the messy code and terrible graphics as this whole thing is in pre-pre-alpha. If anyone is kind enough and helps me out here, you are the best person ever. I'm completely desperate; I've been trying this on and off for months and I just can't get it to work. I cannot thank you enough.
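
    For readers who don't want to download the source: the XNA sample's rule is to take the overlap rectangle of the two AABBs and push the player out along the shallower axis only. A bare-bones sketch of that rule, in Python for brevity (the Box fields are made up, not taken from the linked project):

        class Box:
            """Axis-aligned box; fields are illustrative, not from the linked code."""
            def __init__(self, x, y, w, h, vx=0.0, vy=0.0):
                self.x, self.y, self.w, self.h, self.vx, self.vy = x, y, w, h, vx, vy

        def resolve(player, solid):
            """Push player out of solid along the shallower axis of the overlap
            rectangle -- the rule the XNA Platformer sample uses to decide
            whether a contact was horizontal or vertical."""
            dx = min(player.x + player.w, solid.x + solid.w) - max(player.x, solid.x)
            dy = min(player.y + player.h, solid.y + solid.h) - max(player.y, solid.y)
            if dx <= 0 or dy <= 0:
                return                                   # no overlap at all
            if dx < dy:                                  # horizontal hit
                player.x += -dx if player.x < solid.x else dx
                player.vx = 0.0
            else:                                        # vertical hit: floor/ceiling
                player.y += -dy if player.y < solid.y else dy
                player.vy = 0.0

        player, floor = Box(0, 9.5, 1, 1, vy=2.0), Box(-10, 10, 20, 5)
        resolve(player, floor)
        print(player.y, player.vy)   # 9.0 0.0 -- pushed back on top of the floor

    A classic cause of the exact symptom described (breaking only while holding a direction into the floor or ceiling) is resolving against several overlapping tiles in arbitrary order; the usual fixes are to re-test the remaining tiles after each positional correction, or to move and resolve the X and Y axes in separate passes.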


  • Automatically syncing a USB HD drive to local HDs (different computers)

    - by luri
    I have a desktop PC at work, and a laptop at home... or elsewhere. What I want to do is use a USB HD to store my documents (about 130GB, maybe more). That would serve as a backup and also port my files to my laptop. I'd like both computers to automatically sync all files there with local copies, so that I can work at either of them and keep updated copies of everything on both (plus the USB drive, which would allow me to work on other computers too, apart from being another backup). Dropbox isn't a solution for me, due to pricing and the 100GB limit. The workflow would be as follows, to clear things up: I work on PC1; changes in files are automatically synced to the USB whenever a file is modified. I go home and boot PC2; I plug in the USB drive and local files are synced (if changed) with the most recent USB copies. While I work at PC2, again, changes in files spread to my USB drive. Whenever I go to PC1 again, I plug in my USB and again everything syncs. So the questions would be: a) Am I crazy? b) Can it be done? c) Will I have any file conflicts (provided I'm the only one that will modify the files)?
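
    On question b): yes. Dedicated two-way sync tools (Unison, for example) handle deletions and genuine conflicts, which is most of the hard part, so a hand-rolled script is mainly useful to understand the rule. The core "newest copy wins" logic is simple enough to sketch in Python (illustrative only; no deletion or conflict handling):

        import shutil
        from pathlib import Path

        def sync_newest(a: Path, b: Path) -> None:
            """Two-way sync sketch: for every relative path, copy whichever
            side is newer. Real tools also track deletions and flag conflicts."""
            for src_root, dst_root in ((a, b), (b, a)):
                for src in src_root.rglob("*"):
                    if not src.is_file():
                        continue
                    dst = dst_root / src.relative_to(src_root)
                    # copy2 preserves mtimes, which this comparison relies on
                    if not dst.exists() or src.stat().st_mtime > dst.stat().st_mtime + 2:
                        dst.parent.mkdir(parents=True, exist_ok=True)
                        shutil.copy2(src, dst)

        # e.g. sync_newest(Path("/media/usb/docs"), Path.home() / "docs")

    The two-second slack is there because FAT-formatted USB drives store modification times at a 2-second granularity, which would otherwise cause endless re-copying.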


  • What to do when TDD tests reveal new functionality that is needed that also needs tests?

    - by Joshua Harris
    What do you do when you are writing a test and you get to the point where you need to make the test pass, and you realize that you need an additional piece of functionality that should be separated into its own function? That new function needs to be tested as well, but the TDD cycle says to make a test fail, make it pass, then refactor. If I am on the step where I am trying to make my test pass, I'm not supposed to go off and start another failing test to test the new functionality that I need to implement. For example, I am writing a Point class that has a function CollidesWithLine(LineSegment):

        public class Point
        {
            // Point data and constructor ...

            public bool CollidesWithLine(LineSegment lineSegment)
            {
                Vector pointEndOfMovement = new Vector(Position.X + Velocity.X,
                                                       Position.Y + Velocity.Y);
                LineSegment pointPath = new LineSegment(Position, pointEndOfMovement);
                if (lineSegment.Intersects(pointPath))
                    return true;
                return false;
            }
        }

    I was writing a test for CollidesWithLine when I realized that I would need a LineSegment.Intersects(LineSegment) function. But should I just stop what I am doing in my test cycle to go create this new functionality? That seems to break the "Red, Green, Refactor" principle. Should I just write the code that detects that line segments intersect inside of the CollidesWithLine function and refactor it after it is working? That would work in this case, since I can access the data from LineSegment, but what about in cases where that kind of data is private?
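
    One common answer: the collaborator only has to exist, not work, for the current test. Stub Intersects to return a canned value, get CollidesWithLine to green, and put LineSegment.Intersects on the test list as its own red/green/refactor cycle; this also answers the private-data worry, because the test depends on the collaborator's public behaviour rather than its data. A sketch of the stubbing step, using a hypothetical Python port of the class with unittest.mock standing in for whatever isolation framework the project uses:

        from unittest.mock import Mock

        class Point:
            """Minimal Python stand-in for the C# Point above."""
            def __init__(self, position, velocity):
                self.position, self.velocity = position, velocity

            def collides_with_line(self, line_segment):
                end = (self.position[0] + self.velocity[0],
                       self.position[1] + self.velocity[1])
                path = (self.position, end)          # the point's path this frame
                return bool(line_segment.intersects(path))

        def test_collides_when_paths_intersect():
            line = Mock()                        # Intersects doesn't exist yet:
            line.intersects.return_value = True  # stub it and stay in the cycle
            assert Point((0, 0), (1, 1)).collides_with_line(line) is True

        test_collides_when_paths_intersect()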


  • Finding a job: feedback on my current predicament and my relentless efforts [closed]

    - by Michael
    I am a very well-rounded IT guy, with a passion for programming in particular. I have been through a BS program, less the internship, as I refuse to go lower than minimum wage (i.e. free) and/or such opportunities find the institution lacking. Since then the catalog for my degree has changed, so I am making peace without it. I go up for job interviews and get myself no farther than a submitted resume. I have even changed my strategy and made this website: http://goo.gl/qqpN8 to showcase my highlights (but not at all exhaustive) for specific areas in the US, and paid for classified ads. The 'internship' is notable as I am just trying to get my foot in the door. Because IT is so vast, with programming and engineering in particular, I spend my time researching the requirements for the job I am applying for. This has made me that well-rounded guy I spoke of earlier, but has also made me a victim of being a jack-of-all-trades and, unfortunately, a master of none. I embrace my gung-ho attitude and find it to be a trait that has powered my career before. But I am starting to lose steam. I want it straight: what is not appealing about everything I am doing? What are the technologies that I need to focus on that are in great demand at the moment?


  • New CAM Editor v2.3 with Open-XDX for Open Data APIs

    - by drrwebber
    Creating actual working XML exchanges, loading data from data stores, generating XML, testing, integrating with web services and then delivering the deployment takes a lot of coding and effort. Then comes writing the documentation, models and schema, doing naming and design rule (NDR) checks, and packaging all this together (such as for NIEM IEPD use). What if there were a tool that helped you do all that easily and simply? Welcome to the new Open-XDX and the CAM Editor! Open-XDX uses code-free techniques in combination with CAM templates and visual drag and drop to rapidly design your XML exchange. Then Open-XDX will automatically generate all the SQL for you, read the database data, generate and populate the valid output XML, and filter with parameters. To complete the processing solution, Open-XDX works with web services and JDBC database connections as a callable module that can be deployed plug-and-play with your middleware stack, all with just a few lines of Java code (about 5, actually). You can build either Query/Response or Publish/Subscribe services from existing data stores to XML literally in minutes. To see a demonstration of using Open-XDX with a MySQL data store and integration with Oracle WebLogic Server, please see this short video - http://youtube.com/user/TheCameditor There is also a Quick Guide available that provides more technical insight, along with a sample pack download of templates and SQL that you can try for yourself. Head on over to our project resource site to learn more, download the latest CAM Editor and see links to all the resources and materials. We look forward to seeing how the developer community is able to jump-start information sharing initiatives using this new innovative approach.


  • Html 5 clock, part ii - CSS marker classes and getElementsByClassName

    - by Norgean
    The clock I made in part i displays the time in "long" form: "It's a quarter to ten" (but in Norwegian). To save space, some letters are shared; "sevenineight" is four letters shorter than "seven nine eight". We only want to highlight the "correct" parts of this, for example "sevenineight". When I started programming the clock, each letter had its own unique ID, and my script would "get" each element individually and highlight or hide it according to some obscure logic. I quickly realized, despite being in a post-surgery haze: this is a stupid way to do it. And, to paraphrase NPH, if you find yourself doing something stupid, stop, and be awesome instead. We want an easy way to get all the items we want to highlight. Perhaps we can use the new getElementsByClassName function? So we mark each element with a class name or two. In "sevenineight", 's' is marked as 'h7', and the first 'n' is marked with both 'h7' and 'h9' (h for hour): <div class='h7 h9'>N</div><div class='h9'>I</div> getElementsByClassName('h9') will return the four letters of "nine". Notice that these classes are not backed by any CSS; they only appear directly in the html (and are used in javascript). I have not seen classes used this way elsewhere, and have chosen to call them "marker classes", similar to marker interfaces, until somebody comes up with a better name.


  • Will having multiple domains improve my seo?

    - by Anonymous12345
    Let's say I have a domain already, for example www.automobile4u.com (not mine), with a website fully running and all. The title of my website says: <title>Used cars - buy and sell your used cars here</title> Also, let's say I have fully SEO'd the website, so that when people search for the term "buy used cars", I end up on the second or first page. Now, I want to end up higher, so I go to the Google AdWords page where you can check how many searches are made for specific terms. Let's say the term "used cars" has 20 million searches each month. Here comes the question: could I just go and buy the domain matching the search term, in this case www.usedcars.com, and make a redirect to my original page, so that when people search for "used cars", my newly bought domain name comes up, redirecting people to my original website (www.automobile4u.com)? The reason I believe this benefits me is that search engines seem to first check website addresses matching the search, so the query "used cars" would automatically bring www.usedcars.com to the first result, right? What are the downsides of this? I already know about Google spiders not liking redirects, but there are many methods of redirecting... Is this a good idea generally?


  • How do I approach this collision model?

    - by PeeS
    This is the game level prototype I have already implemented. It has a few objects per room, to let me finally add some collision detection/response code into it. VIDEO As you can probably see, every object inside has its own AABB; even the room itself has an AABB, so the player is 'inside the room's AABB'. My player will be exactly inside the room, so he has to collide correctly with those AABBs, and when he hits any of the objects inside he should get a proper collision response from them. Now I would like to hear from you: what kind of collision approach should I choose here? How do I approach this kind of thing: AABB-to-AABB collision detection, then when this is positive go with AABB-to-triangle to find the proper plane normal and calculate the response? AABB-to-AABB, then when positive go with an AABB side check to find the proper plane normal and calculate the response? Anything else? How do you do this? Many thanks.


  • How do I set up a proxy server in Xubuntu?

    - by Dolphin
    NOTICE: I am completely allowed to do this. This is not meant for accessing sites I am not supposed to go on. I know that most threads with "Proxy" in the title have to do with getting to sites people are not supposed to go on. I hope that I can get this to be as easy as possible; I hope it can be as easy as setting up an OpenSSH server.
    What happened: My dad has just screwed something up in our router settings, and I can't access Google anymore because of it. I need to be able to access Google. He doesn't like me in the router because he thinks I will make it worse. (I am sick of Bing and Yahoo.) So I want to set up a proxy server that will allow me to work around what he screwed up.
    What I want to do: I want to host it on a computer in my grandpa's house, and access it with my computer at my dad's house. It's running Xubuntu. I want it to work with Mozilla Firefox on my computer at my dad's house running Ubuntu 11.10. How can I set this up? This is only temporary until he fixes whatever he did.


  • Being rocked...

    - by ZacHarlan
    After almost four and a half years, I finally escaped from the world of telemarketing. I'm now at a place that writes really good code, values testing, does routine code reviews, and collaborates so continuously and effectively that somebody should make a documentary about it! Today alone, I had two really smart and well-respected developers go line by line through my code and show me how to make it better. Seriously, people pay really good money for something like this, and they don't get near the quality of feedback I got! +1 for me finally getting to a point in my career where I get to work with some of the best of the best in the software world! I've been rocked by the fact that places like this actually exist. I've been rocked by the sheer size, complexity and simplicity of our website. Most importantly, I've been ROCKED by the fact that this many smart people check their egos at the door, gel together, and look for ways to make software better than how they found it. This is how to grow a business with tech... hire great people and watch them go! Seriously, bravo.


  • How do I fix the HDMI/DVI display output with Intel HD 4000 Graphics in 12.04?

    - by YumYumYum
    I have an Alienware Dell PC with Intel HD 4000 Graphics (Ivy Bridge), as verified by the output of lspci | grep VGA posted below:

        00:02.0 VGA compatible controller: Intel Corporation Ivy Bridge Graphics Controller (rev 09)

    The PC only has HDMI and DVI display outputs, and using the HDMI output I am only offered abnormal resolutions. As you can see below, it does not even list HDMI1 or DVI1, just a fallback:

        $ export DISPLAY=:0.0 && xrandr
        xrandr: Failed to get size of gamma for output default
        Screen 0: minimum 640 x 480, current 1360 x 768, maximum 1360 x 768
        default connected 1360x768+0+0 0mm x 0mm
           1360x768       0.0*
           1024x768       0.0
           800x600        0.0
           640x480        0.0

    How can I fix this? Does it just need to be configured differently, or will I need a newer kernel (as the Intel graphics drivers are included in the kernel)?

    Follow-up: upgrading the kernel to the latest.

    Step 1: Go to http://kernel.ubuntu.com/~kernel-ppa/mainline/ and then to the last entry, http://kernel.ubuntu.com/~kernel-ppa/mainline/v3.6-rc3-quantal/, and download:
        http://kernel.ubuntu.com/~kernel-ppa/mainline/v3.6-rc3-quantal/linux-headers-3.6.0-030600rc3-generic_3.6.0-030600rc3.201208221735_amd64.deb
        http://kernel.ubuntu.com/~kernel-ppa/mainline/v3.6-rc3-quantal/linux-headers-3.6.0-030600rc3_3.6.0-030600rc3.201208221735_all.deb
        http://kernel.ubuntu.com/~kernel-ppa/mainline/v3.6-rc3-quantal/linux-image-3.6.0-030600rc3-generic_3.6.0-030600rc3.201208221735_amd64.deb
        http://kernel.ubuntu.com/~kernel-ppa/mainline/v3.6-rc3-quantal/linux-image-extra-3.6.0-030600rc3-generic_3.6.0-030600rc3.201208221735_amd64.deb

    Step 2: sudo dpkg -i linux*.deb

    Step 3: Reboot. uname -a now shows that I have Ubuntu 12.04 with the latest kernel:

        $ uname -a
        Linux sun-Alienware-X51 3.6.0-030600rc3-generic #201208221735 SMP Wed Aug 22 21:36:32 UTC 2012 x86_64 x86_64 x86_64 GNU/Linux

    But the same problem remains.


  • Should Developers Perform All Tasks or Should They Specialize?

    - by Bob Horn
    Disclaimer: The intent of this question isn't to discern what is better for the individual developer, but for the system as a whole. I've worked in environments where small teams managed certain areas. For example, there would be a small team for every one of these functions: UI, framework code, business/application logic, database. I've also worked on teams where the developers were responsible for all of these areas and more (QA, analyst, etc.). My current environment promotes agile development (specifically Scrum) and everyone has their hands in every area mentioned above. While there are pros and cons to each approach, I'd be curious to know if there are more pros and cons than I list below, and also what the general feeling is about which approach is better.
    Devs do it all
    Pros: 1. Developers may be more well-rounded. 2. Developers know more of the system.
    Cons: 1. Everyone has their hands in all areas, increasing the probability of creating less-than-optimal results in that area. 2. It can take longer to do something with which you are unfamiliar (jack of all trades, master of none).
    Devs specialize
    Pros: 1. Developers can create policies and procedures for their area of expertise and more easily enforce them. 2. Developers have more of a chance to become deeply knowledgeable about their specific area and make it the best it can be. 3. Other developers don't cross boundaries and degrade another area.
    Cons: 1. As one colleague put it: "Why would you want to pigeon-hole yourself like that?" (Meaning some developers won't get a chance to work in certain areas.)
    It's easy to say how wonderful agile is, and that we should do it all, but I'm somewhat of a fan of having areas of expertise. Without that expertise, I've seen code degrade, database schemas become difficult to manage, hacky UI code, etc. Let's face it, some people make careers out of doing just UI work, or just database work. It's not that easy to just fill in and do as good a job as an expert in that area.


  • Faster, Simpler access to Azure Tables with Enzo Azure API

    - by Herve Roggero
    After developing the latest version of Enzo Cloud Backup I took the time to create an API that would simplify access to Azure Tables (the Enzo Azure API). At first, my goal was to make the code simpler compared to the Microsoft Azure SDK. But as it turns out it is also a little faster; and when using the specialized methods (the fetch strategies) it is much faster out of the box than the Microsoft SDK, unless you start creating complex parallel and resilient routines yourself. Last but not least, I decided to add a few extension methods that I think you will find attractive, such as the ability to transform a list of entities into a DataTable. So let’s review each area in more detail.

    Simpler Code

    My first objective was to make the API much easier to use than the Azure SDK. I wanted to reduce the amount of code necessary to fetch entities, remove the code needed to add automatic retries and handle transient conditions, and give additional control, such as a way to cancel operations, obtain basic statistics on the calls, and control the maximum number of REST calls the API generates in an attempt to avoid throttling conditions in the first place (something you cannot do with the Azure SDK at this time).

    Strongly Typed

    Before diving into the code, note that the following examples rely on a strongly typed class called MyData. The way MyData is defined for the Azure SDK is similar to the Enzo Azure API, with the exception that they inherit from different classes. With the Azure SDK, classes that represent entities must inherit from TableServiceEntity, while classes with the Enzo Azure API must inherit from BaseAzureTable or implement a specific interface.

        // With the SDK
        public class MyData1 : TableServiceEntity
        {
            public string Message { get; set; }
            public string Level { get; set; }
            public string Severity { get; set; }
        }

        // With the Enzo Azure API
        public class MyData2 : BaseAzureTable
        {
            public string Message { get; set; }
            public string Level { get; set; }
            public string Severity { get; set; }
        }

    Now that the classes representing an Azure Table entity are defined, let’s review what fetching all the entities from an Azure Table looks like with the Azure SDK (note the use of a few variables: the _tableName variable stores the name of the Azure Table, and the ConnectionString property returns the connection string for the Storage Account containing the table):

        // With the Azure SDK
        public List<MyData1> FetchAllEntities()
        {
            CloudStorageAccount storageAccount = CloudStorageAccount.Parse(ConnectionString);
            CloudTableClient tableClient = storageAccount.CreateCloudTableClient();
            TableServiceContext serviceContext = tableClient.GetDataServiceContext();
            CloudTableQuery<MyData1> partitionQuery =
                (from e in serviceContext.CreateQuery<MyData1>(_tableName)
                 select new MyData1()
                 {
                     PartitionKey = e.PartitionKey,
                     RowKey = e.RowKey,
                     Timestamp = e.Timestamp,
                     Message = e.Message,
                     Level = e.Level,
                     Severity = e.Severity
                 }).AsTableServiceQuery<MyData1>();
            return partitionQuery.ToList();
        }

    This code gives you automatic retries because AsTableServiceQuery does that for you. Also, note that this method is strongly typed because it is using LINQ. Although this doesn’t look like too much code at first glance, you are actually mapping the strongly-typed object manually. So for larger entities, with dozens of properties, your code will grow. And from a maintenance standpoint, when a new property is added, you may need to change the mapping code. You will also note that the mapping being performed is optional; it is desired when you want to retrieve specific properties of the entities (not all) to reduce the network traffic. If you do not specify the properties you want, all the properties will be returned; in this example we are returning the Message, Level and Severity properties (in addition to the required PartitionKey, RowKey and Timestamp).

    The Enzo Azure API does the mapping automatically and also handles automatic retries when fetching entities. The equivalent code to fetch all the entities (with the same three properties) from the same Azure Table looks like this:

        // With the Enzo Azure API
        public List<MyData2> FetchAllEntities()
        {
            AzureTable at = new AzureTable(_accountName, _accountKey, _ssl, _tableName);
            List<MyData2> res = at.Fetch<MyData2>("", "Message,Level,Severity");
            return res;
        }

    As you can see, the Enzo Azure API returns the entities already strongly typed, so there is no need to map the output. Also, the Enzo Azure API makes it easy to specify the list of properties to return, and to specify a filter as well (no filter was provided in this example; the filter is passed as the first parameter).

    Fetch Strategies

    Both approaches discussed above fetch the data sequentially. In addition to the linear/sequential fetch methods, the Enzo Azure API provides specific fetch strategies. Fetch strategies are designed to prepare a set of REST calls, executed in parallel, in a way that performs faster than fetching the data sequentially. For example, if the PartitionKey is a GUID string, you could prepare multiple calls, providing appropriate filters ([‘a’, ‘b’[, [‘b’, ‘c’[, [‘c’, ‘d’[, …), and send those calls in parallel. As you can imagine, the code necessary to create these requests would be fairly large. With the Enzo Azure API, two strategies are provided out of the box: the GUID and List strategies. If you are interested in how these strategies work, see the Enzo Azure API Online Help. Here is example code that performs parallel requests using the GUID strategy (which executes 2 to 3 times faster than the sequential methods discussed previously):

        public List<MyData2> FetchAllEntitiesGUID()
        {
            AzureTable at = new AzureTable(_accountName, _accountKey, _ssl, _tableName);
            List<MyData2> res = at.FetchWithGuid<MyData2>("", "Message,Level,Severity");
            return res;
        }

    Faster Results

    With sequential fetch methods: Developing a faster API wasn’t a primary objective, but it appears that the performance tests performed with the Enzo Azure API deliver the data a little faster out of the box (5%-10% on average, and sometimes up to 50% faster) with the sequential fetch methods. Although the amount of data is the same regardless of the approach (and the REST calls are almost exactly identical), the object mapping approach is different, so it is likely that the slight performance increase is due to a lighter API. Using LINQ offers many advantages and tremendous flexibility; nevertheless, when fetching data it seems that the Enzo Azure API delivers faster. For example, the same code previously discussed delivered the following results when fetching 3,000 entities (about 1KB each): the average elapsed time shows that the Azure SDK returned the 3,000 entities in about 5.9 seconds on average, while the Enzo Azure API took 4.2 seconds on average (39% improvement).

    With fetch strategies: When using the fetch strategies we are no longer comparing apples to apples; the Azure SDK is not designed to implement fetch strategies out of the box, so you would need to code the strategies yourself. Nevertheless I wanted to provide out-of-the-box capabilities, and as a result you see a test that returned about 10,000 entities (1KB each), averaged over 5 runs. The Azure SDK implemented a sequential fetch while the Enzo Azure API implemented the List fetch strategy. The fetch strategy was 2.3 times faster. Note that this test quickly hit a limit on my network bandwidth (3.56Mbps), so the results of the fetch strategy are significantly below what they could be with higher bandwidth.

    Additional Methods

    The API wouldn’t be complete without support for a few important methods other than the fetch methods discussed previously. The Enzo Azure API offers these additional capabilities:
    - Support for batch updates, deletes and inserts
    - Conversion of entities to DataRow, and List<> to a DataTable
    - Extension methods for Delete, Merge, Update, Insert
    - Support for asynchronous calls and cancellation
    - Support for fetch statistics (total bytes, total REST calls, retries…)
    For more information, visit http://www.bluesyntax.net or go directly to the Enzo Azure API page (http://www.bluesyntax.net/EnzoAzureAPI.aspx).

    About Herve Roggero

    Herve Roggero, Windows Azure MVP, is the founder of Blue Syntax Consulting, a company specialized in cloud computing products and services. Herve's experience includes software development, architecture, database administration and senior management with both global corporations and startup companies. Herve holds multiple certifications, including an MCDBA, MCSE, MCSD. He also holds a Master's degree in Business Administration from Indiana University. Herve is the co-author of "PRO SQL Azure" from Apress and runs the Azure Florida Association (on LinkedIn: http://www.linkedin.com/groups?gid=4177626). For more information on Blue Syntax Consulting, visit www.bluesyntax.net.


  • Any algorithm to dedicate a set of known resources to a set of known requirements (scheduling)

    - by Saeed Neamati
    I'm developing an application to help school principals assign teachers to classes and courses over the hours of a week (scheduling). The scenario is roughly something like this: the user enters the list of teachers and their free times into the system; the user enters the list of courses for this semester; the user enters the list of available classes into the system. Well, up to here there is no big deal, just simple CRUD operations and nothing extraordinary. However, what makes this system useful is that the application should automatically, based on an algorithm, create the semester schedule. I think you've got the main idea here. For example, the application should suggest that teacher A go to class 1 for mathematics, and at the same time teacher B go to class 2 for physics. This way all of the classes are covered and teacher times won't overlap each other. Piece of cake for the school principal. However, I can't find a good algorithm for this resource allocation. I mean, it seems hard to me. Searching Google turned up articles from different websites, but they are of no help or use to me. For example: http://en.wikipedia.org/wiki/Resource_allocation or http://en.wikipedia.org/wiki/Scheduling_(production_processes) Is there any algorithm out there, or any application or engine which can help me here? Does this requirement have a known name, like for example a time scheduling engine? Any help would be appreciated.
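
    For what it's worth, this is the school timetabling problem, a relative of graph colouring and NP-hard in general, which is why no single ready-made algorithm turns up; practical systems use heuristics or constraint/SAT solvers. A deliberately naive greedy sketch in Python, just to show the shape of the data (all names and structures here are illustrative, and it assumes every class takes every course):

        def timetable(courses, teachers, classes, slots):
            """Greedy sketch: place each (class, course) pair into the first
            slot where the class is free and some qualified teacher is free.

            courses:  {course: set(teachers who can teach it)}
            teachers: {teacher: set(free slots)}
            Returns a list of (slot, class, course, teacher) or raises if stuck.
            """
            busy_teacher, busy_class, plan = set(), set(), []
            for cls in classes:
                for course in courses:
                    for slot in slots:
                        if (slot, cls) in busy_class:
                            continue
                        t = next((t for t in courses[course]
                                  if slot in teachers[t] and (slot, t) not in busy_teacher),
                                 None)
                        if t is not None:
                            busy_teacher.add((slot, t))
                            busy_class.add((slot, cls))
                            plan.append((slot, cls, course, t))
                            break
                    else:
                        raise ValueError(f"no feasible slot for {cls}/{course}")
            return plan

        print(timetable({"maths": {"A"}, "physics": {"B"}},
                        {"A": {1, 2}, "B": {1, 2}},
                        ["class1", "class2"], [1, 2]))

    Greedy placement like this paints itself into corners quickly; once rooms, teacher load limits and consecutive-period rules enter the picture, backtracking search or an off-the-shelf constraint solver is the realistic route.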

