Search Results

Search found 12089 results on 484 pages for 'rule of three'.


  • Multilingual sites and Google search results, using sub-folders for language

    - by AWinter
    About three months ago we added an English version of our previously Japanese-only site under the subfolder /en/. We've tried to follow the sometimes incomplete best practices laid out by Google by adding alternate tags to all pages that are currently translated. The top page, for instance, has the following meta tags for language: <link rel="canonical" href="/"> <link rel="alternate" hreflang="ja" href="/"> <link rel="alternate" hreflang="en" href="/en/"> while the English main page under /en/ has <link rel="canonical" href="/en/"> <link rel="alternate" hreflang="ja" href="/"> <link rel="alternate" hreflang="en" href="/en/"> Alternate languages are set up in the sitemap (as per Google's recommendations). It seems, however, that Google absolutely refuses to show the English top page in results when the user is searching in English at google.com: as of this post, a search returns the Japanese description and a title that Google has apparently invented, instead of the title and description in the meta tags for the /en/ index page. Does anyone have any experience with subfolders actually working to affect search results? What are the best practices for ensuring that the correct language version of my website is displayed by Google and other search engines? And how long will it take before the new language version becomes prominent in search engine results?

    Read the article

  • Only one user can connect to Ubuntu samba server

    - by StaticMethod
    I set up a Samba server on 12.04 LTS, and it works great for one user but not the others. I am trying to map a network drive from a Windows 7 laptop. I can successfully authenticate with one user, but the other two both get "Access is denied" errors. Here is my smb.conf file. [global] server string = %h server (Samba, Ubuntu) map to guest = Bad User obey pam restrictions = Yes pam password change = Yes passwd program = /usr/bin/passwd %u passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* . unix password sync = Yes syslog = 0 log file = /var/log/samba/log.%m max log size = 1000 dns proxy = No usershare allow guests = Yes panic action = /usr/share/samba/panic-action %d idmap config * : backend = tdb [printers] comment = All Printers path = /var/spool/samba create mask = 0700 printable = Yes print ok = Yes browseable = No [print$] comment = Printer Drivers path = /var/lib/samba/printers [share] comment = Ubuntu File Server Share path = /srv/share read only = No create mask = 0755 I know that the service is successfully reading from the /etc/passwd file because if I change the Linux password for the user that works, I have to use the new password when I connect. I changed all the users so they are all members of the same groups (all three users are admins anyway). I only ever have one user connected at a time. Here are the permissions on the shared folder: /srv$ ls -l drwxrwxrwx 1 nobody nogroup 16 Feb 22 17:05 share Any ideas?

    Read the article

  • Static pages for large photo album

    - by Phil P
    I'm looking for advice on software for managing a largish photo album for a website. 2000+ pictures, one-time drop (probably). I normally use MarginalHack's album, which does what I want: pre-generate thumbnails and HTML for the pictures, so I can serve without needing a dynamic run-time, and so there's less attack surface to worry about. However, it doesn't handle pagination or the like, so it's unwieldy for this case. This is a one-time drop for pictures from a wedding, with a shared usercode/password for distribution to the guests; I don't wish to put the pictures in a third-party hosting environment. I don't wish to use PHP, simply because that's another run-time to worry about; I might relent and use something dynamic if it's Python or Perl based (as I can maintain things written in those). I currently have: Apache serving static files, Album-generated, with some sub-directories to divide up the content and make it a little more manageable. Something like Album but with pagination already handled would be great, but I'm willing to have something a little more dynamic if it lets people comment or caption and stores the extra data in something like an sqlite DB. I'd want something light-weight, not a full-blown CMS with security updates every three months. I don't want to upload pictures of other people's children into a third-party free service where I don't know what the revenue model is. (For my site: revenue is none, costs out of pocket.) Existing server hosting is *nix, Apache, some WSGI. Client-side I have MacOS. Any advice?
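    A minimal sketch of the pre-generated, paginated gallery idea, in Python; the directory names, page size and thumbnail naming convention below are assumptions for illustration, not details from the question (it also assumes thumbnails already exist next to the originals as <name>.thumb.jpg):
      # paginate_album.py - a rough sketch, not a drop-in replacement for MarginalHack's album
      from pathlib import Path

      PHOTOS_DIR = Path("photos")      # hypothetical source directory
      OUT_DIR = Path("album")          # hypothetical output directory
      PER_PAGE = 50                    # pictures per generated page

      def build_pages():
          pictures = sorted(p.name for p in PHOTOS_DIR.glob("*.jpg")
                            if ".thumb." not in p.name)
          pages = [pictures[i:i + PER_PAGE] for i in range(0, len(pictures), PER_PAGE)]
          OUT_DIR.mkdir(exist_ok=True)
          for n, page in enumerate(pages, start=1):
              links = []
              if n > 1:
                  links.append('<a href="page%d.html">previous</a>' % (n - 1))
              if n < len(pages):
                  links.append('<a href="page%d.html">next</a>' % (n + 1))
              body = "\n".join(
                  '<a href="../photos/%s"><img src="../photos/%s"></a>'
                  % (name, name.replace(".jpg", ".thumb.jpg")) for name in page)
              html = "<html><body>%s<hr>%s</body></html>" % (" | ".join(links), body)
              (OUT_DIR / ("page%d.html" % n)).write_text(html)

      if __name__ == "__main__":
          build_pages()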

    Read the article

  • Hard time installing Ubuntu

    - by Nick
    I have an MSI GT780DXR that currently boots Windows 7. I've been trying to dual-boot Windows 7 and Ubuntu for some time now. Here are the specs that I think would make a difference: Windows 7; 500GB*2 RAID 0 hard drives (hardware RAID, though I'm not sure if it's a dedicated RAID card); 7200RPM; Nvidia GT570M. Background: I tried to install 12.04 (64-bit) a few times, but the Desktop live CD and pendrive boot to a black screen. I've tried Wubi but it boots to a black screen as well. I then tried the alternate 12.04 (64-bit) installer and went through the installation all the way until partitioning. I let Ubuntu detect the RAID setup and I set up my swap, /, and home partitions, using my free space to create the three partitions. I tried to resize the Windows drive and it told me I couldn't and to be happy with my current setup. When I finally got past that, I got an error installing GRUB 2, decided to skip it, and continued on to finish the installation. When I tried to boot up I got an invalid partition table error. A Windows recovery disc and a GParted live CD couldn't find any hard drives. I ended up following advice and typed this into the recovery command prompt: bootrec /fixmbr bootrec /fixboot bootrec /rebuildBcd It worked and here I am now. The question is, how would I be able to dual-boot Windows 7 and Ubuntu 12.04 with this information? Thanks,

    Read the article

  • Floating point undesirable in highly critical code?

    - by Kirt Undercoffer
    Question 11 in the Software Quality section of "IEEE Computer Society Real-World Software Engineering Problems" (Naveda, Seidman) lists floating-point computation as undesirable because "the accuracy of the computations cannot be guaranteed". This is in the context of computing acceleration for an emergency braking system for a high-speed train. This thinking seems to be invoking possible errors in small differences between measurements of a moving object, but small differences at slow speeds aren't a problem (or shouldn't be), and small differences between two measurements at high speed are irrelevant. Can there be a problem with small roundoff errors during deceleration for an emergency braking system? This problem has been observed with airplane braking systems, resulting in hydroplaning, but could it actually happen in the context of a high-speed train? The concern about floating-point errors seems not to be well founded in this context. Any insight? The floating point is used for acceleration, so perhaps the concern is inching over a speed limit? But floating point should be just fine if they use a double in whatever implementation language. The actual problem in the text states: During the inspection of the code for the emergency braking system of a new high-speed train (a highly critical, real-time application), the review team identifies several characteristics of the code. Which of these characteristics are generally viewed as undesirable? The code contains three recursive functions (well, that one is obvious). The computation of acceleration uses floating-point arithmetic. All other computations use integer arithmetic. The code contains one linked list that uses dynamic memory allocation (the second obvious problem). All inputs are checked to determine that they are within expected bounds before they are used.
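    A quick Python illustration of the roundoff concern (the magnitudes are invented and not taken from the braking-system problem): accumulating many small floating-point increments drifts slightly from the exact value, while the equivalent scaled-integer arithmetic does not.
      # Accumulate a 0.0001 m/s change in speed, 10,000 times (made-up magnitudes).
      step = 0.0001               # not exactly representable as a binary double
      total_float = 0.0
      for _ in range(10_000):
          total_float += step

      # The same computation in scaled-integer arithmetic (units of 0.0001 m/s).
      step_int = 1
      total_int = 0
      for _ in range(10_000):
          total_int += step_int

      print(total_float)          # e.g. 0.9999999999999062 -- accumulated roundoff
      print(total_int / 10_000)   # exactly 1.0, converted back only at the end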

    Read the article

  • New Write Flash SSDs and more disk trays

    - by Steve Tunstall
    In case you haven't heard, the write-flash SSDs for the ZFSSA have been updated. Much faster now for the same price. Sweet. The new write-flash SSDs have a new part number, 7105026, so make sure you order the right ones. It's important to note that you MUST be on code level 2011.1.4.0 or higher to use these. They have increased in IOPS from 6,000 to 11,000, and in throughput from 200MB/s to 350MB/s. Also, you can now add six SAS HBAs (up from four) to the 7420, allowing you to have three SAS channels with 12 disk trays each, for a new total of 36 disk trays. With 3TB drives, that's 2.5 petabytes. Is that enough for you? Make sure you add new cards to the correct slots. I've talked about this before, but here is the handy-dandy matrix again so you don't have to go find it. Remember the rules: you can have six of any one kind of card (like six 10GigE cards), but you only really get eight slots, since you have two SAS cards no matter what. If you want more than 12 disk trays, you need two more SAS cards, so think about expansion later, too. In fact, if you are going to have two different speeds of drives, in other words if you want to mix 15K and 7,200 RPM drives in the same system, I would highly recommend two different SAS channels. So I would want four SAS cards in that system, no matter how many trays you have.
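    The petabyte figure works out if each tray is a 24-slot shelf (an assumption on my part here, so check the shelf model you actually order); a quick sanity check in Python:
      trays = 36           # three SAS channels x 12 trays each, as described above
      slots_per_tray = 24  # assumption: 24-slot disk shelves
      drive_tb = 3

      raw_tb = trays * slots_per_tray * drive_tb
      print(raw_tb, "TB raw")   # 2592 TB raw, in the same ballpark as the 2.5 PB quoted above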

    Read the article

  • Expanding RAID-5

    - by Garry
    I'm new to RAID and trying to get my head around things. I have owned a Drobo in the past (which I liked) but it failed. Here's a hypothetical scenario: assume I set up a RAID-5 array consisting of four 1TB hot-swappable 2.5" SATA drives. I name this volume 'My Data'. By my calculations, that would give me 2.7TB of usable space and the ability to recover if a single drive fails. I have a few questions: What happens if I pull out a single 1TB drive and replace it with a 2TB drive? Would the array automatically rebuild itself with no issues? Would the maximum capacity remain 2.7TB? If number (1) above is true and the array rebuilds itself with three 1TB drives and one 2TB drive, what would happen if I then pulled another 1TB drive out and stuck in a 2TB drive? (You can see where I'm going here, can't you?) Would I eventually be able to gain more storage by gradually adding bigger drives? From a practical point of view, how much input is required from me as the end user whilst these drives are being pulled out and put in? On the Drobo, the storage space just automagically handles itself. Would I have to be actively involved in telling Ubuntu what was going on, or would any of it be automated? Thanks in advance,
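    For the capacity questions, the usual RAID-5 rule of thumb is usable space = (number of drives - 1) x size of the smallest member, so swapping in bigger drives one at a time gains nothing until every member has been replaced (and the array has then been grown). A generic sketch of the arithmetic, not a statement about any particular controller:
      def raid5_usable(drive_sizes_tb):
          """Usable capacity of a RAID-5 set: one drive's worth of space goes to parity,
          and every member is treated as if it were as small as the smallest drive."""
          return (len(drive_sizes_tb) - 1) * min(drive_sizes_tb)

      print(raid5_usable([1, 1, 1, 1]))  # 3 TB usable (about 2.7 TiB, the figure above)
      print(raid5_usable([2, 1, 1, 1]))  # still 3 TB -- one bigger drive changes nothing
      print(raid5_usable([2, 2, 2, 2]))  # 6 TB once every member has been replaced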

    Read the article

  • TraceTune: Improved Comment View

    - by Bill Graziano
    I wanted an easier way to identify queries I’d already looked at so I could skip them.  I’ve been entering comments for each query as I review it.  These comments typically fall into three categories: a change I made, no easy fix available, or something needs to be changed on the client.  TraceTune now highlights any statement with a comment in bold.  If you hover over the statement you’ll see the most recent comment for that statement. This gives me a quick way to see what’s new and identify those queries that still need work.  This is especially helpful when I come back to a server after weeks or months away.  These comments help jar my memory and remind me what I’ve worked on. I made the font slightly smaller in some of the tables.  It’s still readable but I’m able to get more of a SQL statement on the screen.  I also got to re-experience the pain of Internet Explorer, Chrome and Firefox all displaying text (and pop-up text) slightly differently. Seeing the comments on a trace has been a big help to me.  I often do a round of tuning and then don’t come back to a server until months later.  Having the comments available helps me get back up to speed quickly.

    Read the article

  • Java - Using Linear Coordinates to Check Against AI [closed]

    - by Oliver Jones
    I'm working on some artificial intelligence, and I want my AI not to run into given coordinates, as these are references to a wall/boundary. To begin with, every time my AI hits a wall, it records a reference to that position (x,y). When it hits the same wall three times, it uses linear check points to 'imagine' there is a wall going through these coordinates. I now want to prevent my AI from going into that wall again. To detect if my coordinates make a straight line, I use: private boolean collinear(double x1, double y1, double x2, double y2, double x3, double y3) { return (y1 - y2) * (x1 - x3) == (y1 - y3) * (x1 - x2); } This returns true if the given points are collinear with one another. So my problems are: How do I determine whether my robot is approaching the wall on its current trajectory? And instead of Java 'imagining' a line only from point 1 to point 3, how do I 'imagine' a line extending through these collinear coordinates all the way to infinity (or close to it)? I have a feeling this is going to require some confusing trigonometry? (REPOST: http://stackoverflow.com/questions/13542592/java-using-linear-coordinates-to-check-against-ai)
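    One way to handle the 'line to infinity' part is to keep the infinite line defined by two of the recorded hits and test the robot's position and heading against that line, rather than against the segment between the points. A Python sketch of the geometry (the names are invented, and the same arithmetic ports directly to Java):
      import math

      def side_of_line(x1, y1, x2, y2, px, py):
          """Signed test: > 0 on one side of the infinite line through (x1,y1)-(x2,y2),
          < 0 on the other side, 0 on the line itself."""
          return (x2 - x1) * (py - y1) - (y2 - y1) * (px - x1)

      def distance_to_line(x1, y1, x2, y2, px, py):
          """Perpendicular distance from (px,py) to the infinite line
          (assumes the two wall points are distinct)."""
          return abs(side_of_line(x1, y1, x2, y2, px, py)) / math.hypot(x2 - x1, y2 - y1)

      def approaching_wall(x1, y1, x2, y2, px, py, heading_rad, step=5.0):
          """True if moving `step` units along the current heading brings the robot
          closer to the wall line, or across it."""
          nx = px + step * math.cos(heading_rad)
          ny = py + step * math.sin(heading_rad)
          crosses = side_of_line(x1, y1, x2, y2, px, py) * side_of_line(x1, y1, x2, y2, nx, ny) <= 0
          closer = distance_to_line(x1, y1, x2, y2, nx, ny) < distance_to_line(x1, y1, x2, y2, px, py)
          return crosses or closer
    No trigonometry beyond the heading vector is needed. Note also that the exact == comparison in collinear() is fragile with doubles; comparing the difference against a small epsilon avoids missing nearly-collinear hits.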

    Read the article

  • The clock problem - to if or not to if?

    - by trejder
    Let's say we have a simple digital clock. To "power" it, we use a routine executed every second, in which we update the seconds part. But what about the minutes and hours parts? Which is better / more professional / offers better performance: Ignore all checking and update the hour, minute and seconds parts each time, every second. Use an if plus a variable to check whether 60 (or 3600) seconds have passed, and update the minute / hour part only at those precise moments. This leads us to a question: which is better -- unnecessary drawing (the first approach) or extra ifs? I've just spotted a JavaScript digital clock, one of millions of similar ones on one of billions of pages, and I noticed that all three parts (hours, minutes and seconds) are updated every second, though the first changes its value only once per 3600 seconds and the second once per 60 seconds. I'm not too experienced a developer, so I might be wrong. But everything I've learnt up until now tells me that ifs are far better than executing drawing / refreshing sequences only to draw the same content.
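    For what it's worth, a common middle ground is neither unconditional redrawing nor a chain of counters: derive each field from the current time, and redraw a field only when its derived value actually changed. A Python-flavoured sketch of that idea (the display function is a stand-in for whatever actually draws):
      import time

      last_drawn = {"h": None, "m": None, "s": None}

      def display(field, value):
          # stand-in for the real drawing / DOM-update code
          print(field, "->", "%02d" % value)

      def tick():
          now = time.localtime()
          current = {"h": now.tm_hour, "m": now.tm_min, "s": now.tm_sec}
          for field, value in current.items():
              if last_drawn[field] != value:   # one cheap comparison per field...
                  display(field, value)        # ...and a redraw only when the value changed
                  last_drawn[field] = value

      tick()   # called once a second by whatever timer drives the clock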

    Read the article

  • Will you choose JavaFX for Development?

    - by javafx4you
    A few weeks ago, a poll on the home page of java.net caught my eye, because it was related to JavaFX. Its title: Will you use JavaFX for development once it's fully ported to Mac and Linux platforms? Usually, the results for this type of poll are published on the editor's Daily Blog soon after the poll closes. For some reason, this didn't happen for the JavaFX poll, so I'll take a shot at interpreting the results. The results found on java.net look pretty close to the following: Although this way of looking at the results already gives us an idea of how much traction JavaFX is getting, there are just too many types of answers, which makes it hard to read. The answers "maybe" and "I don't know" are awfully similar, so I'm tempted to collapse these together. Then there is "No, I don't do that type of development", which just doesn't belong here, as obviously developers who have chosen this answer don't develop Rich Internet Apps, and therefore I will adapt the % results accordingly. Finally, I've been tempted to combine the top three categories just to simplify the results. This gives me the following chart: Whether you prefer the original graph or my simplified take on it, one thing is sure: less than 10% of the developers who have taken this poll plan to stick to another toolkit (presumably Swing or SWT), while the vast majority is inclined to use JavaFX. When you take into account that JavaFX 2.0 is pretty much a "new" API (no more JavaFX Script), I think these are some pretty good results, 6 months after the official release of JavaFX 2.0.

    Read the article

  • Installing 13.04 on an EFI partition - Share with Windows 8?

    - by mengelkoch
    Information I've found here suggests that for my system, I need to install 13.04 into an EFI-type partition, since it needs to boot as UEFI. I also understand it is advisable to have only ONE EFI partition on the disk; I've read here that it is OK for Ubuntu and Windows to share the same partition (please confirm). When I try to install into the existing EFI drive, I get the message "No root file system is defined. Please correct from partitioning menu." Do I change the EFI boot partition to another type? Doesn't that defeat the purpose? If I change it to Ext4 Journaling File System, I am given the opportunity to define the '/' Mount point. I haven't proceeded beyond this point for fear I am going to destroy Windows 8 by altering this partition. BTW, I created three partitions in Windows before installing, per the helpful response to my previous question. But if I try to install into the partition I created for Ubuntu, I get the "No root file system..." error again.

    Read the article

  • Endeca Information Discovery 3-Day Hands-on Training Boot-Camp

    - by Mike.Hallett(at)Oracle-BI&EPM
    For Oracle Partners, on October 15-17, 2012 in Paris, France: Register here. The Oracle Endeca Information Discovery (OEID) Boot Camp is designed to give partners an understanding of OEID’s features and how it complements the existing Oracle Business Intelligence suite. Participants will learn how to develop and implement solutions using a Data Discovery method. Training is in English. What will be covered? The OEID Boot Camp is a three-day class with a combination of lecture and hands-on exercises, tailored to make participants aware of the Oracle Endeca Information Discovery platform and to give them valuable skills for the implementation of projects. Prerequisites: You must bring a laptop with you for the hands-on labs. Attendees should have experience and familiarity with the basic concepts of business intelligence and be OPN Partners with Gold or above membership. This training is free to OPN Partners. Click here for more information. Where and when? Monday, October 15th to Wednesday, October 17th inclusive, 9:00 - 18:00, at Oracle France, 15 boulevard Charles de Gaulle, 92715 Colombes (Access Venue Map). Register here. NOTE: there is a limited number of seats; you will get confirmation within 2 weeks.

    Read the article

  • Best way to update UI when dealing with data synchronization

    - by developerdoug
    I'm working on a bug at work. The app is written in Objective-C for an iOS device, the iPad. I'm the new guy there and I've been given a hard task. Sometimes the UIButton text property does not show the correct state when syncing. Basically, when the app is syncing, my UI control should say "Syncing", and when it's not syncing it should display "Updated @ [specific date]". Right now there is a property on the app delegate called "SyncInProgress". Querying / syncing, which occurs on a background thread, updates a counter, and the property returns a bool by checking the expression 'counter > 0'. There are three states I need to deal with: sync has started; sync is updating tables; sync finished. These items need to occur in order. My coworker suggested taking a state-based approach instead of just responding to events. I'm not sure how to go about that. Would it be best to have the UI receive a notification to determine what state it's in, or to poll every so often to see if the state changed? Here are two posts that I put on stackoverflow, in the last few days, that relate to this: http://stackoverflow.com/questions/11025469/ios-syncing-using-a-state-approach-instead-of-just-reacting-to-events http://stackoverflow.com/questions/11037930/viewcontroller-when-viewwillappear-called-does-not-always-correctly-reflect-stat Any ideas that anyone might have are very much appreciated. Thanks, developerDoug
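    A bare-bones sketch of the state-based approach the coworker suggested, in Python with invented names (on iOS the same shape would typically be an enum plus NSNotificationCenter or KVO, so the UI observes state changes rather than polling for them):
      from enum import Enum

      class SyncState(Enum):
          STARTED = 1
          UPDATING_TABLES = 2
          FINISHED = 3

      class SyncManager:
          def __init__(self):
              self._state = SyncState.FINISHED
              self._observers = []                 # UI callbacks interested in state changes

          def observe(self, callback):
              self._observers.append(callback)

          def set_state(self, state):
              if state is self._state:
                  return
              self._state = state
              for callback in self._observers:
                  callback(state)                  # push the change; the UI never polls

      def update_button(state):
          # the view layer just maps states to button text
          text = {SyncState.STARTED: "Syncing",
                  SyncState.UPDATING_TABLES: "Syncing",
                  SyncState.FINISHED: "Updated @ <date>"}[state]
          print("button:", text)

      manager = SyncManager()
      manager.observe(update_button)
      for s in (SyncState.STARTED, SyncState.UPDATING_TABLES, SyncState.FINISHED):
          manager.set_state(s)                     # the three states, in order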

    Read the article

  • How to Structure a Trinary state in DB and Application

    - by ABMagil
    How should I structure a trinary state, in the DB especially, but also in the application? For instance, I have user feedback records which need to be reviewed before they are presented to the general public. This means a feedback reviewer must see the unreviewed feedback, then approve or reject it. I can think of a couple of ways to represent this: Two boolean flags: Seen/Unseen and Approved/Rejected. This is the simplest and probably the smallest database solution (presumably boolean fields are simple bits). The downside is that there are really only three states I care about (unseen/approved/rejected) and this creates four states, including one I don't care about (a record which is seen but not approved or rejected is essentially unseen). A string column in the DB with constants/enum in the application, using Rating::APPROVED_STATE within the application and letting it equal whatever it wants in the DB. This is a larger column in the DB, and I'm concerned about doing string comparisons whenever I need these records. Perhaps mitigable with an index? A single boolean column, but allowing nulls. True is approved, false is rejected, and null is unseen. I'm not sure of the pros/cons of this solution. What rules should I use to guide my choice? I'm already thinking in terms of DB size and the cost of finding records based on state, as well as the readability of the code that ends up using this structure.
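    Whichever storage shape wins, keeping the three states as one explicit enumeration in the application avoids the 'fourth state' problem of the two-boolean option. A Python sketch of that idea (the names and stored values are placeholders, not the app's real schema):
      from enum import Enum

      class ReviewState(Enum):
          UNSEEN = "unseen"       # stored as a short string (or a tiny integer) in one column
          APPROVED = "approved"
          REJECTED = "rejected"

      def visible_to_public(state):
          return state is ReviewState.APPROVED

      def needs_review(state):
          return state is ReviewState.UNSEEN

      # Mapping to and from the database column is a single conversion at the boundary.
      row_value = "unseen"
      state = ReviewState(row_value)
      print(needs_review(state), visible_to_public(state))   # True False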

    Read the article

  • XNA 2D Top Down game - FOREACH didn't work for checking Enemy and Switch-Tile

    - by aldroid16
    Here is the gameplay. There are three conditions. The player steps on a Switch-Tile and it becomes false. 1) When the Enemy steps on it (trapped) AND the player steps on it too, the Enemy will be destroyed. 2) But when the Enemy steps on it AND the player DIDN'T step on it too, the Enemy will escape. 3) If the Switch-Tile condition is true, then nothing happens. The effect is activated when the Switch-Tile is false (the player stepped on the Switch-Tile). Because there are a lot of Enemies and a lot of Switch-Tiles, I have to use foreach loops. The problem is that after the Enemy has ESCAPED (case 2) and steps on another Switch-Tile again, nothing happens to the enemy! I don't know what's wrong. The effect should be the same, but the Enemy passes the Switch-Tile like nothing happened (it should be trapped). Can someone tell me what's wrong? Here is the code: public static void switchUpdate(GameTime gameTime) { foreach (SwitchTile switch in switchTiles) { foreach (Enemy enemy in EnemyManager.Enemies) { if (switch.Active == false) { if (!enemy.Destroyed) { if (switch.IsCircleColliding(enemy.EnemyBase.WorldCenter, enemy.EnemyBase.CollisionRadius)) { enemy.EnemySpeed = 10; //reducing Enemy Speed if it enemy is step on the Tile (for about two seconds) enemy.Trapped = true; float elapsed = (float)gameTime.ElapsedGameTime.Milliseconds; moveCounter += elapsed; if (moveCounter> minMoveTime) { //After two seconds, if the player didn't step on Switch-Tile. //The Enemy escaped and its speed back to normal enemy.EnemySpeed = 60f; enemy.Trapped = false; } } } } else if (switch.Active == true && enemy.Trapped == true && switch.IsCircleColliding(enemy.EnemyBase.WorldCenter, enemy.EnemyBase.CollisionRadius) ) { //When the Player step on Switch-Tile and //there is an enemy too on this tile which was trapped = Destroy Enemy enemy.Destroyed = true; } } } }
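    One thing that stands out in the code above is that moveCounter appears to be a single shared field that is never reset: after the first trap it already exceeds minMoveTime, so every later trap takes the 'escape' branch in the same frame and looks like nothing happened. A per-enemy timer that is reset when the trap starts matches the three conditions; here is the shape of that fix as a Python sketch (names are mine, and it assumes enemy and tile objects with the attributes shown, not the original C# types):
      # Sketch of per-enemy trap bookkeeping (Python pseudocode for the C# above).
      TRAP_SECONDS = 2.0

      def update_switches(switch_tiles, enemies, elapsed_seconds):
          for tile in switch_tiles:
              for enemy in enemies:
                  if enemy.destroyed:
                      continue
                  colliding = tile.is_circle_colliding(enemy.world_center, enemy.collision_radius)
                  if not tile.active and colliding:
                      if not enemy.trapped:
                          enemy.trapped = True
                          enemy.speed = 10
                          enemy.trap_timer = 0.0          # reset *this enemy's* timer on capture
                      else:
                          enemy.trap_timer += elapsed_seconds
                          if enemy.trap_timer > TRAP_SECONDS:
                              enemy.trapped = False       # escaped: the player never stepped on the tile in time
                              enemy.speed = 60
                  elif tile.active and enemy.trapped and colliding:
                      enemy.destroyed = True              # player stepped on the tile while the enemy was trapped on it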

    Read the article

  • Have there been attempts to make object containers that search for valid programs by auto wiring compatible components?

    - by Aaron Anodide
    I hope this post isn't too "Fringe" - I'm sure someone will just kill it if it is :) Three things made me want to reach out about this now: Decoupling is at the forefront of design. TDD inspires the idea that it doesn't matter how a program comes to exist as long as it works. Seeing how often the adapter pattern is applied to achieve (1). I'm almost sure this has been tried; I have a memory of reading about it around the year 2000 or so. If I had to guess, it was maybe about an earlier version of the Java Spring framework. At that time we were not so far from the days when the belief was that computer programs could exhibit useful emergent behavior. I think the article said it didn't work, but it didn't say it was impossible. I wonder if since then it has been deemed impossible, or simply an illusion due to a false assumption of similarity between a brain and a CPU. I know this illusion existed because I had an internship in 1996 where I programmed neural nets that were supposedly going to exhibit "brain damage". STILL, after all that, I'm sitting around this morning unable to shake the idea that it should be possible to have a method of programming that allows autonomous components to find each other, attempt to collaborate, and have their outputs evaluated against a set of desired results.

    Read the article

  • A better alternative to incompatible implementations for the same interface?

    - by glenatron
    I am working on a piece of code which performs a set task in several parallel environments, where the behaviour of the different components in the task is similar but quite distinct. This means that my implementations are quite different, but they are all based on the relationships between the same interfaces, something like this: IDataReader -> ContinuousDataReader -> ChunkedDataReader IDataProcessor -> ContinuousDataProcessor -> ChunkedDataProcessor IDataWriter -> ContinuousDataWriter -> ChunkedDataWriter So in either environment we have an IDataReader, IDataProcessor and IDataWriter, and we can then use Dependency Injection to ensure that we have the correct one of each for the current environment: if we are working with data in chunks we use the ChunkedDataReader, ChunkedDataProcessor and ChunkedDataWriter, and if we have continuous data we use the continuous versions. However, the behaviour of these classes is quite different internally, and one could certainly not go from a ContinuousDataReader to the ChunkedDataReader even though they are both IDataReaders. This feels to me as though it is incorrect (possibly an LSP violation?) and certainly not a theoretically correct way of working. It is almost as though the "real" interface here is the combination of all three classes. Unfortunately, in the project I am working on, with the deadlines we are working to, we're pretty much stuck with this design; but if we had a little more elbow room, what would be a better design approach in this kind of scenario?
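    If the three pieces only ever make sense as a matched set, one common way out is to make that set the explicit unit of substitution: inject a single environment (an abstract factory) that hands back a mutually compatible reader, processor and writer, instead of wiring the three independently. A rough Python sketch of the shape (the names and stub classes are placeholders, not the project's real types):
      from abc import ABC, abstractmethod

      # Stand-in component classes; the real ones are the Chunked*/Continuous* types above.
      class ChunkedDataReader:    pass
      class ChunkedDataProcessor: pass
      class ChunkedDataWriter:    pass

      class DataEnvironment(ABC):
          """The real unit of substitution: a matched reader/processor/writer family."""
          @abstractmethod
          def reader(self): ...
          @abstractmethod
          def processor(self): ...
          @abstractmethod
          def writer(self): ...

      class ChunkedEnvironment(DataEnvironment):
          def reader(self):    return ChunkedDataReader()
          def processor(self): return ChunkedDataProcessor()
          def writer(self):    return ChunkedDataWriter()

      def run_task(env: DataEnvironment):
          # callers can no longer end up holding a chunked reader next to a continuous writer
          reader, processor, writer = env.reader(), env.processor(), env.writer()
          return reader, processor, writer

      run_task(ChunkedEnvironment())
    Whether the original design is strictly an LSP violation is arguable; the practical smell is that the three injected pieces are only substitutable together, and the factory makes that constraint explicit.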

    Read the article

  • radeon display driver clones monitors while using Xinerama

    - by gregmuellegger
    I'm trying to get my two Radeon HD 4770 cards working with three monitors. Xinerama works so far in the way that I have two fully working monitors were I can move windows from one to the other. My problem now is that my third monitor is a clone of my second monitor (displaying the exact same thing). These monitors are connected to the same graphic card ("Screen Middle" and "Screen Right" in the xorg.conf below). Here is my xorg.conf: Section "ServerLayout" Identifier "ThreeMonitors" Screen "Screen Left" 0 0 Screen "Screen Middle" RightOf "Screen Left" Screen "Screen Right" RightOf "Screen Middle" Option "Xinerama" EndSection Section "Monitor" Identifier "Monitor Left" Option "DPMS" EndSection Section "Monitor" Identifier "Monitor Middle" Option "DPMS" EndSection Section "Monitor" Identifier "Monitor Right" Option "DPMS" EndSection Section "Device" Identifier "Device Left" Driver "radeon" VendorName "ATI Technologies Inc" BoardName "ATI Radeon HD 4770 [RV740]" BusID "PCI:3:0:0" Screen 0 EndSection Section "Screen" Identifier "Screen Left" Device "Device Left" Monitor "Monitor Left" SubSection "Display" Depth 24 EndSubSection EndSection Section "Device" Identifier "Device Middle" Driver "radeon" VendorName "ATI Technologies Inc" BoardName "ATI Radeon HD 4770 [RV740]" BusID "PCI:2:0:0" Screen 0 EndSection Section "Screen" Identifier "Screen Middle" Device "Device Middle" Monitor "Monitor Middle" SubSection "Display" Depth 24 EndSubSection EndSection Section "Device" Identifier "Device Right" Driver "radeon" VendorName "ATI Technologies Inc" BoardName "ATI Radeon HD 4770 [RV740]" BusID "PCI:2:0:1" Screen 1 EndSection Section "Screen" Identifier "Screen Right" Device "Device Right" Monitor "Monitor Right" SubSection "Display" Depth 24 EndSubSection EndSection I'm using a fresh Kubuntu 10.10 installation with propsed-updates enabled since this repo contains a xorg fix for using multiple graphic cards. I hope someone can help me out. Very many thanks!!

    Read the article

  • What does SVN do better than git?

    - by doug
    No question that the majority of debates over programmer tools distill to either personal choice (by the user) or design emphasis, i.e., optimizing the design according to particular use cases (by the tool builder). Text editors are probably the most prominent example--a coder who works on Windows at work and codes in Haskell on a Mac at home values cross-platform support and compiler integration and so chooses Emacs over TextMate, etc. It's less common that a newly introduced technology is genuinely, demonstrably superior to the extant options. I wonder if this is in fact the case with version-control systems, in particular centralized VCS (CVS, SVN) versus distributed VCS (git, hg). I used SVN for about five years, and SVN is currently used where I work. A little less than three years ago, I switched to git (and GitHub) for all of my personal projects. I can think of a number of advantages of git over Subversion (which for the most part abstract to advantages of distributed over centralized VCS), but I cannot think of one counter-example--some task (that's relevant and arises in a programmer's usual workflow) that Subversion does better than git. The only conclusion I have drawn from this is that I don't have any data--not that git is better, etc. My guess is that such counter-examples exist, hence this question.

    Read the article

  • How does the new google maps make buildings and cityscapes 3D?

    - by Aerovistae
    Anyone who's seen the new Google maps has no doubt taken note of the incredible amount of three-dimensional detail in select American cities such as Boston, New York, Chicago, and San Francisco. They've even modeled the trees, bridges and some of the boats in the harbor! Minor architectural details are present. It's crazy. Looking at it up close, I've found there's a rectangular area around each of those cities, and anything within them is 3Dified, but it cuts off hard and fast at the edge, even if it's in the middle of a building. The edge of the rectangle is where the 3D stops. This leads me to think it's being done algorithmically (which would make sense, given the scale of the project, how many trees and buildings and details there are), and yet I can't imagine how that's possible. How could an algorithm model all these things without extensive data on their shapes and contours? How could it model the individual wires of a bridge, or the statues in a park? It must be done by hand, and yet how could it be for so much detail! Does anyone have any insight on this?

    Read the article

  • Cron prepending filename to script output

    - by Caitifty
    I'm having an issue with unwanted lines being added to files output by a cron job. I have a script in /etc/cron.hourly which selects some data from a MySQL database and saves it in a text file in /var/www. When I run the script as root, it does exactly what I expect it to do. When the script is executed by cron, it creates the same file, but prepends the following three lines at the top of the output file: :::::::::::::: /var/www/outputfilename :::::::::::::: I can't for the life of me work out how to stop this unwanted behavior. The line in /etc/crontab for cron.hourly is the default "44 * * * * root cd / && run-parts --report /etc/cron.hourly". If I use su to change to root and do "cd / && run-parts --report /etc/cron.hourly", the script runs as expected and the output doesn't have the mysterious additional 3 lines. I've also tried removing the --report flag from the run-parts command in case that was somehow connected, but no joy. Finally, perusing the cron log output in /var/log/syslog just says cron.hourly ran, without giving any additional information. Any suggestions on solving this weird problem are most welcome.

    Read the article

  • How to keep the trunk stable when tests take a long time?

    - by Oak
    We have three sets of test suites: a "small" suite, taking only a couple of hours to run; a "medium" suite that takes multiple hours, usually run every night (nightly); and a "large" suite that takes a week or more to run. We also have a bunch of shorter test suites, but I'm not focusing on them here. The current methodology is to run the small suite before each commit to the trunk. Then the medium suite runs every night, and if in the morning it turns out it failed, we try to isolate which of yesterday's commits was to blame, roll back that commit and retry the tests. A similar process, only at a weekly instead of nightly frequency, is used for the large suite. Unfortunately, the medium suite fails pretty frequently. That means that the trunk is often unstable, which is extremely annoying when you want to make modifications and test them. It's annoying because when I check out from the trunk, I cannot know for certain that it's stable, and if a test fails I cannot know for certain whether it's my fault or not. My question is: is there some known methodology for handling these kinds of situations in a way which will leave the trunk always in top shape? E.g. "commit into a special pre-commit branch which will then periodically update the trunk every time the nightly passes". And does it matter if it's a centralized source control system like SVN or a distributed one like git? By the way, I am a junior developer with limited ability to change things; I'm just trying to understand if there's a way to handle this pain I am experiencing.

    Read the article

  • At the end of my rope

    - by hvgotcodes
    I am a contractor to a big company. Currently there are three developers on the project, myself included. The problem is that the other two developers don't really get it. By "it" I mean the following: They don't understand the best practices for the technology we are using; after six months of me and others giving them examples, there are terrible anti-patterns being used. They are "copy and paste" programmers who produce primarily spaghetti code. They constantly break things, implementing changes but not doing a basic smoke test to see if all is good. They refuse to, or only rarely, ask for code reviews. They refuse to, or only rarely, do basic things like formatting code. No documentation on any classes (jsdocs). They're afraid to delete code that doesn't do anything. They leave commented-out code blocks everywhere even though we have version control. I find myself getting more and more frustrated as I format others' code, fix bugs, discover functionality that is broken, and create abstractions to remove the spaghetti. I really don't know what to do. I try not to get too frustrated, but it's just a mess. I like these people as people, but I feel like the coding situation is so bad that I could move faster if they simply browsed the web all day. Would it be out of line to ask our manager to review the others' SVN commit access, so that commits can only be done after a review by someone who is knowledgeable in what we are doing? As a contractor, I'm not sure if that's the best move. Is there a subtle/not so subtle way of making it clear how many things I am fixing?

    Read the article

  • Should I use a config file or database for storing business rules?

    - by foiseworth
    I have recently been reading The Pragmatic Programmer, which states: Details mess up our pristine code—especially if they change frequently. Every time we have to go in and change the code to accommodate some change in business logic, or in the law, or in management's personal tastes of the day, we run the risk of breaking the system—of introducing a new bug. Hunt, Andrew; Thomas, David (1999-10-20). The Pragmatic Programmer: From Journeyman to Master (Kindle Locations 2651-2653). Pearson Education (USA). Kindle Edition. I am currently programming a web app that has some models with properties that can only come from a set of values, e.g. (not the actual example, as the web app data is confidential): light-type = sphere / cube / cylinder The light type can only be one of the above three values, but according to TPP I should always code as if it could change, and place the values in a config file. As there are several instances of this throughout the app, my question is: should I store possible values like these in: a config file: 'light-types' = array(sphere, cube, cylinder), 'other-type' = value, 'etc' = etc-value; a single table in a database with one line for each config item; a database with a table for each config item (e.g. table: light_types; columns: id, name); or some other way? Many thanks for any assistance / expertise offered.
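    If the values do end up in a flat config file, the loading side can stay tiny. A minimal Python sketch of the idea (the file name and keys are invented for illustration, not taken from the app):
      import json

      # rules.json, kept next to the code and under version control:
      # { "light-types": ["sphere", "cube", "cylinder"], "other-type": ["value-a", "value-b"] }

      with open("rules.json") as f:
          RULES = json.load(f)

      def is_valid(field, value):
          """Validate a submitted value against the configured list for that field."""
          return value in RULES.get(field, [])

      print(is_valid("light-types", "cube"))     # True
      print(is_valid("light-types", "pyramid"))  # False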

    Read the article
