Search Results

Search found 28593 results on 1144 pages for 'best pratices'.

Page 740/1144

  • Laser range finder, what language to use? Beginner advice

    - by DrOnline
    I hope this is the right place. I am a programming beginner, and I want to make a laser range finder, and I need advice about how to proceed. In a few weeks I will get a lot of dirt cheap 3-5V lasers and some cheap USB webcams. I will point the laser and webcam in parallel, and somehow use trigonometry and programming to determine distance. I have seen online that others have done it this way. I have purposefully not looked at the details too much because I want to develop it on my own and learn, but I know the general outline.

    I have a general idea of how to proceed. The program loads in a picture from the webcam. I don't really know how images work, but I imagine there is a format that is basically an array of RGB values; is this right? I will load in the red values and find the most red one. I know the height difference between the laser and the cam, I know the center dot in the image, and I know the redmost dot. I'm sure there's some way to figure out a range from that.

    To the point:
    1) Is my reasoning sound thus far, especially in terms of image analysis? I don't need complete solutions, just general points.
    2) What I need to figure out is what platform to use. I have an Arduino, but apparently it's too weak to process images (I read that online). I know some C, I know some Python, and I have Matlab. Which is the best option?

    I do not need high sampling rates, and I have not decided whether it should be automated or whether I should make a GUI with a button to press for samples. I will keep it simple and expand, I think. I also do not need it to be super accurate; I'm just having fun here. Advice!
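    A rough sketch of the image-analysis step described above, in Python with NumPy (the field of view, the laser-to-camera offset, and the function names are assumed placeholders, not values from the question):

    ```python
    # Sketch only: find the reddest pixel, then triangulate range from how far
    # that pixel sits below the image centre. All calibration numbers are assumed.
    import numpy as np

    def find_laser_dot(rgb):
        """Return (row, col) of the pixel where red most dominates green/blue."""
        r = rgb[:, :, 0].astype(int)
        g = rgb[:, :, 1].astype(int)
        b = rgb[:, :, 2].astype(int)
        redness = r - (g + b) // 2              # crude "how red is this pixel" score
        return np.unravel_index(np.argmax(redness), redness.shape)

    def distance_from_offset(dot_row, image_height,
                             laser_offset_m=0.06,      # laser-to-camera gap (assumed)
                             vertical_fov_deg=43.0):   # webcam field of view (assumed)
        """Basic parallax: D = h / tan(theta), theta from pixels below centre."""
        pixels_from_centre = dot_row - image_height / 2.0
        radians_per_pixel = np.radians(vertical_fov_deg) / image_height
        theta = pixels_from_centre * radians_per_pixel
        if theta <= 0:
            return float('inf')                 # dot at or above centre: out of range
        return laser_offset_m / np.tan(theta)
    ```

    This assumes the image is already a height x width x 3 RGB array (for example, loaded with OpenCV and converted from BGR); in practice the angle-per-pixel relation also needs a calibration offset measured against a few known distances.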

    Read the article

  • Browser Item Caching and URLs

    - by Damon Armstrong
    Ultimately you want the browser to cache things like Flash components, Silverlight XAP files, and images to avoid users having to download them each time they hit a page.  But during development it’s very useful to NOT have things cached so you are always looking at the most up-to-date file.  You can always turn off caching in your browser, but if you use your browser for daily browsing then it’s not the greatest option.  To avoid caching we would always just slap a randomly generated GUID on the back of the URL of any items we didn’t want to cache (e.g. http://someserver.com/images/image.png?15f073f5-45fc-47b2-993b-fbaa781b926d).  It worked well, but you had to remember to remove the random GUID when it went to production.

    However, on a GimmalSoft project we recently implemented, someone showed me a better way that didn’t need to be removed from production code: just slap the last modified date of the file on the end of the URL (or something generated from the modification date).  This approach was kind of genius because it gives you the best of both worlds.  If you modify the file, the browser goes out and gets the newest version.  If you don’t modify the file, it has the cached copy.  Very helpful!  The only downside is that you do have to read the modification date from the file, which does technically take some time.
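    The original post is about an ASP.NET project, but the idea is language-neutral; a minimal sketch in Python (the static root and URL shape are assumed examples, not from the post):

    ```python
    # Sketch: build a URL whose query string changes whenever the file changes,
    # so browsers cache aggressively but still pick up new versions.
    import os

    STATIC_ROOT = "/var/www/static"   # assumed location of the served files

    def versioned_url(relative_path):
        """Return e.g. '/static/images/image.png?v=1334875023' based on mtime."""
        mtime = int(os.path.getmtime(os.path.join(STATIC_ROOT, relative_path)))
        return "/static/%s?v=%d" % (relative_path, mtime)
    ```

    As the post notes, the one cost is a filesystem stat per generated URL; caching the computed value per file in memory keeps that negligible.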

    Read the article

  • need a different backup solution

    - by DigitalJedi
    I just built a new media/backup server using Ubuntu 12.04 64-bit. I installed a hard drive to be used only for music, pictures, and videos and formatted it FAT32 so my one and only Windows PC could map those folders as network shares. My laptop, also running Ubuntu 12.04, is what I am using the most, so new media is first downloaded on my laptop. I've already got the music, videos, and pictures folders from my server mounting as shares on my laptop on boot, thanks to some fstab edits and sshfs.

    Now I'm wanting either an app or script that could back up any new files I add to my local media folders to the mounted folders on my server. I've been Googling all day and found a few apps like rsync, but they seem to have issues with ext4-to-vfat backups. I thought maybe a script would be best, but I'm new to scripting in Linux and don't want to mess anything up. Basically I am looking for something that will back up only newly added files to the server. I figure I could schedule it once a week.

    There are some stipulations. For example, my local music folder has over 700 folders for each artist/band, then sub-folders inside those for albums. I want something smart enough to only copy newly added content, so I'm guessing the modified date would probably be a good condition if I were scripting. I'm rambling. Any suggestions would be GREATLY appreciated. I'm not finding anything to suit my needs. I'm almost to the point of just learning bash scripting so I can write something, but then it would be a couple of weeks or so before I have a possible solution, and I'd like something in place sooner.
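    For what it's worth, the ext4-to-vfat complaints usually come down to FAT32's 2-second timestamp granularity (rsync's --modify-window option exists for exactly that case). Here is a minimal Python sketch of the "copy only new or changed files" idea, with assumed example paths:

    ```python
    # Sketch: one-way copy of files that are missing or newer on the source side.
    # SRC/DST are assumed examples; FAT32 rounds mtimes, hence the 2-second slack.
    import os
    import shutil

    SRC = "/home/user/Music"         # assumed local folder
    DST = "/mnt/server/Music"        # assumed sshfs-mounted share
    MTIME_SLACK = 2                  # seconds of tolerance for FAT32 timestamps

    def backup_new_files(src, dst):
        for dirpath, _dirnames, filenames in os.walk(src):
            target_dir = os.path.join(dst, os.path.relpath(dirpath, src))
            if not os.path.isdir(target_dir):
                os.makedirs(target_dir)
            for name in filenames:
                s = os.path.join(dirpath, name)
                d = os.path.join(target_dir, name)
                if (not os.path.exists(d)
                        or os.path.getmtime(s) > os.path.getmtime(d) + MTIME_SLACK):
                    shutil.copy2(s, d)   # copy2 keeps the mtime as far as FAT32 allows

    if __name__ == "__main__":
        backup_new_files(SRC, DST)
    ```

    Dropping a call like this into a weekly cron entry would match the once-a-week schedule mentioned above.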

    Read the article

  • How to ALWAYS link an image in MS Word 2010 instead of embedding it?

    - by grunwald2.0
    I want to only link, not embed, pictures in my Word documents, and I have a lot of images to insert! I want to spare myself the additional click into the dropdown menu once and for all, so the question is: is there any way to set "link image(s)" as the default? I didn't find anything in the Word settings, only totally useless detail settings. This "detail" seems to have been overlooked by Microsoft. :( I would provide you with a screenshot, but it would be in German anyway. I think the people who have used image linking in Word since v2007 will know what I mean. Thank you in advance! Best regards

    Read the article

  • Perfmon quick rundown

    - by anon
    I've known about Performance Monitor on Windows for quite a long while. I have now decided to set up scheduled performance monitoring of my entire system so I can find bottlenecks for future improvements. As you can imagine, this is going to run 24/7 so I can identify peak utilization. With Performance Monitor on Windows 7, for example, where are the logs stored (c:\perfmon)? Is there a log size limit? Even better, is there a website that can get me up to speed with scheduling and best practices for perfmon? (I don't need an explanation of what I can monitor.)

    Read the article

  • Disable word completion dialog when pressing escape in Safari

    - by Peter
    Behavior: Load Safari 5.0 on a Mac. Press Command+F to search for some text. Type something and find it on the page. Hit Esc to cancel the search. Irritatingly, you get the word completion menu rather than the search being cancelled. Is there any way to make Esc cancel the search, like it used to in Safari 4.0, instead of pulling up a word completion dialog? It's very annoying. Failing that, what's the best way to cancel the search with the keyboard? Note: this also happens in any text field, the search box, the location bar, etc.

    Read the article

  • Turning a running Linux system into a KVM instance on another machine

    - by Charles
    I have two physical machines that I wish to virtualize. I cannot (physically) plug the hard drives from either machine into the new machine that will act as their VM host, so I think that copying the entire structure of the system over using dd is out of the question. How can I best go about migrating these machines from their hardware to the KVM environment? I've set up empty, unformatted LVM logical volumes to host their filesystems, with the understanding that giving the VMs a real partition to work with achieves higher performance than sticking an image on the filesystem. Would I be better off creating new OS installs and rsyncing the differences over? FWIW, the two machines to be VM'd are running CentOS 5, and the host machine is running Ubuntu Server 10.04 for no particularly important reason. I doubt this matters too much, as it's still going to be KVM and libvirt that matter.

    Read the article

  • Running VMware ESX(i) on an Apple Xserve

    - by xzyfer
    So we're running VMware ESX(i) (I'm not completely sure which, as I'm out of the office) on Windows Server 2008. However, it turns out the machine we're running it on has serious hardware limitations; most importantly, it's restricted to 4 GB of RAM. We've since inherited a much more powerful server. The problem is that the new server is an Apple Xserve running, I believe, Snow Leopard Server. My question is: can I run VMware ESX(i) on an Xserve, or an equivalent? I've done some hardcore Googling, and the best that I can find is that it's not supported, but it might work, but there are no guarantees (this has been stated many times on the VMware forums by the VMware support staff). But all these search results are years old, so I can't find any recent answers regarding this. Has anyone accomplished this?

    Read the article

  • MVC 4 Authentication

    - by Aligned
    First: After searching for a while to figure out what’s new/different with MVC 4 and forms authentication, this is the best article I've found on the subject: http://weblogs.asp.net/jgalloway/archive/2012/08/29/simplemembership-membership-providers-universal-providers-and-the-new-asp-net-4-5-web-forms-and-asp-net-mvc-4-templates.aspx

    Some quotes from the article:

    “The ASP.NET Web Pages team designed SimpleMembership to (wait for it) simplify the task of dealing with membership”

    "WSAT is built to work with ASP.NET Membership, and is not compatible with Simple Membership. There are two main options there: Use the WebSecurity and OAuthWebSecurity API to manage the users and roles. Create a web admin using the above APIs. Since SimpleMembership runs on top of your database, you can update your users as you would any other data - via EF or even in direct database edits (in development, of course)"

    “If you want to use an existing ASP.NET Membership Provider in ASP.NET MVC 4, you can't use the new AccountController. You can do a few things:”

    “Universal Providers do not work with Simple Membership.” ~ this post (look for Bob.at.SBS’s answer) says Universal Providers are not needed for MVC 4 to work in Azure.

    I've been trying to figure out Forms Authentication in MVC 4. It's different from the past approach (aspnet_regsql). If you do File -> New Project -> MVC 4 -> Internet Application, you get a really nice template with the controller and model set up for you. However, the tables are different than those created by aspnet_regsql, and the ASP.NET Configuration tool (WSAT) wasn’t connecting to the data I had (it was creating an App_Data/aspnet.mdf file, which I didn’t see right away).

    Points of note:
    - The database tables are created in the SimpleMembershipInitializer class, when you first run your app, using Entity Framework 5 migration functionality.
    - The tables created are webpages_Membership, webpages_OAuthMembership, webpages_Roles, webpages_UsersInRoles, and UserProfile.
    - Web.config settings don’t seem to be needed.

    Scott Hanselman's post on Universal Providers was also useful, if somewhat outdated. Universal Providers and SimpleMembership are not compatible. http://www.asp.net/web-pages/tutorials/security/16-adding-security-and-membership – walk-through

    Read the article

  • On Developing Web Services with Global State

    - by user74418
    I'm new to web programming. I'm more experienced and comfortable with client-side code. Recently, I've been dabbling in web programming through Python's Google App Engine. I ran into some difficulty while trying to write some simple apps for the purposes of learning, mainly involving how to maintain some kind of consistent, universally accessible state for the application.

    I tried to write a simple queueing management system, the kind you would expect to be used in a small clinic or at a cafeteria. Typically, this is done with hardware: you take a number from a ticketing machine, and when your number is displayed or called you approach the counter for service. Alternatively, you could be given a small pager, which will beep or vibrate when it is your turn to receive service. The former is somewhat better in that you have an idea of how many people are still ahead of you in the queue.

    In this situation, the global state is the last number in the queue, which needs to be updated whenever a request is made to the server. I'm not sure how best to store and maintain this value in a GAE context. The solution I thought of was to keep the value in the Datastore, query it during a ticket request, update the value, and then re-store it with put. My problem is that I haven't figured out how to lock the resource so that other requests do not check the value while it is in the middle of being updated. I am concerned that I may end up with ticket requests that have the same queue number. Also, the whole solution feels awkward to me. I was wondering if there was a more natural way to accomplish this without having to go through the Datastore. Can anyone with more experience in this domain provide some advice on how to approach the design of the above application?
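    For reference, the read-update-write race described above is exactly what App Engine datastore transactions are for; a minimal sketch using the Python ndb API (the model and function names are made up for illustration):

    ```python
    # Sketch: a single counter entity bumped inside a transaction, so two
    # concurrent ticket requests can never read the same value and both write it.
    from google.appengine.ext import ndb

    class Counter(ndb.Model):
        value = ndb.IntegerProperty(default=0)

    @ndb.transactional
    def next_ticket(counter_key):
        counter = counter_key.get() or Counter(key=counter_key)
        counter.value += 1
        counter.put()
        return counter.value

    # Usage from a request handler (key name assumed):
    # ticket_number = next_ticket(ndb.Key(Counter, 'clinic-queue'))
    ```

    One caveat: the datastore limits sustained writes to roughly one per second per entity group, so a single global counter is fine for a clinic-sized queue but would not scale to very high ticket rates.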

    Read the article

  • Getting expired domain name - most effective route?

    - by kcdwayne
    There is a domain name I have been wanting for years that was used as a parked page (read: cybersquatted) and that has entered WHOIS's redemptionPeriod stage. The domain has been expired for 61 days now; after contacting the registrar, they informed me that it would stay in redemption for 75 days, after which it would either be sold to resellers or sent back to the public registry. (I have since sent a follow-up message requesting the reseller they use.) My question is: what is the best way to proceed? I know there is at least one competitor that would love to have this name, but I'm unsure if they even know it's expiring. I did not tell the registrar the domain in question, as they seem geared towards cybersquatting, and I do not trust them. Domain front running sucks. Should I use a backorder service? Should I just take my chances and try to grab it after 75 days? I checked an auction house by manually browsing their expired domains; it wasn't there.

    Read the article

  • What are your most useful TextExpander (or similar) snippets?

    - by P.Bjorklund
    TextExpander is a program that aims to save you time by auto-replacing snippets of text with the content of your choice, or, to quote their web site: "Save yourself time and effort by typing short abbreviations for frequently-used text and images." So, for instance, when you type ,h1 it will change it to <h1></h1> with the cursor placed between the opening and closing tags. After some searching I have yet to find a resource/forum thread/whatnot that discusses the uses of this marvelous program. I am therefore looking for your best snippets, or a link to a resource where I can find them. Oh, and one thing I can think of right away is sigw and sigp for my work/personal email signatures.

    Read the article

  • Running a service as root

    - by kovica
    I have a Java program that I use to automate the process of creating VPN settings for clients. The program calls a couple of bash scripts and creates and copies files around. I have to run it as the root user because the whole VPN config is under /etc/openvpn, and for that directory I need root privileges. On the same machine I have the Glassfish application server, and it will call the mentioned Java program. Glassfish runs under a non-root user. What is the best, most secure way of running the program as root, without entering a password of course, if I run it via sudo?
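    A common pattern for this, sketched below with assumed user and script names, is to wrap the privileged steps in one script and grant only that script password-less sudo via a drop-in edited with visudo:

    ```
    # /etc/sudoers.d/glassfish-vpn  (names and paths here are assumed examples)
    # Allow the glassfish user to run exactly one script as root, no password.
    glassfish ALL=(root) NOPASSWD: /usr/local/bin/make-vpn-config.sh
    ```

    The Java code then invokes sudo /usr/local/bin/make-vpn-config.sh rather than arbitrary commands, which keeps the privilege grant as narrow as possible; the script itself should not be writable by the glassfish user.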

    Read the article

  • Need to include Calendar and Email in own CRM system. Whose?

    - by PurplePilot
    I am writing a web-based application that needs to have some elements of CRM in it, but I cannot use an off-the-shelf CRM to do what I want. (Honestly, we have been through it all and it will not work.) Now, while Tasks, Calls, Meetings and Notes are straightforward, the idea of reinventing Mail and Calendars seems a waste of time and effort, and also unproductive, as most users already have their own and it simply adds to the complexity of my application and hacks users off. My thoughts are going around using Outlook and/or GMail/iCal and/or Mac Mail/iCal and/or Thunderbird and importing the relevant data, or, if possible, integrating it into the application. Any thoughts? Has anyone got any experience of this who can point me in a few directions? N.B. Not looking for an answer, as that is too complex; just some pointers and thoughts. Thanks. P.S. We did look at Sugar CRM as the basis for our project, and it is useful to get best practice from, but as I say it was not usable due to how we are structuring our software; not Sugar's fault.

    Read the article

  • Why do I have inconsistent network issues with my laptop's wireless?

    - by Jason
    I'm having trouble with my laptop Internet connection. It's patchy at best and resets or freezes a lot. The signal strength is also random. I thought it might be a driver issue but now I don't know. Three other computers using the same wireless network run well. I've switched out wireless routers so I don't think it's the router. I thought it might be the laptop's internal wireless card but I just bought an external USB network card and I'm still having problems.

    Specs:
    - Lenovo T-60p
    - Windows 7 Ultimate Edition
    - Patches/drivers are up to date
    - I only use one of the below at a time, disabling the other:
      - Intel PRO/Wireless 3945ABG v. 13.3.0.137 (Internal wireless)
      - Medialink Wireless-N USB 2.0 Adapter (USB wireless)

    Any ideas on what might be the problem?

    Read the article

  • model association or controller?

    - by andybritton
    I'm trying to create a Rails app that allows users to submit information about their pets. I've come to a point where my knowledge is limited and I don't know enough about what could be done or how, so I'm hoping this will be relatively easy to answer. At the moment I have a model called Pet; this model currently stores basic information like name, picture, etc., but it also holds more specific data like type, breed, date of birth, etc.

    What I would like to be able to do is create a page that can match various records without them having to be manually categorized, if that makes sense, so a user's pet could be matched to other pets with the same breed, age, etc. I've read about nested models, and I understand this information could be submitted to two models in one form, but I am not sure whether this could be done directly in a separate controller which would only be visible to users with pets in these matched "groups", if that makes sense.

    So, in essence, is it best practice to use one table to store all the information and just use a controller to match pets based on rows having the same values, or would it be simpler to have a form with a nested model and link two tables together? The main feature needs to be matching without a user having to create a group or categorize pets, so the second model would need to add IDs to an array instead of just creating more and more rows.

    Read the article

  • Cache that always returns immediate response?

    - by Col Wilson
    I have a web service that takes a while to build a response, despite being tuned as best I can. What I'd like is some sort of cache sitting in front of the service which would always return the last known value from the service, but at the same time pass the request back to the service to build an up-to-date response for the next request. I'm aware of the limitations that this puts on the freshness of the data, but you can assume that I'm happy to live with that. The technologies I'm using at present are Python and uWSGI behind nginx, but that need not be a limit to any solution you might suggest. Col
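    What is described here is essentially "serve stale while revalidating", which nginx's own proxy cache can approximate with proxy_cache_use_stale updating. A minimal sketch, assuming the uWSGI app answers plain HTTP on 127.0.0.1:8000 and that the cache path and zone name are placeholders:

    ```nginx
    # Sketch only: cache path, zone name, and upstream address are assumed.
    proxy_cache_path /var/cache/nginx/api levels=1:2 keys_zone=api_cache:10m max_size=100m;

    server {
        listen 80;

        location / {
            proxy_cache            api_cache;
            proxy_cache_valid      200 10s;         # entries count as fresh for 10 seconds
            # After that, keep serving the stale copy while one request refreshes it.
            proxy_cache_use_stale  updating error timeout;
            proxy_cache_lock       on;              # collapse concurrent misses into one upstream hit
            proxy_pass             http://127.0.0.1:8000;
        }
    }
    ```

    If the app is wired up with uwsgi_pass rather than plain HTTP, the equivalent uwsgi_cache_* directives serve the same purpose.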

    Read the article

  • High-level strategy for distinguishing a regular string from invalid JSON (i.e. JSON-like string detection)

    - by Jonline
    Disclaimer on absence of code: I have no code to post because I haven't started writing; I was looking for more theoretical guidance, as I doubt I'll have trouble coding it but am pretty befuddled about which approach(es) would yield the best results. I'm not seeking any code, either; just direction.

    Dilemma: I'm toying with adding a "magic method"-style feature to a UI I'm building for a client, and it would require intelligently detecting whether or not a string was meant to be JSON, as against a simple string. I had considered these general ideas:

    1. Look for a sort of arbitrarily-determined acceptable ratio of the frequency of JSON-like syntax (i.e. regex to find strings separated by colons; look for colons between curly braces, etc.) to the number of quote-encapsulated strings plus nulls, bools and ints/floats. But the smaller the data set, the more fickle this would get.
    2. Look for key identifiers like opening and closing curly braces. I'm not sure if there even are more easy identifiers, and this doesn't appeal anyway because it's so prescriptive about the kinds of mistakes it could find.
    3. Try incrementally parsing chunks, such as those between curly braces, and see what proportion of these fractional statements turn out to be valid JSON; this seems like it would suffer less than (1) from smaller datasets, but would probably be much more processing-intensive, and very susceptible to a missing or inverted brace.

    Just curious if the computational folks or algorithm pros out there have any approaches in mind that my semantics-oriented brain might have missed.

    PS: It occurs to me that natural language processing, about which I am totally ignorant, might be a cool approach; but if NLP is a good strategy here, it sort of doesn't matter, because I have zero experience with it and don't have time to learn and then implement it; this feature isn't worth that to the client.
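    A minimal sketch of idea (1) above in Python, treating valid JSON as the easy case and falling back to structural hints; every threshold here is an arbitrary guess to be tuned against real input:

    ```python
    # Heuristic sketch, not a validator: "was this string probably meant to be JSON?"
    import json
    import re

    def looks_like_json(text):
        stripped = text.strip()
        if not stripped:
            return False
        try:
            json.loads(stripped)        # actually valid JSON: accept immediately
            return True
        except ValueError:
            pass
        if stripped[0] not in "{[":     # JSON-ish text usually starts as a container
            return False
        # Density of '"key":' pairs and brackets relative to total length.
        pairs = len(re.findall(r'"[^"]*"\s*:', stripped))
        brackets = sum(stripped.count(c) for c in "{}[]")
        return pairs >= 1 and (pairs + brackets) / float(len(stripped)) > 0.02
    ```

    As the post predicts, the ratio test gets fickle on very short inputs, which is where the chunk-wise parsing of idea (3) would help.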

    Read the article

  • How to calculate proper inode/block sizes for a Linux filesystem

    - by Donatello
    I have an old ReiserFS filesystem which I'm going to convert to ext3. The problem I have is determining the proper block and inode sizes for this partition. The partition is 44 GB and has to hold 3,000,000+ files of sizes between 1 KB and 10 KB; how can I figure out the best ratio of inodes and block size? The command below is something I tried which seems OK, but it makes copying files incredibly slow.

    mkfs.ext3 -t ext3 -c -c -b 1024 -i 4096 -I 128 -v -j -O sparse_super,filetype,has_journal /dev/sdb1

    Thanks.
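    As a rough worked example of the arithmetic (approximate figures, not a recommendation): 44 GB spread over 3,000,000 files is about 14.6 KB of disk per file, so a bytes-per-inode ratio of 8192 gives roughly 44 GiB / 8 KiB, i.e. about 5.8 million inodes, which leaves comfortable headroom over the file count. A hedged variant of the command along those lines:

    ```
    # Assumed example only; note that a doubled -c triggers a very slow
    # read-write bad-block scan at mkfs time (a single -c is read-only).
    mkfs.ext3 -b 2048 -i 8192 -I 128 -j -O sparse_super,filetype,has_journal /dev/sdb1
    ```

    With files of 1-10 KB, a 2 KiB block size wastes a little less space than 4 KiB while still cutting the per-file block count compared to 1 KiB blocks; the copy speed itself is more likely bound by the sheer number of small files than by these parameters.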

    Read the article

  • Web-Server directory permissions

    - by MLS
    Hello all, I would like some help understanding web server directory permissions (Apache, CentOS, PHP, MySQL). For example, I have multiple sites in /var/www/html, in paths like /var/www/html/www_domainname_com, and inside each site I might have a path like /lib/mysql/ holding things like PHP connection code, database config, etc. What should my permissions be so that someone cannot just browse to that directory? Should I just .htaccess them? I have apache:apache as the owner of all my web directories. Can I prevent someone from crawling certain directories of my web server? I have a robots.txt, but what is to say a crawler obeys it?

    So, to sum up:
    1. What is the best owner/permission set for my sensitive files that the web server or PHP or MySQL needs, but I don't want people browsing to?
    2. Can I prevent straight-out crawling of portions of the site?
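    One hedged sketch of the usual answer to the first point, using Apache 2.2-style syntax and the example path from the question: keep the files readable only by root and the apache group, and deny web access to the directory outright (moving config files outside the DocumentRoot entirely is safer still):

    ```apache
    # In the vhost or main config (Apache 2.2 syntax); the path is the example one.
    <Directory "/var/www/html/www_domainname_com/lib">
        Order allow,deny
        Deny from all
    </Directory>
    ```

    On the crawling point, robots.txt only affects well-behaved crawlers; a Deny rule like the one above is what actually stops requests for those paths, regardless of who is asking.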

    Read the article

  • How can I fix a desktop right click delay in Windows 7?

    - by Xm7X
    I am looking for information on how to properly troubleshoot a desktop right-click delay in Windows 7 after third-party apps have been installed. I did find the program ShellMenuView, which will allow me to disable Explorer's context menu items, so I can now use process of elimination to find the problem. Is this the best way to fix this issue? I would also like to avoid installing more apps to solve the problem. Can I go directly to the registry to fix this? Thanks

    Read the article

  • colliding btRigidBody objects behave strangely when moving slowly

    - by Piku
    I'm trying to use Bullet Physics in my iOS game. The engine appears to be correctly compiled, in that the demos work fine. In my game I have the player's ship and some enemy ships. They're defined as btRigidBody objects and btCollisionObjects, and I'm using btSphereShapes for collision. At 'fast' speeds, collisions appear to happen sensibly: things collide and nothing goes 'weird'. If the speeds are very slow, though, and the player's ship touches a non-moving object, the collision happens, but then the player's ship moves at incredible speed over the next few frames and appears a long distance from where it collided, completely out of proportion to the speed it was moving at before impact.

    To move things around I'm using setLinearVelocity() each frame, ticking the physics engine, then using getMotionState() to update the rendering code I have. Part of the issue might be that I don't quite understand how to set the correct mass or what the best speeds are to use for anything. I'm mostly sticking numbers in and seeing what happens. Should I be using Bullet in this way, and are there any guidelines for deciding on the mass of objects? (Am I right in assuming that in collisions heavier objects will force lighter objects to move more?)

    Read the article

  • How to synchronize a whole Ubuntu?

    - by Avio
    I think that the time is ripe to have my whole Ubuntu synchronized just as my Dropbox folder is. Given that we are always talking about files and directories, what's the difference between my Documents folder and my /usr system directory? Almost none, except for their location. In fact, I think that there is just one big issue that prevents people from having their beloved installations mirrored wherever they go: symlinks. Dropbox, Google Drive, Ubuntu One, SugarSync, SkyDrive: none of these services supports symlinking. This means that if I push a symlink into one of the synced folders, locally the symlink is kept as is, but remotely (in the cloud or on the other synced machines) the symlink is resolved to the actual file that was originally pointed to. This completely disrupts Linux installations, so these services can't be used for this purpose.

    So the question is: does anybody know a way to achieve this? A whole Ubuntu, always synchronized with a remote running copy, but still locally stored on both disks? My best guess is that I could use NFS. But the main difference between Dropbox and NFS is that NFS is a remote filesystem that always forces you to access the files remotely, while Dropbox pushes modifications to local filesystems (and thus would perform better). I've also heard about NFS caching. Does anybody know if this solution could approximate Dropbox in this sense?

    P.S. I know that /boot, /dev, /proc, /run, /tmp and device-specific mountpoints in /mnt and /media will have to be left out of the sync mechanism. What I'm interested in is the principle. Can this be done with reasonable performance, given reasonable resources (e.g. ~1 Mbps upload bandwidth and a public IP address)?

    Read the article

  • SEO consideration for duplicate sites

    - by Malk
    I am building a brochure-ware website for a company that sells products all across the world. The site needs to ask the user what region they are in before they use it; there are 5 regions. This is because different products are offered to different regions, and each region may or may not want to customize its own content. However, at launch, and likely forever, most of the pages will be exactly the same apart from what is listed in the footer and in the product selection menu.

    My question is how I should structure the sitemap for this site for best SEO. Should I be concerned with duplicate content penalties and/or cannibalizing the site's presence on the SERP?

    Some considerations:
    - The client wants to be able to print links directly to region-specific content, bypassing any prompt for the user to select a region (to ensure they land on the target page).
    - The client cannot have a 'default' region, so the user must have a region specified.
    - "Clean" URLs are important, but there is wiggle room.
    - The client does not want each region to have its own domain.
    - There will be a link on the page to allow users to specify a different region.
    - The client is not concerned with localization ...at this time.
    - Some products are available in multiple regions.

    A quick list of options I am considering:
    - www.site.com/region/page
    - region.site.com/page
    - www.site.com/page?region (no cookie; pages require the parameter. If visited without it, the user must select a region)
    - www.site.com/page (using a cookie and a splash screen if needed; could pass a parameter in to set the region for direct linking)

    Thanks in advance for your advice.

    Read the article
