Search Results

Search found 24350 results on 974 pages for 'bug a lot'.


  • Corrupted files, hard drive test?

    - by all-R
    Hi guys, I'm currently on a MacBook with a 1 TB external hard drive connected through a USB hub, which is plugged into the MacBook. The problem is that the disk, which is partitioned in two (one HFS+ and one NTFS), keeps getting corrupted. Most recently it was the HFS+ partition: I could not repair it with Apple's Disk Utility, but I was able to back up my files. Does this mean my hard drive is failing? Or could the USB hub be the cause? I also keep my whole iTunes library on the external drive (the HFS+ partition) and have done a lot of transfers lately: adding files, removing them, and so on. The last time, the partition got corrupted after a large batch of deletions. If anybody has an idea of what to check first, or what could be causing the problem, I would appreciate it :) Thanks!
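
    A quick first check, assuming the drive appears as /dev/disk2 (verify the identifier with diskutil list): verify the file system from the command line, and, if the USB bridge passes it through, read the drive's SMART health. The volume and device names below are examples.

        # List attached disks and find the external drive's identifier
        diskutil list

        # Verify the HFS+ file system (volume name is an example)
        diskutil verifyVolume /Volumes/MyExternalHD

        # SMART health report; requires smartmontools, and many USB hubs and
        # enclosures do not pass SMART data through, so this may fail over the hub
        smartctl -a /dev/disk2

    If SMART reports reallocated or pending sectors, suspect the drive itself; if the drive tests clean, the USB hub is the next thing to take out of the chain.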


  • Game programming: C# or C++?

    - by Chronix
    OK, so the FAQ says: "What language should I learn next? (Unless you have a specific requirement and don't know which language meets that requirement.)", so I guess this question is within the rules. I've decided that what I really want to do is game programming. The question is: as an 18-year-old teaching myself to program, which language is better suited for me, C# or C++? (I should state that I don't care about Unix, because I believe Windows will remain the most-used OS.) I know the basics of C++, but nothing about C#. I know that C++ has far more tutorials, guides, libraries, and so on available, while C# has fewer but is much easier for a single person to learn and use to build a program; on the other hand, I've read that even an obfuscated C# program is easy to decompile back to source. I want to focus heavily on the basics first. I must say I have a preference for C++ simply because it feels more suitable to me, but I have to consider that I work alone, so even if I like it, it may not be the best choice. I'm really not sure which one to go for; I've read a lot of threads on various websites, and it looks like C# is becoming more popular than C++. That said, I want to do game programming specifically, so I need arguments that go beyond "C# is easier because .NET handles memory and C++ doesn't", because that much I've already found. I hope the thread won't be closed, because I've read the FAQ and I do have a specific requirement. Thank you, and if you need more details feel free to ask, but I think saying "game programming" and that I'd be working alone should be enough!


  • How to build an API on top of an existing Rails app with NodeJs and what architecture to use?

    - by javiayala
    The explanation: I was recently hired by a company that has an old RoR 2.3 application with more than 100k users, a strong SEO strategy with more than 170k indexed URLs, native Android and iOS applications, and other custom-made mobile and web applications that rely on a not-so-good API from the same RoR app. The company recently merged with a company from another country as a strategy to grow the business and the profit. They have almost the same stats, a similar strategy, and their own mobile apps. We have just decided that we need to merge the data from both companies and start a new app from scratch, since the RoR app is too old and heavily patched, and the app from the other company was built on a custom PHP framework with no documentation. The only good news is that both databases are MySQL and have a similar structure.

    The challenge: I need to build a new version that can handle a lot of traffic, preserves the SEO strategies of both companies, serves 2 different domains, and has a strong API that can support the legacy mobile apps from both companies and be ready for a new set of native apps. I want to use RoR 3.2 for the main web apps and NodeJS for a RESTful API. I know that I need to be very careful with the mobile apps and handle multiple versions of the API. I also think I need a service that can handle a lot of IO requests, since the apps are heavily used to create orders for restaurants at certain times of the day.

    The questions, with all this in mind: What type of architecture do you recommend I follow? What gems or Node packages do you think will work best? How do I build a new Rails app while keeping the same database structure? Should I use NodeJS to build the API, or just build a new service in Ruby? I know I'm asking a lot from you guys, but please help by answering any part you can, or by pointing me in the right direction. All your comments and feedback will be extremely appreciated! Thanks!


  • Can I eventually consider myself a professional developer if I don't have a CS degree? [on hold]

    - by heltonbiker
    Question first, context later: if I am a dedicated, self-taught programmer, always seeking out top-quality books AND READING THEM, while successfully applying all that new knowledge in my current work, can I call myself (and offer my work as) a PROFESSIONAL developer? How limiting, or how common, is that nowadays? I am afraid that no matter how hard I study and practice, it could be too difficult to compete with "real", college-graduated developers, and potential employers might have doubts about investing in someone without the degree. Now, context: my former profession is in the healthcare sector; then I studied mechanical engineering (quit in the middle), then product design (master's degree), and I ended up working (very happily) at an engineering company that manufactures medical devices. For more than two years now my main activity at this company has been software development. The devices contain software, and we gave up contracting the development out (domain knowledge is needed, and the communication cost was too high). My current company sees a lot of value in what I do, but I cannot afford the risk of depending on this single company for the rest of my life, you get it. Yet a lot of job offers require some minimal formal education, usually a CS degree. The fact is that I am sure this is my target profession. I don't plan to move to another area, and it is a pleasure to dive into books that normal people would consider unreadable, but I'm 36 years old and can't see going back to college as a viable alternative.


  • How is the MAC address on a computer determined?

    - by Zero Stack
    While imaging some computers today, I started to wonder: what if the LAN MAC addresses on two different computers matched? That would cause some problems. I later came to understand that the 48-bit MAC address space contains 2^48, or 281,474,976,710,656, possible addresses (in other words, a lot of networking devices). How are these MAC addresses determined? Will we ever run out of them? (I know the second question is speculation, but there are a lot of devices that require a MAC address...) Do MAC addresses get recycled?
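
    A point that makes collisions unlikely by design: the first three octets of a MAC are an OUI (Organizationally Unique Identifier) that the IEEE assigns to a manufacturer, and the manufacturer fills in the remaining three octets, typically burned in at the factory. A quick way to inspect one on Linux (the interface name is an example):

        # Print the MAC of an interface; the first 3 octets are the vendor's OUI
        ip link show eth0 | awk '/link\/ether/ {print $2}'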


  • What is an effective way to familiarize yourself with a new application in a new language? [closed]

    - by codeninja
    Possible Duplicate: How do I pick up a new language quickly, given I know several others? I started a new job working on an application I'm only vaguely familiar with, and it's in Perl! I come from a PHP and Java background, so while I understand the basics, there are a lot of nuances in Perl that make it troublesome. Update: I'm supposed to be a UI developer, but the small size of the office requires me to learn and do a lot more than just JavaScript. That was slightly unexpected in some respects, and I'm thinking about what approach to take. So far I've been sifting through the code to understand what each part does, printing out copies of the code, and trying to look up APIs I'm not familiar with. I don't know how effective this process is, and I feel like it's going to take some time; I don't want my new employers to feel like I'm not being productive. Does anyone have ideas or approaches for this kind of situation? I've read some of the questions about learning new languages, but I'm curious whether anyone has had this experience with Perl specifically.
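
    One concrete help, since Perl ships its documentation with the interpreter: you can look up unfamiliar builtins and modules straight from the terminal, and even ask Perl to re-print terse code in a more explicit form. A few commands worth knowing (the module and script names are examples):

        perldoc perlintro            # language overview for newcomers
        perldoc -f map               # docs for a specific builtin (here: map)
        perldoc List::Util           # docs for a module the codebase uses
        perl -MO=Deparse script.pl   # reprint terse/idiomatic code more explicitly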


  • MySQL transfer/update (a bit specific)

    - by Jeff
    Before posting I dug through the whole site but didn't find help for my problem, so I hope someone can... Facts:

    - 30 GB MySQL database on a remote server (about 20,000,000 rows)
    - the data is updated once weekly on the local network (MySQL)
    - I need to transfer/replace the remote database with the locally updated one
    - the connection is about 2 MB/s (megabytes, not megabits) up/down

    The point is that I can't have any downtime on the remote MySQL server. What I've tried so far:

    - Navicat data sync: OK, but takes about 3 days to finish
    - dbForge: OK, but needs 5 days to finish
    - mysqldump, transfer to the remote server, and execution: about a day, but a lot of downtime
    - rsync of the database folder under /mysql/lib/MY_DATABASE: 4 hours, but afterwards I always have to run a repair on the remote server, which takes about 2 hours, plus a lot of downtime
    - mysqldump piped from the command line directly to the remote server: still not satisfied, many problems
    - MySQL replication: slow

    I could list more things I've tried... Anyway, what is the very best way to refresh the remote MySQL database on a weekly basis with zero downtime and no huge server load? If you have any idea, please share.
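
    One pattern that may fit, sketched below with placeholder names: load the compressed dump into a staging schema on the remote server while the live schema keeps serving, then swap tables with RENAME TABLE, which MySQL performs atomically, so the only interruption is the instant of the rename. (--single-transaction gives a consistent snapshot for InnoDB tables; create the staging schema first.)

        # Dump locally, compress in flight, load into a staging schema remotely
        mysqldump --single-transaction --quick mydb \
          | gzip -c \
          | ssh user@remote.example.com 'gunzip -c | mysql mydb_staging'

        # Then swap each table atomically on the remote server (names are examples):
        #   RENAME TABLE mydb.orders         TO mydb.orders_old,
        #                mydb_staging.orders TO mydb.orders;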


  • How to keep programs built from source up to date?

    - by wizard
    I'm designing a new server setup for hosting multiple websites (shared hosting for my clients over at SliceHost). I've recently moved away from the traditional LAMP setup and chosen Ubuntu, Nginx, php-fpm, and MySQL. I like it a lot better than my old Apache, suPHP, and MySQL setup: it works great, provides isolation between sites, and uses substantially less memory. However, I have one major maintenance problem. In order to have a recent version of Nginx, and in order to use php-fpm at all, I've had to compile these programs from source. I see this as a problem because keeping track of updates and build configurations will end up being a lot of work. For two programs (and a patch) I can handle it, but this setup doesn't seem like it would scale to many packages and servers. Are there good ways to manage this situation? I'm sure people do this all the time.
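
    One approach on Ubuntu is to wrap each source build in a real .deb with checkinstall, so dpkg tracks what is installed and an upgrade becomes "build a new package, install it over the old one". A rough sketch; the configure flags are placeholders for whatever your current builds use:

        ./configure --prefix=/usr --with-http_ssl_module
        make
        sudo checkinstall --pkgname=nginx-custom --pkgversion=1.0.4 --default

        # dpkg now knows about the build and can list, upgrade, or cleanly remove it:
        dpkg -l nginx-custom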


  • Economical way to get many hard drives into a rack mount?

    - by Industrial
    Hi everyone, please bear with me, as I'm a bit of a newcomer to 19" rack-mounted equipment. I've thought a fair bit lately about the best way of getting 4 or 6 2.5" hard drives into my rack, and right now I'm slightly confused about what the best (most economical) solution would be. After scouting the market, I've found disk array units that offer built-in RAID, a lot of drive slots, and a truckload of geek cred, but at a price that just isn't going to fit my budget. I've also found cute adapters that take two 2.5" drives in one 3.5" slot, but then I would obviously need a chassis with a lot of 3.5" bays to make that work. So what is the most economical way to house my hard drives in my rack?


  • System File Checker vs Service Pack Reinstall

    - by Nixphoe
    When trying to repair slow workstations, I've found that running sfc /scannow helps quite a lot in a few of my environments that run really old computers. I've also seen recommendations to reinstall the latest service pack after installing software, to help keep the system stable. That makes sense, as it would replace a lot of the DLL files with the ones that ship with the service pack. The two approaches seem to do much the same thing, except that SFC sometimes asks for an installation disc, whereas the service pack reinstall does not. What is the main difference between the two?


  • How often do netbook parts break?

    - by kurresmack
    I'm looking for figures on what percentage of each component breaks within a year. For example, what percentage of netbook RAM modules can be expected to fail within a year? This is a lot to ask, I know, but I really do need some facts on what to expect to break when you manage a lot of netbooks. I'd be glad if someone had hard figures that could be backed up with sources. Only netbooks are of interest here.


  • Only show changed files while syncing from ext4 to NTFS

    - by qox
    I would like rsync to print modified and deleted files. The verbose option (-v) does print modified files, but also the list of subdirectories, perhaps because touched directories are considered modified. Since I sync a lot of files across a lot of subdirectories, it's impossible to see the actual changes. So, is there a way to stop rsync from printing directories? I'm not looking for answers of the grep -v "*/$" kind, since those would also exclude new directories. The command I am using: rsync -avh --delete /media/data/src /media/data/bkp. Every time, it prints the list of all directories: src/dir1/ src/dir1/sdir1/ src/dir1/sdir2/ src/dir2/ ... Thanks for your help. EDIT: OK, after some intensive tests: it doesn't print all the directories when syncing from an ext4 partition to ext4, or from NTFS to NTFS; it only does so when syncing from ext4 to NTFS. And the options '-c' and '--omit-dir-times' don't change that.
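
    One workaround, if filtering the output is acceptable: with --itemize-changes every line starts with a change summary, and a pre-existing directory reported only for attribute/timestamp changes shows up with a ".d" prefix. Dropping those lines keeps newly created directories (itemized as "cd+++++++++") and deletions (prefixed with "*deleting"):

        # -i / --itemize-changes prefixes each line with what changed;
        # ".d" lines are existing directories with only attribute/time updates
        rsync -ah --delete --itemize-changes /media/data/src /media/data/bkp | grep -v '^\.d'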


  • Does programming in your free time show that programming is your passion? If not, can you still be a good programmer? [closed]

    - by SonofWatson
    Possible Duplicate: I don't program in my spare time. Does that make me a bad developer? A lot of blogs and advice on the web seem to suggest that in order to become a great developer, doing just your day job is not enough. For example, you should contribute to open source projects in your spare time, write smartphone apps, etc. In fact, a lot of this advice seems to suggest that if you don't love programming enough to do it all day long, then you're probably in the wrong career. That doesn't ring true with me. I enjoy my work, but when I come home from the office I'm not in the mood to jump straight back onto the computer and start coding away until bedtime. I only have a certain number of hours of free time each day, and I'd rather spend them on other hobbies, seeing friends, or going outside than in front of the computer. I do get a kick out of programming, and I do hack around outside of work occasionally. I'm committed to my personal development and spend time reading tech blogs and books as a way to keep learning and becoming better. But that doesn't extend as far as wanting to use all my spare time for coding. Does this mean I'm not a 'true' software developer at heart? Is it possible to become a good software developer without doing extra work outside your job? I'd be very interested to hear what you think.


  • When you’re on a high, start something big

    - by BuckWoody
    Most days are pretty average – we have some highs, some lows, and just regular old work to do. But some days the sun is shining, your co-workers are especially nice, and everything just falls into place. You really *enjoy* what you do. Don’t let that moment pass. All of us have “big” projects that we need to tackle. Things that are going to take a long time, and a lot of money. Those kinds of data projects take a LOT of planning, and many times we put that off just to get to the day’s work. I’ve found that the “high” moments are the perfect time to take on these big projects. I’m more focused, and more importantly, more positive. And as the quote goes, “whether you think you can or you think you can’t, you’re probably right.” You’ll find a way to make it happen if you’re in a positive mood. Now – having those “great days” is actually something you can influence, but I’ll save that topic for a future post. I have a project to work on. :)


  • What kind of redirect (301 or 302) for an email links tracker?

    - by MaxiWheat
    We are developing an email-sending application (à la MailChimp). Hyperlinks inserted by our users in the emails they want to send are replaced by a tracking URL on our application (https://ourdomain.com/trackingurl?blablabla), which then redirects the email reader to the original URL the user included in the email. This allows us to record statistics about link clicks. Until now we used 301s for those redirects, but we noticed that Google began indexing pages on our application which are in fact redirects to other domains. (The title and snippet in the Google results are from the other domain, but the green link is from our application.) We took action by adding those URLs to our robots.txt, but Google seems to take forever (months!) to remove them from its index, and removing them by hand in Webmaster Tools would take a lot of time, since there are a lot of them. I would like to know which kind of HTTP redirect (301 or 302) is best suited for this kind of operation. Do you think switching to 302 redirects could improve the situation, since we don't really want Google to index redirect links from our clients' emails?
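
    Whichever status code you pick, the reliable way to keep the tracking URLs out of the index is to send an X-Robots-Tag: noindex header on the redirect response itself; Google honors that header on any response, including 3xx. You can check what the tracker currently returns with curl (the URL parameters below are placeholders):

        # -I requests headers only; inspect the status line and Location header
        curl -sI 'https://ourdomain.com/trackingurl?id=12345'

        # What you would want to see for a tracking link:
        #   HTTP/1.1 302 Found
        #   Location: https://customer-site.example.com/landing-page
        #   X-Robots-Tag: noindex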


  • Can an SSD notify the hosting OS that its wear level is getting high?

    - by Tony_Henrich
    I read a lot about SSDs, and I am interested in them for server use. My biggest concern is their reliability: a lot of writes shortens their life span. I could mitigate this problem if I could run some kind of diagnostics on the SSD on a regular basis, or if the SSD could automatically warn the OS that its reliability is reaching a critical level. Think of S.M.A.R.T., or software like SpinRite, for SSDs. Does anything like this exist now? Which kinds/brands of SSD do this? I don't mind swapping out a tired SSD for a newer one once in a while, and I assume an SSD's life is measured in years rather than a few months. For me, the improved performance will pay for the SSD over and over. I am planning to use plenty of RAM as well.
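
    This does exist: SSDs report wear through SMART attributes, although the attribute numbers and names are vendor-specific (Intel drives, for example, expose a Media_Wearout_Indicator that counts down from 100). A check you could run from cron, with a placeholder device name:

        # Dump the vendor SMART attributes; look for the wear/endurance attribute
        smartctl -A /dev/sda

        # Overall health verdict (prints PASSED or FAILED)
        smartctl -H /dev/sda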


  • OBIEE Capacity Planning

    - by THE
    I cannot even recall how many times a customer has asked me what size machine they should buy to run our software. Unfortunately, Tech Support is not really the right place to answer that question, as a purchase decision is closely tied to the answer. Hence, Tech Support has been limited to the answer: "The biggest machine you can afford." Many customers were unhappy with that and have tried to get us to be more precise, which causes a lot of explanation and lengthy discussion; in the end, no one is wiser or happier. Therefore I am happy to report that, at least for OBIEE, the decision has just been made a whole lot easier. Have a look at the note Oracle BI EE 11g Architectural Deployment: Capacity Planning (Doc ID 1323646.1). The document attached to that note gives you a good overview of the sizing of the machines that Oracle recommends for running OBIEE, be it a small installation or a bigger distributed one. If you have any more questions about this topic and the machines we recommend, get in contact with Oracle Consulting or speak to your sales representative.


  • How many XMLHttpRequests are too many for a PC to handle?

    - by Uri
    I'm running MediaWiki under Apache on a regular PC running Vista (I don't know the exact specs, but I'm guessing at least a Core 2 Duo at 2 GHz, with a broadband connection of at least 500 kb/s, probably 1 Mb/s). I want to use the MediaWiki API to send a lot of requests to this server. Most of the time the requests will be sent over the LAN (but sometimes over the internet). I'm talking thousands of requests every few seconds in the worst case. (Many of these requests may repeat themselves; I guess some sort of cache would help.) Will the server handle this, or do I need a stronger/dedicated computer? I'm not looking for a specific yes/no; I just want to get an idea of what computer configuration will support how many requests per second. Thanks
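
    Rather than guessing, you can measure what the box sustains with ApacheBench, which ships with Apache. Point it at a representative API call; the URL, path, and counts below are only examples:

        # 2000 requests total, 100 concurrent, against a typical API query
        ab -n 2000 -c 100 "http://server/w/api.php?action=query&titles=Main%20Page&format=json"

        # In the report, watch the "Requests per second" and "Failed requests" lines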


  • Scale an image with unscalable parts

    - by Uko
    Brief description of the problem: imagine having one or more vector pictures with text annotations along their sides, outside the pictures. The task is to scale the whole composition, preserving the aspect ratio, so it fits some viewport. The tricky part is that only the pictures are scalable, not the text: the distance between the text and the image remains relative to the whole image, but the text size is always constant. Example: assume our total composition is two times larger than the viewport. We could just scale it by 1/2, but because the text has a fixed font size, the text parts end up larger than expected and no longer fit in the viewport. One option I can think of is an iterative process in which we repeatedly rescale the composition until the difference between it and the viewport falls within some tolerance. But this algorithm is quite costly, as it involves reworking the graphics, and the image may be composed of many components, which leads to a lot of matrix computations. What's more, this solution seems hard to debug, extend, etc. Are there any other approaches to solving this scaling problem?
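
    The iteration can be avoided if, as described, only the picture (and its relative offsets) scales while the text contributes a fixed-size band: the composed width is then linear in the scale factor, so the factor can be solved for directly. With scalable extent W x H, total fixed text extents T_w and T_h (the horizontal and vertical space the annotations add), and viewport V_w x V_h, a minimal derivation:

        s_w = (V_w - T_w) / W
        s_h = (V_h - T_h) / H
        s   = min(s_w, s_h)    % fits both axes while preserving aspect ratio

    One layout pass at scale s then lands exactly in the viewport, with no iterative rescaling of the graphics.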


  • What constitutes "commercial purposes"?

    - by RoboShop
    I'm looking at this license. It says that I can use the software for "non-commercial purposes". What does that mean? I see that on Stack Exchange, under the network profile, there is a graph that tracks your reputation across your Stack Exchange accounts. It uses a control called Highcharts, which has a paid version and a Creative Commons-licensed version. So would Stack Overflow count as a commercial site? We don't pay to use the site, but obviously it makes money from ads, etc. Then again, there are a lot of sites with ads that won't necessarily make a profit; the ads may only be offsetting their costs. But even then, you could argue that even if the ads only offset costs, a lot of IT companies run at a loss in order to build a big enough customer base. So where is the line here? Is it any website on the internet? Any website that has ads? Any website that turns a profit?


  • Using Subdomains for Newly Regional Company

    - by Taylord22
    The company I work for is expanding its business into new territories. I've got a lot of stabilization to do in the region/state where we're one of the most well-known companies of our kind. Currently we have 3 distinct product lines, distinguished by 3 separate URLs. This hurts the user flow of our site, so we'd like to clean it up before launching our products into the various regions. The business has decided to grow into 5 new states (one of them consisting of a single county only), none of which will feature all 3 products. Our home-base state is the only one that will have all 3 products this year. My initial thought was to use subdomains to separate the regions; that way we could use a canonical tag to stabilize the root domain (which would feature home-state content, plus support content for all regions) and steer clear of potential duplicate-content penalties, since our product content will be nearly identical across the regions for the first year. I second-guessed myself by thinking it was perhaps better to use a "product.root/region" URL structure instead. And I'm currently stuck wondering whether it would be better to build out subdomains for both products and regions, using one modifier or the other as a funnel/branding page into the other. For instance, a user lands on "region.root.com" and sees exactly what products we offer in that region: basically, a tailored landing page. Meanwhile, the bulk of the product content would actually live under "product.root.com/region/page". My head is spinning. While searching for similar questions, I also bumped into a reference to another tag meant to be used in some cases similar to mine. I feel like there are a lot of risks involved in this subdomain strategy, but I also can't help but see the benefits in the user flow.


  • How to maximize parallel download from S3

    - by StCee
    I have a lot of images to load from Amazon S3 on a single page, and sometimes it takes quite some time to load them all. I've heard that splitting the images across different subdomains helps parallel downloads, but what is the actual implementation of that? It's easy to split by role, with subdomains like static, image, etc., but should I make something like 10 subdomains (image1, image2, ...) to load, say, 100 images? Or is there a cleverer way to do it? (By the way, I am considering using memcache to cache the S3 images; I am not sure if that is possible. I would be grateful for any further comments. Thanks a lot!)
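
    For sizing: browsers of this era open roughly six parallel connections per hostname, so two to four shard hostnames capture most of the benefit, and ten would mostly add DNS lookups. The implementation detail that matters is assigning each image to its shard deterministically, for instance by hashing the file name, so a given image always loads from the same hostname and stays cacheable. A sketch with placeholder hostnames (each would alias the same content, e.g. via a CDN distribution that accepts multiple CNAMEs):

        # Pick one of 4 shard hostnames from a hash of the file name
        name="photos/cat_042.jpg"
        shard=$(( $(printf '%s' "$name" | cksum | cut -d' ' -f1) % 4 + 1 ))
        echo "https://image${shard}.example.com/${name}"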


  • Why are browsers so heavy?

    - by Kaivosukeltaja
    Back in 1998 I had a computer with a 233 MHz Pentium MMX CPU and a graphics card with no 3D acceleration, and it was able to run games like Quake II at a decent frame rate. My current computer has vastly more performance and a mid-range GPU, yet struggles to reach 20 FPS when rendering a single model inside a skybox with WebGL. Even regular pages with lots of 2D CSS animations bring many modern computers to their metaphorical knees. As a web developer I understand there's a lot going on in a web page, but not what makes it that heavy. Modern browsers compile JavaScript to native machine code before running it, and rendering into a canvas element shouldn't trigger DOM rebuilds, so in theory it should be a lot faster than it is. What am I missing here? Is it possible to avoid or minimize whatever is making browsers slow, so that I can build more efficient websites?


  • Creating my own PHP framework

    - by onlineapplab.com
    Disclaimer: I don't want to start any flame war, so no names of any frameworks will be mentioned. I've used quite a few of the existing PHP frameworks, and my experience in each case was similar: everything is nice at the beginning, but the moment you require something non-standard you run into a lot of problems fixing otherwise simple issues. In the frameworks following the MVC design pattern, there are issues with the implementation of each layer; for example, a lot of coding goes into the model and data-access layers when using an ORM, while the presentation layer is not much more than plain .phtml files. Some frameworks use their own wrappers around existing PHP functionality, in some cases severely limiting the original functionality. Depending on the framework, you can have additional problems such as a lack of documentation, a slow or non-existent development cycle, and, last but not least, poor speed. A while ago I wrote my own framework, and while it does its job and has been used for a few different applications, after a couple more years of experience with PHP it no longer seems like a perfect piece of coding. I could write a new framework and use the additional experience I've gathered over these years to make it better; on the other hand, I'm aware that there are plenty of better programmers working on creating and upgrading the existing frameworks. So does it make any sense at all to write my own PHP framework when there are so many possibilities to choose from?


  • Downgrading the PHP version

    - by aadiahg
    I recently upgraded my PHP from 5.3.5-1ubuntu7.11 to 5.3.18-1~dotdeb.0, and I've had a lot of problems since the upgrade:

    - localhost/phpmyadmin displays a blank screen.
    - apache2 shows me a warning message: [Sun Nov 04 12:11:21 2012] [warn] The Alias directive in /etc/apache2/conf.d/phpmyadmin.conf at line 3 will probably never match because it overlaps an earlier Alias.
    - Most CMSes can't be installed and show me this error: Required MySQL version for CMS is 5.x but this server has: mysqlnd 5.0.8-dev
    - In the scripts that were already installed, some functions no longer work.

    I've googled a lot to fix these problems, and I've also googled about downgrading PHP from 5.3.18 to 5.3.x, but nothing has worked for me. Can you help, please? Many thanks.
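
    For the record, the usual downgrade path, sketched here, is to remove the dotdeb entries from your APT sources and then pin Ubuntu's own php5 packages with a priority above 1000, which is what tells apt a downgrade is allowed. (Wildcards in preferences need a reasonably recent apt; otherwise list the php5 packages explicitly. Check what apt sees with apt-cache policy php5.)

        # After deleting the dotdeb lines from /etc/apt/sources.list(.d/):
        sudo tee /etc/apt/preferences.d/php5-downgrade <<'EOF'
        Package: php5*
        Pin: release o=Ubuntu
        Pin-Priority: 1001
        EOF

        sudo apt-get update
        sudo apt-get install php5 php5-mysql libapache2-mod-php5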

