Search Results



  • nautilus crash when merging/overwriting files

    - by sBlatt
    On my Ubuntu 10.10, whenever I want to copy some files/folders over some other files/folders, or when I try to empty the trash, nautilus crashes! Example: I have a folder with some files. Now I want to overwrite this folder with a folder with the same name and same files, but some additional files. The merge window comes up, I choose merge, and nautilus crashes (it does not respond; when I press the close button I can force close it). Sometimes it even does the copying/emptying (trash), but it always crashes! This happens when copying to the same partition/NTFS partition/netshares, but not when I make a new folder and copy the files/folders into that (without overwriting anything). On a netshare, it's even possible to merge these files afterwards with another computer! dmesg/syslog/messages does not show any entry related to this problem. Does anyone have a solution for this very annoying problem?
    EDIT: dpkg -l nautilus* (see output in pastebin)
    EDIT2: I found out nautilus already crashes before clicking replace/merge (as soon as the question appears). In the video it's not entirely clear that I click the cross before the force-close dialog appears. Video of problem, nautilus-debug-log.txt
    EDIT3: Filed bug report: https://bugs.launchpad.net/ubuntu/+source/nautilus/+bug/678233

    Read the article

  • At what point would you drop some of your principles of software development for the sake of more money?

    - by MeshMan
    I'd like to throw this question out there to see where the middle ground is. I'll admit that in my last 12 months I picked up TDD and a lot of the Agile values in software development. I was so overwhelmed with how much better my development of software became that I thought I would never drop them out of principle. Until... I was offered a contracting role that doubled my take-home pay for the year. The company I joined didn't follow any specific methodology, the team hadn't heard of anything like code smells, SOLID, etc., and I certainly wasn't going to get away with spending time doing TDD if the team had never even seen unit testing in practice. Am I a sell-out? No, not completely. Code will always be written "cleanly" (as per Uncle Bob's teachings) and the principles of SOLID will always be applied to the code that I write as they are needed. Testing was dropped for me, though; the company couldn't afford to have such an unknown handed to a team who, quite frankly, even if I did create test frameworks, would never use or maintain them correctly. Using that as an example, at what point would you say a developer should never drop his craftsmanship principles for the sake of money or other personal benefits? I understand that this can be a very personal opinion depending on how concerned one is with their own needs, business needs, the sake of craftsmanship, etc. But one could consider that, for example, testing can be dropped if the company decided they would rather have a test team than understand unit testing in programming; would that be something you could forgive yourself for, like I did? So given that there is something you would drop, there usually should be an equal gain in the business that makes up for what you drop - hopefully, unless of course you are pretty much out to line your own pockets rather than collaborate ;). Double your money and go back to RAD? Or walk on, look for someone doing Agile, and never look back...

    Read the article

  • How to deal with a CEO who makes all technical decisions but has little technical knowledge?

    - by anonymous
    Hi, question posted anonymously for obvious reasons. I am working in a company with a dev group of 5-6 developers, and I am in a situation which I have a hard time dealing with. Every technical choice (language, framework, database, database schema, configuration scheme, etc.) is decided by the CEO, often without much rationale. It is very hard to modify those choices, and his main argument consists of "I don't like this", even though we propose several alternatives with detailed pros/cons. He will also decide to rewrite our core product from scratch without giving a reason why, and he never participates in dev meetings because he considers that it makes things slower... I am already looking at alternative job opportunities, but I was wondering if there is anything we (the developers) could do to improve the situation. Two examples which shocked me:
    He will ask us to implement something akin to configuration management, but he rejects any existing framework because it is not written in the language he likes (even though the implementation language is irrelevant). He also expects us to be able to write those systems in a couple of days, "because it is very simple".
    He keeps rewriting our core product from scratch on his own because the current codebase is too bad (a codebase whose design was his). We are on our third rewrite in one year, each rewrite worse than the previous one.
    Things I have tried so far: doing elaborate benchmarks on our product (he keeps complaining that our software is too slow, and justifies rewrites as making it faster), implementing solutions with existing products as working proofs instead of just making pros/cons charts, etc. But still 90% of those efforts go in the trash (never with any kind of rationale beyond him not liking it, again), and I often get reprimanded because I don't do exactly as he wants (not realizing that what he wants is impossible).

    Read the article

  • Web migration of a VB6 system with VWG

    - by Webgui
    Brinks Bolivia's eSAC (Customer Service) system allows registering all different kinds of contacts for a customer, in addition to maintaining an updated status of each service or customer request, so the company has accurate information and performs the appropriate procedures for all applications. The system was originally developed in VB6, and since web access was essential it was offered via Citrix. Since the application's performance was a critical issue, as well as the need to offer the system without specific installations, the company looked for a solution that would remove those drawbacks of using Citrix. Searching for a solution that would allow it to offer the eSAC system over the web without specific client installations, and provide sufficient performance even when bandwidth is limited, led Brinks to a decision to migrate their VB6 Customer Service system to Visual WebGui. "Developing on Visual WebGui we were able to migrate the system to the web environment and even add new features in less time, which allows us to offer it over a standard web browser with better performance and no installations as was required with Citrix," concluded Alexander Cuellar. The full article and screenshots of the system are available here.

    Read the article

  • The MSc gray zone: how to deal with the "too inexperienced for engineering / too under-qualified for research" situation?

    - by Hunter2
    Last year I got an MSc degree in CS. At the beginning of the MSc course, I was keen on moving on with research and going for a PhD. However, as the months passed, I started to feel the urge to write software that people would, well, actually use. The programming bug had bitten me, again. So I decided that before deciding on getting a PhD degree, I would spend some time in the "real world", working as a software developer. Sadly, most companies here in Brazil are "services" companies that seem to be stuck in the 80s when it comes to software development. I have to fend off pushy managers, less-than-competent coworkers and outrageous software requirements (why does everyone seem to need a 50k Oracle license and a behemoth WebSphere AS for their CRUD applications?) on a daily basis, and even though I still love software development, the situation is starting to touch a nerve. And, mind you, I'm already lucky for getting a job at a place that isn't a plain software sweatshop. Sure, there are better places around here, or I could always try my luck abroad, but then I hit the proverbial brick wall: "Sorry, you're too inexperienced as a developer and too under-qualified as a researcher." I've already heard this, and variations of it, multiple times. Research position recruiters look for die-hard, publication-laden, rockstar PhDs, while development position recruiters look for die-hard, experience-laden, rockstar programmers. To most, my MSc degree seems like a minor bump on my CV (and an outright waste of time to some). Applying for positions abroad is even harder, since the employer would have to deal with the hassle of a visa process, which I understand is sometimes too much. Now I feel I've reached a dead end. I'm certain that development (and not research) is my thing, so should I just dismiss my MSc (or play it as a "trump card") and take the "big fish in a small pond" role while I gather some experience and contribute to some open-source projects as a plus? Is there a better way to handle this?

    Read the article

  • Why is purchasing Microsoft licences such a daunting task? [closed]

    - by John Nevermore
    I've spent 2 frustrating days jumping through hoops and browsing through different local e-shops for VS (Visual Studio) 2010 Pro and WHS (Windows Home Server) FPP 2011 licenses. I found jack; or, to be more precise, the closest I found in my country was WHS 2011 OEM licenses, after multiple emails sent to individuals found on the Microsoft partners page. The question being: why is it so difficult to get your hands on Microsoft licenses as an individual? Sure, you can get the latest end-user operating systems from most shops, but when it comes to development tools or server software you are left dry. And companies that do sell licenses most of the time don't even put up pricing or a self-service environment for buying the licenses; you need to have a hawk's eye for that shiny little Microsoft partner logo and spam through a bunch of emails, not knowing if you can count on them to get the license or not. Sure, I could whip out my credit card and buy the VS 2010 license on the online Microsoft Store. Well whippideegoddamndoo, they sell that, but they don't sell WHS 2011 licenses. Why does a company make it so hard to buy their products? Let's not even talk about the licensing itself being a pain.

    Read the article

  • How to debug suspend?

    - by mlissner
    I've been using Ubuntu for about five years now, and I still can't make it suspend when I want to. It's quite irritating that I can program up a storm and hack the machine in numerous other ways, and yet when I try to make it suspend, or debug suspend, I fail miserably. I need help. Where do I begin to find the problem? What do I do to fix it? I'm placing a bounty on this, because I've literally lost hours of my life to this problem, and leaving my computer on ALL the time is terrible. The symptoms: pressing suspend brings my computer to a state where it has a blinking cursor, the fans are running, it seems that the HD has turned off (I think), and I can't do anything to bring it back from this state (short of a hard reboot). Possibly related: my fans stay on even after a shutdown, and even then I have to press the power button for five seconds before I can start it up again. I don't know what logs to look at to debug the problem, and I imagine they'd get nuked on reboot anyway. Please, please help. This drives me completely nuts, and I've been living with it for over a year.

    Read the article

  • Leveraging Code Across Platforms in Ever Bigger Games

    - by ashes999
    Summary: in the same way that I continually build complex engines and libraries within a single platform and technology so that I can build increasingly bigger and better games, how do I continue this when development crosses into different platforms? If I switch platforms, how do I leverage past code and experience? Games are hard to build. Big games are even harder to build. I've decided that to be able to make big games, I need to start building smaller games and building up an asset base of code, assets (graphics, sounds), tools, and most importantly, game engines, so that I can eventually get there. One game at a time. Let me give an analogy. To build an MMO 3D RPG, I would approach this by building and releasing small games with increasingly more features. This could entail, for example:
    A simple 2D game
    A tile-based game
    A game with RPG elements (items, equipment, monsters, battle)
    A full-fledged RPG
    A 3D RPG
    The problem now is that if I have to change platforms or tools, I don't know how to leverage past code bases (and experience) to start with a mature product. Right now, I'm writing Silverlight (FlatRedBall) games. Let's say I stick with this for ten years, and then suddenly decide to write a PS6 game, which is in a different programming language entirely. Granted, I would have ten years of game-development experience (and correspondingly ten years of professional software development experience from my day job) to back me up. But I would still like some way to transplant that 2D RPG engine into the new programming language, or else leverage it somehow. Is this even possible? What are my options?

    Read the article

  • Empty interface to combine multiple interfaces

    - by user1109519
    Suppose you have two interfaces:
        interface Readable { public void read(); }
        interface Writable { public void write(); }
    In some cases the implementing objects can only support one of these, but in a lot of cases the implementations will support both interfaces. The people who use the interfaces will have to do something like:
        // can't write to it without explicit casting
        Readable myObject = new MyObject();
        // can't read from it without explicit casting
        Writable myObject = new MyObject();
        // tight coupling to actual implementation
        MyObject myObject = new MyObject();
    None of these options is terribly convenient, even more so when considering that you want this as a method parameter. One solution would be to declare a wrapping interface:
        interface TheWholeShabam extends Readable, Writable {}
    But this has one specific problem: all implementations that support both Readable and Writable have to implement TheWholeShabam if they want to be compatible with people using the interface, even though it offers nothing apart from the guaranteed presence of both interfaces. Is there a clean solution to this problem, or should I go for the wrapper interface?
    UPDATE: It is in fact often necessary to have an object that is both readable and writable, so simply separating the concerns in the arguments is not always a clean solution.
    UPDATE2: (extracted as an answer so it's easier to comment on)
    UPDATE3: Please be aware that the primary use case for this is not streams (although they too must be supported). Streams make a very specific distinction between input and output and there is a clear separation of responsibilities. Rather, think of something like a byte buffer where you need one object you can write to and read from, one object that has a very specific state attached to it. These objects exist because they are very useful for some things like asynchronous I/O, encodings, ...

    Read the article

  • Which style of member access is preferable?

    - by itwasntpete
    The purpose of OOP using classes is to encapsulate members from the outside. I always read that accessing members should be done by methods. For example:
        template<typename T>
        class foo_1 {
            T state_;
        public:
            // following below
        };
    The most common approach taught by my professor was to have a get and a set method:
        // variant 1
        T const& getState() { return state_; }
        void setState(T const& v) { state_ = v; }
    or like this:
        // variant 2
        // in my opinion it is easier to read
        T const& state() { return state_; }
        void state(T const& v) { state_ = v; }
    Assume that state_ is a variable which is checked periodically and there is no need to ensure that the value (state) is consistent. Is there any disadvantage to accessing the state by reference? For example:
        // variant 3
        // do it by reference
        T& state() { return state_; }
    or even directly, if I declare the variable as public:
        template<typename T>
        class foo {
        public:
            // variant 4
            T state;
        };
    In variant 4 I could even ensure consistency by using a C++11 atomic. So my question is: which one should I prefer? Is there any coding standard which would reject one of these patterns? For some code see here.

    Read the article

  • 12.04 disabling wireless via dbus does not work

    - by FlabbergastedPickle
    I am using a proprietary rt3652sta driver for my wireless card. It appears as a ra0 device on 64-bit Ubuntu 12.04. According to the online documentation, the following used to work up to 10.04:
        dbus-send --system --type=method_call --dest=org.freedesktop.NetworkManager /org/freedesktop/NetworkManager org.freedesktop.DBus.Properties.Set string:org.freedesktop.NetworkManager string:WirelessEnabled variant:boolean:false
    This, however, has no effect on the aforesaid wireless card in 12.04. Also, rfkill does not work, as it does not even list the wireless button (again, likely because the wireless driver is proprietary):
        rfkill list
    It only lists the hci0 (Bluetooth) one, and one can block/unblock it accordingly, but this has no effect on the wifi. ifup/ifdown also does not work (AFAICT)... And this leaves me with disabling wireless through the network manager applet. However, trying to do so via dbus appears not to work, and yet I would like to automate it via a script. Any ideas how I could find out the proper dbus structure for the call? Is this even possible in Ubuntu 12.04?
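    For scripting, the same property write can also be attempted from Python via dbus-python instead of shelling out to dbus-send. The following is only a minimal sketch of that call, assuming the python-dbus package is installed and that NetworkManager in 12.04 still exposes WirelessEnabled on org.freedesktop.NetworkManager; it mirrors the dbus-send invocation above and is not guaranteed to behave any differently with the proprietary driver:
        #!/usr/bin/env python
        # Sketch: toggle NetworkManager's WirelessEnabled property over D-Bus.
        # Assumes python-dbus is installed; mirrors the dbus-send command above.
        import dbus

        bus = dbus.SystemBus()
        nm = bus.get_object('org.freedesktop.NetworkManager',
                            '/org/freedesktop/NetworkManager')
        props = dbus.Interface(nm, 'org.freedesktop.DBus.Properties')

        # Read the current value first, then try to disable wireless.
        enabled = props.Get('org.freedesktop.NetworkManager', 'WirelessEnabled')
        print('WirelessEnabled is currently: %s' % bool(enabled))
        props.Set('org.freedesktop.NetworkManager', 'WirelessEnabled', False)
    If the Set call goes through but still does nothing, that would at least suggest the problem lies in the driver/NetworkManager layer rather than in the dbus-send syntax.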

    Read the article

  • Printer share keeps asking for password and I can't authenticate from any machine. Why?

    - by tenshimsm
    Ubuntu 12.04 printer share keeps asking for a password and I can't authenticate from any machine. Why? We've installed it on two machines (to act as printer servers) and we get the same problem. It doesn't matter what we do, change or install. We can't figure out why the printer share asks for a password, even using all of the users that are registered on the server. What is wrong with Precise? I want it to work without a password, but it is not even working WITH one! I gave up! The samba version that comes with Precise is insufferable! I tried various settings that didn't work. I should've used Mint from the beginning.
    [Edit] My printer config. Remember that samba is 3.6.3 in Ubuntu 12.04.
        load printers = yes
        [printers]
        comment = All Printers
        browseable = yes
        path = /var/spool/samba
        printable = yes
        guest ok = yes
        readonly = yes
        create mask = 0700
        [print$]
        comment = Printer Drivers
        path = /var/lib/samba/printers
        browseable = yes
        readonly = yes
        guest ok = yes

    Read the article

  • How can I play .bin file with VLC?

    - by freebird
    For two days I have tried to get VLC to play this .bin file. It played fine in Windows XP using VLC, so I decided to install the same VLC that's on my XP partition, and through Wine it plays the file with no problem. Why won't it play natively?
        fr33bird@fr33bird-desktop:~/Downloads/********/*********$ vlc ********.bin
        VLC media player 1.1.10 The Luggage (revision exported)
        Blocked: call to unsetenv("DBUS_ACTIVATION_ADDRESS")
        Blocked: call to unsetenv("DBUS_ACTIVATION_BUS_TYPE")
        [0x8387914] main libvlc: Running vlc with the default interface. Use 'cvlc' to use vlc without interface.
        Blocked: call to setlocale(6, "")
        Warning: call to srand(1309627174)
        Warning: call to rand()
        Blocked: call to setlocale(6, "")
        (process:14474): Gtk-WARNING **: Locale not supported by C library. Using the fallback 'C' locale.
        Blocked: call to setlocale(6, "")
        libdvdnav: Using dvdnav version 4.1.3
        libdvdread: Using libdvdcss version 1.2.10 for DVD access
        libdvdread: Can't stat /home/fr33bird/Downloads/*******/*******/*******.bin No such file or directory
        libdvdnav: vm: failed to open/read the DVD
        [0x87034e4] filesystem access error: cannot open file /home/fr33bird/Downloads/*****/*****/*******.bin (No such file or directory)
        [0x843535c] main input error: open of `file:///home/fr33bird/Downloads/******/****/********.bin' failed: (null)
    Funny that it says "no such file" even though it's attempting to read it. Xine says that the .bin is encrypted, so I installed libdvdcss2, w32codecs and ubuntu-restricted-codecs from Medibuntu, but it's still the same issue. How can I fix this? I want to run this natively, not through Wine. I even tried to change .bin to .mpg and .avi.

    Read the article

  • Designing business objects, and gui actions

    - by fozz
    Developing a product ordering system using Java SE 6. The previous implementations used combo boxes, text fields, and check boxes, performing validation on action events from the GUI. The validation includes limiting existing combo boxes' items, or even their availability. The issue in the old system was that the action was received and all rules were applied to the entire business object. This resulted in a huge event chain as options were changed multiple times. To be honest I have no idea how an infinite loop wasn't produced. In the next iteration I stepped in and attempted to limit the chaos by controlling the order in which the selections could be made, making configuration of BOs a top-down approach. I implemented custom box models, action events, beans/binding, and an MVC pattern. However, I still am unable to fully isolate action event chains. I'm thinking that I've approached the whole concept backwards in an attempt to stay closest to what was already in place. So the question becomes: what do I design instead? I'm currently considering an implementation of interfaces, beans, and property change listeners to manage the back and forth. Other thoughts were validation exceptions, dynamic proxies... I'm sure there are a ton of different ways. To say that one way is right is crazy, and I'm sure it will take a blending of multiple patterns. My knowledge of Swing/AWT validation is limited; previously I did backend logic only. Another consideration is some sort of binding (JGoodies or otherwise) to directly bind GUI state to BOs.

    Read the article

  • Making Modular, Reusable and Loosely Coupled MVC Components

    - by Dusan
    I am building an MVC3 application and need some general guidelines on how to manage complex client-side interaction between my components. Here is my definition of a component, in a general way: a component has its own controller, model and view. All of the component's logic is placed inside these three parts, and the component is sort of "standalone": it contains its own form, the data needed for interaction, updates itself with Ajax, and so on. Beside this internal logic and behavior, the component needs to be able to "talk" to the outside world. By this I mean it should provide data and events (sort of), so that when this component gets embedded in pages it can notify other components, which can then update based on the current state and data. I have an idea to use a client ViewModel (in JavaScript) which would hook up all relevant components on a page and control the interaction between them. This would make components reusable and modular, independent of the context in which they are used. How would you do this? I am a bit stuck, as I do not know if this is a good approach and whether it is technically possible to achieve it using JavaScript/jQuery. The confusing part is the update via Ajax: how to ensure that a component is properly linked to the ViewModel when the component is Ajax-updated (or, even worse, removed or dynamically added). Also, how should this ViewModel be constructed, and which techniques should be used here and in the components so they work in synergy? On the web, I have found various examples of a similar approach, but they are either oversimplified (even for dummies) or overly specific and do not provide a valuable resource or general solution for this kind of implementation. If you have some serious examples it would also be very helpful. Note: my aim is to make interactions between many components on the same page simpler, more robust and more elegant.

    Read the article

  • XNA: after mouse click, CPU usage goes to 100%

    - by kosnkov
    Hi, I have the following code, and it is enough to just click on the blue window for the CPU to go to 100% for at least one minute, even with my 4-core i7. I checked with an empty project as well and it is the same!
        public class Game1 : Microsoft.Xna.Framework.Game
        {
            GraphicsDeviceManager graphics;
            SpriteBatch spriteBatch;
            private Texture2D cursorTex;
            private Vector2 cursorPos;
            GraphicsDevice device;
            float xPosition;
            float yPosition;

            public Game1()
            {
                graphics = new GraphicsDeviceManager(this);
                Content.RootDirectory = "Content";
            }

            protected override void Initialize()
            {
                Viewport vp = GraphicsDevice.Viewport;
                xPosition = vp.X + (vp.Width / 2);
                yPosition = vp.Y + (vp.Height / 2);
                device = graphics.GraphicsDevice;
                base.Initialize();
            }

            protected override void LoadContent()
            {
                spriteBatch = new SpriteBatch(GraphicsDevice);
                cursorTex = Content.Load<Texture2D>("strzalka");
            }

            protected override void UnloadContent()
            {
                // TODO: Unload any non ContentManager content here
            }

            protected override void Update(GameTime gameTime)
            {
                // Allows the game to exit
                if (GamePad.GetState(PlayerIndex.One).Buttons.Back == ButtonState.Pressed)
                    this.Exit();
                base.Update(gameTime);
            }

            protected override void Draw(GameTime gameTime)
            {
                GraphicsDevice.Clear(Color.CornflowerBlue);
                spriteBatch.Begin();
                spriteBatch.Draw(cursorTex, cursorPos, Color.White);
                spriteBatch.End();
                base.Draw(gameTime);
            }
        }

    Read the article

  • OnTrigger not firing consistently

    - by Lautaro
    I have a Prefab called Player which has a Body and a Sword. The game uses 2 instances of Player: Player1 and Player2. I use Player1 to strike Player2. This is the code on the sword. My hope is that the Sword of Player1 will log on contact with the Body of Player2. It happens, but only on the first hit, and then I have to hit several times before another strike is logged. But when I look at the log from OnTriggerStay, it looks like the TriggerExit is never detected until long after the sword is gone.
        void OnTriggerEnter(Collider other)
        {
            // Play sound to confirm collision
            var sm = ObjectDirectory.soundManager;
            sm.PlaySoundClip(sm.gui_02);
            Debug.Log(other.name + " - ENTER");
        }

        void OnTriggerStay(Collider other)
        {
            Debug.Log(other.name + " - collision");
        }

        void OnTriggerExit(Collider other)
        {
            Debug.Log(other.name + " - HAS LEFT");
        }
    DEBUG LOG:
        Player2 - ENTER
        UnityEngine.Debug:Log(Object)
        SwordControl:OnTriggerEnter(Collider) (at Assets/Scripts/SwordControl.cs:28)
        Player2 - collision
        UnityEngine.Debug:Log(Object)
        SwordControl:OnTriggerStay(Collider) (at Assets/Scripts/SwordControl.cs:34)
    (The last debug log was then repeated hundreds of times, long after the sword of Player1 had withdrawn and was in no contact with Player2.)
    EDIT: Further tests show that if I move Player1 backwards, away from Player2, I trigger the OnTriggerExit, even if the sword has not been touching Player2 since the blow. However, even after OnTriggerExit it takes many tries until I can get another blow registered.

    Read the article

  • Using 12.04 installation as a persistent pen drive

    - by Cawas
    Disclaimer: I aim to build a self-contained pen drive with my application inside, so never mind updates. Maybe I'm looking at the wrong Linux distribution to do this... please let me know if you think so. I've tried Knoppix and even Lubuntu, but they don't come with enough "drivers" for Unity3D to work. Creating a custom live persistent pen drive is a real pain; I've been trying for a day without any success. Sure, being able to do it would probably be ideal and occupy the minimum space. Using the installation image on a pen drive, however, is good enough and is really easy to create. We can even do it from any OS, using UNetbootin, LiLi USB Creator or some other methods. Straightforward. Some recommend installing Ubuntu on a pen drive, but that requires a lot of space and, I believe, it won't behave as well as something meant to be installed on a USB disk, because of memory management. So, there are only a few negative points to using the installation image that I can think of. The question here is how to remove those drawbacks:
    Having to press "Try Ubuntu". That's the big one; I couldn't find how.
    Unable to load everything into memory and keep running without the pen drive (like this).
    Unable to remove the "Install Ubuntu 12.04 LTS" app.
    Setting the ISO to use the maximum amount of space for the OS leaves the pen drive with zero space left, and any file saved to it from Ubuntu is inaccessible from the outside (when plugging the pen drive in and not booting from it).
    Am I missing something? Can those points be fixed?

    Read the article

  • PHP frameworks - should I use them?

    - by 30secondstosam
    First of all let me explain who I am: I am a PHP Developer working for a company developing their CMS which handles online stores, data feeds and other content like blogs. I have been programming for 6 years and 4 of those have been in PHP. (I used to be a C# developer). My question is regarding PHP frameworks. I see so many jobs asking for zend, cakephp and other MVC types of frameworks. However, even as an experienced developer I have never used a PHP framework. Should I start learning? If so where do I even start as there are so many. I am about to re-write so much of my work's CMS and I'm wondering whether I could help myself a lot by using a framework. What do people think? Considering I will be re-writing a lot of the online store stuff... I am experienced in OO and I have used the .NET framework in the past. Thanks in advance for the replies.

    Read the article

  • 12.04 update crash, now cannot boot

    - by EdW
    Boot now freezes the system at the "Ubuntu purple screen" with the 5 dots under it. I cannot even Ctrl-Alt-F1 to get to an alternate console; I have to push the reset button on the computer to reboot. After the 12.04 update crash, the system would boot, but I had no icons and no launch bar, and there were errors about missing dependencies. I used Ctrl-Alt-F1 and ran apt-get -f install, and it appeared to get the missing packages. I followed with apt-get update and apt-get upgrade, and it appeared to install all of the remaining 12.04 update packages. Rebooted to the same blank screen - no icons, but got a compiz error. In the console I tried "unity --reset" and rebooted... nothing... it's dead in the water. Tried to boot into Recovery Mode - black screen - and I cannot even Ctrl-Alt-Del; I have to push the reboot button. Fortunately, I have Mint on another partition and I can access the broken 12.04 partition. Ideas? I can back up all of my folders using Mint, but I would hate to have to do a new install.

    Read the article

  • Dropbox Doubles Referral Credit; Score 500MB for Each Friend You Refer

    - by Jason Fitzpatrick
    Dropbox is doubling the amount of free storage you get per-referral to 500MB, doubling the previous 250MB credit–better yet, the bonus is retroactive and applies to referrals you’ve already made. From the DropBox blog: How much space is that, exactly? For every friend you invite that installs Dropbox, you’ll both get 500 MB of free space. If you’ve got a free account, you can invite up to 32 people for a whopping total of 16 GB of extra space. Pro accounts now earn 1 GB per referral, for a total of 32 GB of extra space. Have you already invited a bunch of people? Don’t worry. Within a few days, you’ll get full credit for every referral that’s already been completed. Boom! Hit up the link below for the full announcement. Dropbox Referrals Now Twice As Nice [Dropbox] How to Sync Your Media Across Your Entire House with XBMC How to Own Your Own Website (Even If You Can’t Build One) Pt 2 How to Own Your Own Website (Even If You Can’t Build One) Pt 1

    Read the article

  • External monitor problems with Asus UX32VD in 12.04

    - by rsilva
    I've posted this video where I show one of the problems I'm facing with my UX32VD in regard to external monitors. As you can see, as soon as I plug in the external monitor, the laptop screen goes black and the external monitor displays a single color (the color changes between red, green and blue). However, this does not always happen. For instance, if I reboot the laptop with the external monitor already plugged in, I get to use it, but not the laptop screen; in the Displays settings the laptop screen is not even listed. Other times it just works: the two screens, without any problems. I tried with two other external monitors and all these apparently random issues repeat. I even tried using an HDMI cable instead: same random behavior. This question seems to cover one of these issues, however the answer did not help me. I was forced to use the 3.5.2 kernel in order to use the Intel graphics card, otherwise the Gallium something was used. Also, I have Bumblebee installed.

    Read the article

  • What stops HTML5 and JS apps to perform as good as native apps?

    - by Amogh Talpallikar
    From what I understand, HTML is a markup language; so is the content of XAML, XIB and whatever Android uses, and of other native UI development frameworks. JavaScript is a programming language used along with it to handle client-side scripting, which includes things like event handling, client-side validation and anything else that C#, Java, Objective-C or C++ do in those frameworks. There are MVC/MVVM patterns available in frameworks like Sencha's, Angular, etc. We have localStorage, in the form of both SQLite and a key-value store, just as other frameworks have, and there are API specifications for almost everything that is missing. Whenever a native UI framework has to render UI, it has to parse similar markup and render the UI. Question breakdown: What stops HTML (along with CSS) and JS from doing the same thing? Instead of having a web control or browser as a layer in between, why can't they be made to perform the same way? Even if there is a layer, so are the .NET runtime and the JVM in other cases where C++ and C are not being used. So let's take the case of Android: like Dalvik, why can't Chromium be another option (along with Dalvik and the NDK), where HTML does what Android markup does and JavaScript is used to do what Java does? So the question is: even if current implementations aren't as good, is it theoretically possible to get HTML5-based applications to perform as well as native apps, especially on mobile?

    Read the article

  • Webserver insists on opening "blog1.php" instead of "index.php"

    - by pepoluan
    I'm at my wits' end. I have just ripped out a website and am in the process of rebuilding everything. Previously, the 'home page' of the website was a blog, at the address "www.mydomain.com/blog1.php". After exporting everything, I deleted the whole directory and, based on a request, immediately created a blog/ directory. The idea is to get the blog back up as soon as possible, and temporarily redirect people accessing www.mydomain.com to the blog. Accessing the blog via http://www.mydomain.com/blog/ works. So I put in an index.php file containing a (temporary) redirect to the blog's address. The problem: the server insists on opening blog1.php instead of index.php, even after we deleted all the files (including .htaccess), and even putting in a new .htaccess file with the single line DirectoryIndex index.php doesn't work. The server stubbornly wants blog1.php. Now, the server is actually shared webhosting, so I have no direct access to it; I have to do my work via cPanel. Currently, I work around this issue by creating blog1.php, but I really want to know why the server does not revert to opening index.php. Did I perhaps miss some important setting in the byzantine cPanel menu pages?

    Read the article

  • How do I increase the open files limit for a non-root user?

    - by iCode
    This is happening on Ubuntu Release 12.04 (precise) 64-bit Kernel Linux 3.2.0-25-virtual I'm trying to increase the number of open files allowed for a user. This is for an my ecplise java application where the current limit of 1024 is not enough. According to the posts I've found so far, I should be able to put lines into /etc/security/limits.conf like this; soft nofile 4096 hard nofile 4096 to increase the number of open files allowed for all users. But, that's not working for me, and I think the problem is not related to that file. For all users, the default limit is 1024, regardless of what is in /etc/security/limits.conf (I have been rebooting after changing that file) $ ulimit -n 1024 Now, despite the entries in /etc/security/limits.conf I can't increase that; $ ulimit -n 2048 -bash: ulimit: open files: cannot modify limit: Operation not permitted The weird part is that I can change the limit downwards, but can't change it upwards - even to go back to a number which is below the original limit; $ ulimit -n 800 $ ulimit -n 800 $ ulimit -n 900 -bash: ulimit: open files: cannot modify limit: Operation not permitted As root, I can change that limit to whatever I want, up or down. It doesn't even seem to care about the supposedly system-wide limit in /proc/sys/fs/file-max # cat /proc/sys/fs/file-max 188897 # ulimit -n 188898 # ulimit -n 188898 So far, I haven't found any way to increase the open files limit for a non-root user, and I really don't want to be running my application as root. How should I properly do this? I have looked at all the posted and tried the given options but no luck!

    Read the article
