Search Results

Search found 5749 results on 230 pages for 'miles away'.

  • iCloud backup merges or overwrites?

    - by Joe McMahon
    The following happened today (it was six AM my time, so yeah, I was dumb and dropped stitches in this process): A friend had a problem with her iPhone and needed to reset it. Unfortunately she did the reset while connected to iTunes and the restore process kicked in. In my sleepy state, I told her to go ahead. She did, and restored the most recent local (iTunes) backup (from July last year - she doesn't back up often, as she has an Air which is pretty full). During setup on the phone, she was prompted to merge data with the iCloud copy, and did so. There was no "restore from iCloud" prompt. Obviously I should have made sure she was disconnected from iTunes before she did the reset, or had her set it up as a new device and then restored from iCloud, but water under the bridge now. (Side question: could I have had her disconnect and then restart the phone again and avoid this whole process?) The question is: was the "merge" that happened in this process a true merge, or a replace? Her passwords for Mail were wrong, since they were the old ones from the old backup. If she does the wipe data and restore from iCloud, will she get her old SMSes and calendar entries back? Or did the merge decide that the phone, despite it being "old" was right and therefore the SMSes, calendar entries, etc. were discarded? As a recovery option, I have a 4-day-old iTunes backup here from ~/Library/Application Support/MobileSync/Backup, but she and the phone are 3000 miles away, and it's 8GB, so I can't easily restore it for her. I do have the option of encrypting it and mailing it on a data stick if the iCloud backup is now toast. Should she try the wipe and restore from the cloud (after backing up locally), or should I just get the more-recent backup in the mail? My goal is to get everything (especially the SMSes) back to the most recent version possible.

    Read the article

  • Rendering only a part of the screen in high detail

    - by Bart van Heukelom
    If graphics are rendered for a large viewing angle (e.g. a very large TV or a VR headset), the viewer can't actually focus on the entire image, just a part of it. (Actually, this is the case for regular sized screens as well.) Combined with a way to track the viewer's eyes, you could theoretically exploit this and render the graphics away from the viewer's focus with progressively less details and resolution, gaining performance, without losing perceived quality. Are there any techniques for this available or under development today?

    Read the article

  • What can I do to encourage teams to lighten up? [closed]

    - by Rahul
    I work with a geographically distributed team (different time zones) with people from various cultures and backgrounds. Some of us have never met each other in person, but we communicate with each other over phone, chat and email almost on an hourly basis. Most of our meetings and discussions are dead serious and boring. What's worse, any attempt at humor is not very well received because of cultural differences. I feel that we are all taking our work a bit too seriously. We don't shy away from painful arguments, nasty emails and heated discussions when things go wrong, but we never attempt to develop camaraderie or friendships in better times. I would like to know your experiences with such situations and what, if anything, you did to lighten things up at the workplace.

    Read the article

  • Global Day of Coderetreat

    - by Tori Wieldt
    From the coderetreat.org website: Coderetreat is a day-long, intensive practice event, focusing on the fundamentals of software development and design. By providing developers the opportunity to take part in focused practice away from the pressures of 'getting things done', the coderetreat format has proven itself to be a highly effective means of skill improvement. This year, the Global Day of Coderetreat is happening on December 8. It sounds cool and fun, and of course, Java Champions and Java developers around the world are involved. Here's a small sampling: Chennai, India; São Paulo, Brazil; Skopje, Macedonia; Kraków, Poland. You can go to http://globalday.coderetreat.org/ to look up events near you. It's a great opportunity to practice your craft. A video from last year's event gives a good flavor of the format.

    Read the article

  • Faster, secure protocol/code required for long-distance transfer

    - by Chopper3
    I've run into a problem and I'm looking for a new secure protocol/client/server that's faster over a 1Gb/s fibre link - let me tell you the story... I have a pair of redundant, diversely-routed, 1Gb/s links over a distance of around 250 miles or so (not dark fibre but a dedicated point-to-point link, not a mesh). At the 'client' end I have an HP DL380 G5 (2 x dual-core 2.66GHz Xeons, 4GB, Windows 2003EE 32-bit); at the 'server' end I have an HP BL460c G6 (2 x quad-core 2.53GHz Xeons, 48GB, Oracle Linux 5.3 64-bit). I need to transfer around 500 x 2GB files per week from the client to the server machine - but the transfer NEEDS to be secure. Using either iPerf or regular FTP I can get ~80MB/s of transfer pretty consistently, which is great. Using WinSCP or Windows SFTP I can't seem to get more than ~3-4MB/s; at this point the server's CPU is 3% busy while CPU0 of the client goes to ~30% utilised. We've tried editing various TCP window sizes with little success. Both ends are connected to quite low-usage Cisco Cat6509s with Sup720s. I can replace the client machine with a newer machine and/or move it to Linux - but this will take time. Clearly these single-threaded secure Windows clients are introducing too much latency doing their encryption. So a few questions/thoughts: Are there any higher-performing secure protocols or client software for Windows that I could try? I'm pretty protocol-agnostic as long as it'll work between Windows and Linux. Should I be using hardware to do the encryption, either in the client or in the network? If so, what would you recommend? I'm not convinced that just swapping the server would be that much faster; the CPU was only at 30%, but then again that's higher than I'd have expected given the load. Moving to Linux at the client end may be a better idea but would be quite disruptive. Am I missing a trick? Thanks in advance.
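
    One cheap thing to try before re-architecting anything is a lighter SSH cipher, since single-threaded SFTP/SCP throughput is often bound by the cipher and the SSH flow-control window rather than the link itself. A minimal sketch, assuming an OpenSSH-capable client (e.g. after moving the client end to Linux, or via a Windows OpenSSH/cwRsync port); the host and file names are placeholders:

        # single file over scp with a cheaper cipher
        scp -c aes128-ctr bigfile.dat linux-server:/data/incoming/

        # or a whole directory via rsync over the same ssh settings
        rsync -av --progress -e "ssh -c aes128-ctr" ./outgoing/ linux-server:/data/incoming/

    If throughput still tops out at a few MB/s with the CPU mostly idle, the SSH receive window is the more likely culprit, which points at something like HPN-SSH or a TLS-based transfer (e.g. FTPS) rather than faster hardware.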

    Read the article

  • How to configure Ubuntu for lightweight, low-memory usage?

    - by augustin
    I just upgraded an old, secondary computer to the latest Kubuntu (10.10). It seems the effort was a bit too much for the hardware, and one 512MB memory module died. I tried to take it out, clean the connectors and put it back several times, but to no avail. Until such time as I can find a second-hand DDR memory module, I am left with a meagre 256MB of RAM, which is below the official requirement (384MB) to run Kubuntu/KDE. Indeed, the computer constantly swaps, making everything painfully slow. Since Kubuntu is already installed and I use it on all my computers (and I want to keep KDE for when I really need it), how can I configure Ubuntu to trim away every bit of unnecessary memory usage? This is a secondary computer but still very useful; we use it mostly for web browsing. A "lightweight" tag is missing.
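
    Two low-effort changes that tend to help on a 256MB machine are making the kernel swap less eagerly and logging into a lighter session than full KDE. A rough sketch (package names are as found in the 10.10 repositories and may differ):

        # swap less aggressively; add "vm.swappiness=10" to /etc/sysctl.conf to persist it
        sudo sysctl -w vm.swappiness=10

        # install a lighter desktop alongside KDE and choose it at the login screen
        sudo apt-get install lxde

        # see what is actually eating memory before disabling anything
        ps aux --sort=-rss | head -n 15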

    Read the article

  • How can I change this isometric engine so that you can distinguish between blocks on different planes?

    - by l5p4ngl312
    I have been working on an isometric, Minecraft-esque game engine for a strategy game I plan on making. As you can see, it really needs some sort of shading. It is difficult to distinguish between separate elevations when the camera is facing away from the slope, because everything is the same shade. So my question is: can I shade just a specific section of a sprite? All of those blocks are just sprites, so if I shaded the entire image, it would shade the whole block. I am using LWJGL. Are there any other approaches to take? Here's a link to a screenshot from the engine: http://i44.tinypic.com/qxqlix.jpg

    Read the article

  • How to mount a NAS on a laptop?

    - by deckoff
    So, I bought a NAS, which I configured successfully in /etc/fstab on my Kubuntu 10.10 ThinkPad X40. It works just fine when I am at home. A few days ago I went out with my laptop, and the problem is that, when not at home, both suspend and hibernate seem to take forever. I commented out the entry in fstab and the laptop started to work as expected. I played with autofs, but it just seems to die at some point and then I cannot access anything: it works for some time, and then just goes off. Is there any consistent way to make my laptop access the drive when at home and work OK when away? Probably a script that runs at startup, checks if the mount is there and mounts it if available... or a script that unmounts the drive at suspend/hibernate and mounts it back at startup. Any useful ideas?
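
    One approach is to mark the share noauto,user in /etc/fstab and mount it from a small script that first checks whether the NAS answers, so suspend and hibernate never block on a server that isn't reachable. A sketch with hypothetical host and mount-point names:

        #!/bin/sh
        # mount the NAS only when it is reachable; run at login, from cron, or from /etc/network/if-up.d/
        NAS_HOST=nas.local
        MOUNTPOINT=/mnt/nas

        if ping -c1 -W2 "$NAS_HOST" >/dev/null 2>&1; then
            # at home: mount it if it isn't already mounted (uses the fstab entry)
            mountpoint -q "$MOUNTPOINT" || mount "$MOUNTPOINT"
        else
            # away: lazily unmount so a stale mount can't hang suspend/hibernate
            mountpoint -q "$MOUNTPOINT" && umount -l "$MOUNTPOINT"
        fi

    A matching pm-utils or systemd sleep hook could run the same unmount just before suspend, which covers the case where the laptop is put to sleep at home and woken up elsewhere.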

    Read the article

  • Cannot upgrade from Maverick

    - by Tideland
    When I attempt to upgrade, the Update Manager shows a warning message stating "Your Ubuntu release is no longer supported." Okay... I click "Close", but it never goes away - it locks up the Update Manager and it's game over. Next, I try to update the software sources, but I can't do that either, since it launches from System > Administration > Software Sources and, guess what, needs sudo... But like all good Linux software, I'm sure the program is buried in a folder somewhere, waiting for me to find it through the all-powerful terminal. I have already commented out all the Maverick sources by hand, but that didn't do the trick. Now what?
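
    Since Maverick is past end of life, the usual route is to repoint apt at the old-releases archive from a terminal and then run the release upgrader directly, bypassing the locked-up GUI. A sketch, assuming a stock sources.list:

        # point the dead archives at old-releases.ubuntu.com
        sudo sed -i -e 's|archive.ubuntu.com|old-releases.ubuntu.com|g' \
                    -e 's|security.ubuntu.com|old-releases.ubuntu.com|g' /etc/apt/sources.list

        sudo apt-get update
        sudo do-release-upgrade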

    Read the article

  • tty1 prompt before lightdm

    - by David Weldon
    After upgrading to 13.10, every time I boot I'm shown a login prompt (tty1) for ~30 seconds before lightdm automatically starts. Everything works fine after that. Any ideas on what I could try to fix/debug this? My /var/log/lightdm/x-0-greeter.log contains lines like the following:

        ** (at-spi2-registryd:1381): WARNING **: Failed to register client: GDBus.Error:org.freedesktop.DBus.Error.ServiceUnknown: The name org.gnome.SessionManager was not provided by any .service files
        ** (at-spi2-registryd:1381): WARNING **: Unable to register client with session manager
        WARNING: Failed to open sessions directory: Error opening directory '/usr/share/lightdm/sessions': No such file or directory
        ** Message: PID 1534 (we are 1534) sent signal 15, shutting down...
        ** (gnome-settings-daemon:1401): WARNING **: Name taken or bus went away - shutting down

    Searching for these errors results in a variety of bugs filed over the years. Maybe a clean install will fix this.

    Read the article

  • How to automatically mount a Windows shared folder on every boot up?

    - by Zabba
    I am able to access a Windows shared folder from Ubuntu 10.10 Nautilus like so: type into the location bar: smb://box/projects. Now I can see the folder in Nautilus and create/read files in it. Also, on the desktop I get a folder called "projects on box". But that folder on the desktop goes away when I reboot. So I thought I could automount the Windows shared Projects folder by adding this to my fstab (base is my user name on Ubuntu):

        //box/Projects /home/base/Projects smbfs rw,user,username=jack,password=www222,fmask=666,dmask=777 0 0

    Now I get a folder called "Projects" in my home folder after boot-up, but it is empty (I cannot see the same files that I can see in Nautilus). What am I doing wrong? Some more detail: this is what I see for the Projects folder when I do ls -l in my home folder:

        ...
        drwxr-xr-x 2 root root 4096 2011-01-01 10:22 Projects
        drwxr-xr-x 2 base base 4096 2011-01-01 09:06 Public
        ...

    Note that Projects is owned by root:root. Is that somehow the problem?
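
    For what it's worth, on 10.10 the old smbfs type is deprecated in favour of cifs, and a root-owned, empty mount point usually means either that the mount never actually happened at boot or that it was mounted as root without a uid mapping. A hedged sketch of a cifs line that mounts as the local user (the credentials file is an assumption, and some option names differ between smbfs and cifs):

        # /etc/fstab
        //box/Projects  /home/base/Projects  cifs  rw,user,uid=base,gid=base,credentials=/home/base/.smbcredentials,iocharset=utf8  0  0

        # /home/base/.smbcredentials (chmod 600)
        username=jack
        password=www222

    Running sudo mount /home/base/Projects by hand and reading any error it prints is the quickest way to see why the boot-time mount ends up empty.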

    Read the article

  • What are some non-MS languages that can write xlsx (Excel 2007+) documents efficiently?

    - by Honus Wagner
    Unfortunately, Excel format is required for the project I am working on. I have no problems getting the data I need into objects and arrays, and currently PHPExcel is handling the document generation. It works, but it's slow and loopy. I was wondering whether there is a more efficient server-side language for generating Excel documents (not CSVs). This is a pure Linux environment, so I need to stay away from .NET. I am open to any programming language that does it cleanly and efficiently. Thanks.

    Read the article

  • Handle all authentication logic in database or code?

    - by Snuffleupagus
    We're starting a new(ish) project at work that has been handed off to me. A lot of the database-side stuff has been fleshed out, including some stored procedures. One of the stored procedures, for example, handles creation of a new user. All of the data is validated in the stored procedure (for example, the password must be at least 8 characters long, must contain numbers, etc.), and other things, such as hashing the password, are done in the database as well. Is it normal/right for everything to be handled in the stored procedure instead of the application itself? It's nice that any application can use the stored procedure and get the same validation, but the application should have a standard framework/API function that solves the same problem. I also feel like it takes the data away from the application and is going to be harder to maintain and add new features to.

    Read the article

  • Implementing a Custom Coherence PartitionAssignmentStrategy

    - by jpurdy
    A recent A-Team engagement required the development of a custom PartitionAssignmentStrategy (PAS). By way of background, a PAS is an implementation of a Java interface that controls how a Coherence partitioned cache service assigns partitions (primary and backup copies) across the available set of storage-enabled members. While seemingly straightforward, this is actually a very difficult problem to solve. Traditionally, Coherence used a distributed algorithm spread across the cache servers (and as of Coherence 3.7, this is still the default implementation). With the introduction of the PAS interface, the model of operation was changed so that the logic would run solely in the cache service senior member. Obviously, this makes the development of a custom PAS vastly less complex, and in practice does not introduce a significant single point of failure/bottleneck. Note that Coherence ships with a default PAS implementation, but it is not used by default. Further, custom PAS implementations are uncommon (this engagement was the first custom implementation that we know of). The particular implementation mentioned above also faced challenges related to managing multiple backup copies, but that won't be discussed here. A few challenges arose during design and implementation: naive algorithms had an unreasonable upper bound of computational cost; there was significant complexity associated with configurations where the member count varied significantly between physical machines; and most of the complexity of a PAS is related to rebalancing, not initial assignment (which is usually fairly simple). A custom PAS may need to solve several problems simultaneously, such as:
    - Ensuring that each member has a similar number of primary and backup partitions (e.g. each member has the same number of primary and backup partitions)
    - Ensuring that each member carries similar responsibility (e.g. the most heavily loaded member has no more than one partition more than the least loaded)
    - Ensuring that each partition is on the same member as a corresponding local resource (e.g. for applications that use partitioning across message queues, ensuring that each partition is collocated with its corresponding message queue)
    - Ensuring that a given member holds no more than a given number of partitions (e.g. no member has more than 10 partitions)
    - Ensuring that backups are placed far enough away from the primaries (e.g. on a different physical machine or a different blade enclosure)
    - Achieving the above goals while ensuring that partition movement is minimized
    These objectives can be even more complicated when the topology of the cluster is irregular. For example, if multiple cluster members may exist on each physical machine, then clearly the possibility exists that at certain points (e.g. following a member failure), the number of members on each machine may vary, in certain cases significantly so. Consider the case where there are three physical machines, with 3, 3 and 9 members respectively. This introduces complexity since the backups for the 9 members on the largest machine must be spread across the other 6 members (to ensure placement on different physical machines), preventing an even distribution. For any given problem like this, there are usually reasonable compromises available, but the key point is that objectives may conflict under extreme (but not at all unlikely) circumstances.
    The most obvious general-purpose partition assignment algorithm (possibly the only general-purpose one) is to define a scoring function for a given mapping of partitions to members, and then apply that function to each possible permutation, selecting the most optimal permutation. This would result in N! (factorial) evaluations of the scoring function. This is clearly impractical for all but the smallest values of N (e.g. a partition count in the single digits). It's difficult to prove that more efficient general-purpose algorithms don't exist, but the key takeaway is that algorithms will tend to either have exorbitant worst-case performance or may fail to find optimal solutions (or both) -- it is very important to be able to show that worst-case performance is acceptable. This quickly leads to the conclusion that the problem must be further constrained, perhaps by limiting functionality or by using domain-specific optimizations. Unfortunately, it can be very difficult to design these more focused algorithms. In the specific case mentioned, we constrained the solution space to very small clusters (in terms of machine count) with small partition counts and supported exactly two backup copies, and accepted the fact that partition movement could potentially be significant (preferring to solve that issue through brute force). We then used the out-of-the-box PAS implementation as a fallback, delegating to it for configurations that were not supported by our algorithm. Our experience was that the PAS interface is quite usable, but there are intrinsic challenges to designing PAS implementations that should be very carefully evaluated before committing to that approach.

    Read the article

  • Balancing Player vs. Monsters: Level-Up Curves

    - by ashes999
    I've written a fair number of games that have RPG-like "levelling up," where the player gains experience for killing monsters/enemies and eventually reaches a new level, at which point their stats increase. How do you find a balance between player growth, monster strength, and difficulty? The extreme ends of this spectrum are:
    - The player levels up really fast and blows away monsters without much effort
    - Monsters are incredibly strong and, even at low levels, very difficult to beat
    I've also tried the somewhat strange approach of making enemies relative to the player, i.e. an enemy will always be at 50%, 100% or 150% of the player's stats (thus requiring the player to use techniques other than brute strength to succeed). But where's the balance, and how do you find it? Edit: For example, I am expecting to hear things like:
    - Balance high instead of balance low (200 HP and 20 str is easier to balance than 20 HP and 2 str)
    - Look at the easiest vs. hardest monsters, and see what you have in terms of a range

    Read the article

  • January Winnipeg .NET User Group Event

    - by D'Arcy Lussier
    We’ve had some problems with the Winnipeg .NET UG website, but things are getting sorted out and the site should be back up very shortly. In the meantime, here’s info on our January event and how to register. This is also a Microsoft-sponsored event, so we’ll have some great swag to give away. As always, pizza will be provided!
    When: Wednesday, January 26th
    Where: 17th Floor Conference Room, Richardson Building
    Session: Taking your Windows Phone Apps to the Next Level with Tombstoning
    Speaker: Tyler Doerksen, Imaginet
    Unlike previous versions of Windows Mobile, Windows Phone 7 does not allow 3rd-party applications to run in the background. Because of this, your application needs to react to various life-cycle events to provide the user with a seamless experience. Luckily, Silverlight isolated storage has your back. In this session, learn about the app life cycle and what storage patterns you can use to keep your users happy. To register for this event, please visit our registration page here.

    Read the article

  • The White Screen of Death

    - by TATWORTH
    A few days ago I was browsing a particular commercial web site; the site crashed and I encountered the "White Screen of Death". The detailed dump showed me what the site was using: Zend, MySQL, PHP, Mage, Magento. Besides all this detailed information being of use to a hacker, the copyright on Magento cited a date of 2009. Does this mean that out-of-date software was in use? There is a more basic point: on your site, please ensure that fatal errors are trapped and redirected to a page that gives away no information useful to a hacker. I suggest also that you provide a means for an administrator to simulate an error to check the error handling.

    Read the article

  • How can a script detect whether the user is idle?

    - by josinalvo
    I want to check, inside a bash script (*), how much time the user of an X session has been idle. The user himself does not have to be using bash, just X. If the user just moved the mouse, for example, a good answer would be "idle for 0 seconds". If he has not touched the computer in 5 minutes, a good answer would be "idle for 300 seconds". The reason not to use xautolock straight away is to be able to implement some more complex behaviour. For example, if the user is idle for 10 minutes, try to suspend; if he is idle for 5 more minutes, shut off (I know it sounds odd, but suspend does not always work here...). (*) Or it could be another language.
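
    The usual building block for this is xprintidle, which prints the X session's idle time in milliseconds. A rough sketch of the suspend-then-shutdown behaviour described above, with the thresholds, the polling loop and the power commands all being assumptions (older releases would use pm-suspend, newer ones systemctl suspend):

        #!/bin/bash
        # needs the xprintidle package; run from the user's X session (e.g. started at login),
        # with the two power commands allowed via sudoers or a polkit rule
        while sleep 60; do
            idle_s=$(( $(xprintidle) / 1000 ))
            if   [ "$idle_s" -ge 900 ]; then
                sudo poweroff           # idle 15 minutes in total: give up and shut down
            elif [ "$idle_s" -ge 600 ]; then
                sudo pm-suspend         # idle 10 minutes: try suspending first
            fi
        done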

    Read the article

  • Removing CrossOver Games from Unity Start panel

    - by Cássio Amary
    I'm new to Ubuntu and recently installed the CrossOver Games trial on Ubuntu 12.04. I didn't like it and tried to uninstall it, but it simply refused to go away, so I just went ahead and deleted its folders in the file system. The problem is that now there are 4 icons left on the Unity Start Panel under "installed applications", and I don't know how to remove them. Actually, could anyone tell me how to delete any shortcut that is created on the Start Panel? Is there a way to do it? Thank you very much!
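
    Those Dash entries come from leftover .desktop launcher files rather than from the program folders that were deleted, so removing the stray launchers removes the icons. A sketch; the exact file names are a guess, so list first and delete only what the listing shows:

        # per-user launchers (where CrossOver usually puts its menu entries)
        ls ~/.local/share/applications/ | grep -i cross

        # system-wide launchers
        ls /usr/share/applications/ | grep -i cross

        # then remove the matching files, adjusting the pattern to what you actually see
        rm ~/.local/share/applications/*[Cc]ross*.desktop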

    Read the article

  • How to access localhost remotely - Wordpress?

    - by Marcappuccino
    I have installed a LAMP stack (sudo tasksel install lamp-server) and WordPress (sudo apt-get install wordpress), but now I would like to access my server remotely, to use it as a home file server. For example, my public IP is 82.16.xxx.xxx. Opening it in Firefox with the suffix :8080 brings up my router config page...? Do I have to set up port forwarding? BTW, accessing via localhost/wordpress/ works fine; I would just like to access my files while away from home.
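
    Yes: hitting the public IP only reaches the router, so the router has to forward an external port to the machine running Apache. A rough outline, where the LAN address is a placeholder:

        # find the server's LAN address
        ip addr show | grep "inet "

        # sanity-check from another machine on the same network before touching the router
        curl -I http://192.168.1.50/wordpress/

    Then, on the router's port-forwarding page, map external port 8080 (or 80) to port 80 on that LAN address. For use as a file server from outside the home, forwarding SSH and using SFTP is arguably a safer bet than exposing the web server directly.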

    Read the article

  • Are Modern Computers Still Vulnerable to Damage via Magnets?

    - by Jason Fitzpatrick
    It’s such an oft-repeated warning that it’s firmly embedded in nerd lore: bring a magnet anywhere near your precious computer and suffer the dire consequences. But is it true? Is your computer one run-in with a novelty magnet away from digital death? Today’s Question & Answer session comes to us courtesy of SuperUser, a subdivision of Stack Exchange, a community-driven grouping of Q&A web sites.

    Read the article

  • Disqus 2012 comments NOT being indexed by Google

    - by Buckers
    We run a high-traffic website at http://www.onedirection.net and we've been using Disqus throughout this year, initially to great effect. We accepted the upgrade to Disqus 2012 back in June, loving the improved user experience and the better community feel - albeit back to an iframe again. The fact that we were specifically told the comments are now being indexed by Google was great, and the dynamic nature of the iframe suited our site (all our pages are cached, so by using Disqus the comments are updated straight away). However, it seems that the Disqus 2012 comments are not being indexed, and we've noticed an obvious fall in traffic over the last few months. Initially we didn't put this down to Disqus and focused on other issues (Google algorithm updates etc.). But we're quickly coming around to the view that our pages now contain less indexable text, and we are getting less traffic because of this. We've tried emailing Disqus directly, but they're very slow and don't seem keen to help. Any thoughts on this?

    Read the article

  • Windows gets progressively slower over time, why doesn't Ubuntu?

    - by William
    I, and many other previous Windows users, notice that the computer seems to get progressively slower over time. I bought a LeapFrog Crammer only to find it installed a process that sat there waiting for me to plug the Crammer in so it could run the software. It took up three percent of the CPU, twenty-four hours a day, seven days a week! This is one of the main reasons I left Windows. But Ubuntu doesn't seem to slow down over time at all. Does Ubuntu allow programs to install background processes like the LeapFrog Crammer did, to sit there like a leech and suck away at resources? Could someone explain why Windows tends to get slower over time, and whether Ubuntu is vulnerable to this too? Thanks for any help; this is puzzling me.

    Read the article

  • Where does the discrepancy between \# in PS1 and n in !n come from?

    - by Cbhihe
    Something has been gnawing at me for a while now, and I can't seem to find a relevant answer either in man pages or using your 'Don't be evil' search engine. My .bashrc has the following:

        shopt -s histappend
        HISTSIZE=100
        HISTFILESIZE=0    # 200 previous value

    Setting HISTFILESIZE to 0 allows me to start with a clean history slate with each new term window. I find it practical in conjunction with using a prompt that contains \#, because when visualizing a previous command before recalling it with !n or !-p, one can just do:

        $ history | more

    to see its relevant "n" value. In my case, usually the result of:

        $ \history | tail -1 | awk '{print $1}'    # (I know this is overkill, don't flame me)

    equals the expanded value of \# in PS1 minus 1, which is how I like it to be at all times. But then, sometimes not. At times the expanded value of \# sort of "runs away": it is incremented in such a manner that it becomes greater than $(( $(\history | tail -1 | awk '{print $1}') + 1 )). Any pointers, anyone?
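
    For what it's worth, \# and the history numbering are defined differently: \# is the per-session command count, while !n (and the left-hand column of history) uses the history number, which is what \! expands to. The two drift apart whenever a command is executed but not added to the history list (HISTCONTROL settings such as ignoredups/ignorespace, history -c, and so on). A minimal sketch of keeping the prompt in step with !n:

        # use the history number rather than the command number in the prompt
        PS1='\! \$ '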

    Read the article

  • Developing GLSL Shaders?

    - by skln
    I want to create shaders, but I need a tool to create them and see the visual result before I put them into my game, so I can determine whether a problem is in my game or in the shader I created. I've looked at some tools like RenderMonkey and OpenGL Shader Designer. From what I recall of RenderMonkey, it had a way to easily define your own attributes (now declared as "in" in GLSL 330 vertex shaders), though I can't remember to what extent. Shader Designer requires a plugin that I didn't even bother to look at creating, because it's an external process and plugin. Are there any tools out there that support a scripting language, where I could easily provide specific input such as float movement = sin(elapsedTime()); and then declare in float movement; in the vertex shader? It'd be cool if anyone could share how they develop shaders - whether they just code away and then plug the shader into their game, hoping to get the result they wanted.

    Read the article
