Search Results

Search found 12144 results on 486 pages for 'the old toxophilist'.


  • Dual-boot installing Ubuntu 12.04 with Windows 7 (64-bit) on a non-UEFI system

    - by Randnum
    I cannot seem to install the correct boot loader for a non-UEFI firmware system. I'm trying to install Ubuntu 12.04 and Windows 7 (64-bit), which are both technically compatible with GPT, but Windows only boots from GPT when the firmware is UEFI. My system uses the old BIOS firmware and does not support UEFI. Therefore, whenever I finish my Ubuntu install and try to install Windows, I get a "cannot install to GPT partition type" error. Even if I use GParted to create a dedicated NTFS partition for Windows, Windows can't handle the GPT partition style because the machine has no UEFI. But my Ubuntu install always creates a GPT partition table during installation and never asks whether I want the old BIOS-style MBR instead. How do I resolve this? Both OSes install fine on their own; the problem is that whichever OS I install second doesn't recognize the other's partitions and tries to write its own on top of them. I've tried both installation orders and always run into the same problem. Since there is no way to make Windows recognize GPT without upgrading my motherboard, how do I tell Ubuntu to use the old BIOS MBR on install? Do I have to download a special Ubuntu with a specific GRUB version, or should I manually configure my partitions somehow to force them not to use GPT? Thank you.
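
    A minimal sketch of one way to force the old partition style, assuming the target disk is /dev/sda (this destroys the existing partition table, so back everything up first):

        # from the Ubuntu live session, before running the installer
        sudo parted /dev/sda mklabel msdos   # write an MBR (msdos) label, replacing GPT
        sudo parted /dev/sda print           # confirm "Partition Table: msdos"

    With an msdos label already on the disk, the installer's manual ("Something else") partitioning should keep MBR rather than recreating GPT, and the Windows installer will then accept the disk.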

  • How to help FGLRX detect a device

    - by user113416
    I have an HD 4850 card and Ubuntu 12.10, and I installed the legacy drivers using the makson96 PPA. The issue is that fglrx cannot detect my device, and the VESA BIOS gets loaded instead. I had the same problem on Ubuntu 11.10 and 12.04. I want to manually help fglrx find a matching device, as it should do on its own. It is interesting that fglrx searches for a device on the PCI:0@1:0:1 bus, while xorg.conf indicates a different bus:

        Section "Device"
            Identifier "aticonfig-Device[0]-0"
            Driver "fglrx"
            BusID "PCI:1:0:0"
        EndSection

    fglrxinfo reports:

        display: :0.0  screen: 0
        OpenGL vendor string: Advanced Micro Devices, Inc.
        OpenGL renderer string: ATI Radeon HD 4800 Series
        OpenGL version string: 3.3.11653 Compatibility Profile Context

    Here is part of my Xorg log:

        [ 3.846] (II) VESA: driver for VESA chipsets: vesa
        [ 3.846] (II) FBDEV: driver for framebuffer: fbdev
        [ 3.846] (++) using VT number 7
        [ 3.846] (WW) Falling back to old probe method for fglrx
        [ 3.883] (II) Loading PCS database from /etc/ati/amdpcsdb
        [ 3.883] (--) Assigning device section with no busID to primary device
        [ 3.883] (--) Chipset Supported AMD Graphics Processor (0x9442) found
        [ 3.884] (WW) fglrx: No matching Device section for instance (BusID PCI:0@1:0:1) found
        [ 3.884] (II) AMD Video driver is running on a device belonging to a group targeted for this release
        [ 3.884] (II) AMD Video driver is signed
        [ 3.884] (II) fglrx(0): pEnt->device->identifier=0xb7791d8f
        [ 3.884] (WW) Falling back to old probe method for vesa
        [ 3.884] (WW) Falling back to old probe method for fbdev

    Thanks in advance.
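
    One experiment worth trying (a sketch, assuming the legacy driver installed its aticonfig utility) is letting the driver regenerate the Device section itself, so the BusID it writes matches the bus it actually probes:

        sudo cp /etc/X11/xorg.conf /etc/X11/xorg.conf.bak   # keep a backup
        sudo aticonfig --initial -f                         # -f overwrites the existing file

    If fglrx still probes PCI:0@1:0:1 afterwards, hand-editing the BusID line to match the probed address is the other obvious test.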

  • Website migration is not working for all computers

    - by Shadowizoo
    We have two servers on the same network, Server-A and Server-B. Server-A (Windows Server 2003, IIS 6.0) hosted our website until a few months ago (about 7-8 months). We bought a new server, Server-B (Windows Server 2008, IIS 7.5), and copied the old website onto the new machine. On our router, we forward port 80 to Server-B. Server-A is still on because we need to access some old data through the old website; I would like to reach it via its internal IP (192.168.1.xxx/mywebsite). On my Windows 7 computer, if I type www.example.com or example.com (without www.), I'm redirected to Server-B and I can see our new interface. On some Windows Vista computers, example.com (without www.) reaches Server-B, but www.example.com still lands on Server-A. In our website code (on Server-B), we sometimes redirect with a "www." prefix, which causes errors, because we end up requesting a page that exists on Server-B but not on Server-A. I compared two computers with Vista Home on them and the Internet Options look the same. I cannot figure out why this is happening.
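
    Since the two Vista machines resolve www.example.com differently, one quick check on an affected machine is whether a stale DNS cache entry or a hosts-file override is pinning www to Server-A (a sketch; paths are the Windows defaults):

        ipconfig /displaydns | findstr /i example.com
        ipconfig /flushdns
        rem look for a manual www.example.com entry:
        type C:\Windows\System32\drivers\etc\hosts

    If flushing the cache fixes it, the likely culprit is an old DNS record that some machines cached before the cut-over and others did not.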

  • Permissions issue: how can Apache access files in my Home directory?

    - by richzilla
    I know file permissions have been covered on here before, but I'm struggling to get my head around the concept for my scenario. I created the files on an old Ubuntu installation. I've copied the files onto my new Ubuntu installation and put them in my webroot. When I attempt to run the files (they're PHP files) I get an error relating to permissions. In an attempt to fix this, I assumed they must still be owned by the previous owner, so I ran chown -R on the directory, with my username as an argument, to take ownership of all the files in the directory. It should be noted that the usernames on the old and new Ubuntu installations were the same. When I attempt to run the files again, same problem: a 500 error due to permissions problems. Can anyone tell me what other steps I should take? The webroot for my Apache installation is inside my home folder. If I create new files in my webroot, they work as expected; it's only the old files that are causing the problem.
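
    A sketch of the usual checklist for serving files from a home directory, assuming a webroot like /home/yourusername/public_html and Apache running as www-data (adjust names to your setup):

        # Apache needs execute (search) permission on every directory along the path
        chmod o+x /home/yourusername
        chmod o+x /home/yourusername/public_html
        # the files themselves need to be readable
        chmod o+r /home/yourusername/public_html/old-script.php
        # compare the old files against a new, working one
        ls -l /home/yourusername/public_html

    Since the new files work and the old ones don't, diffing the ls -l output between a working file and a broken one usually points straight at the offending owner, group, or mode bits.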

  • How to Downgrade Razor 3 and Fix the Issue that CSHTML Does Not Work in VS 2010/2012

    - by Anirudha
    Originally posted on: http://geekswithblogs.net/anirugu/archive/2013/11/04/how-to-downgrade-razor-3-and-fix-the-issue-that.aspx

    A few days ago I migrated a project to MVC 4, and suddenly saw that the project's .cshtml files no longer worked. The problem happened because the migration left my project on Razor 3 RC, which VS 2012 doesn't even support yet (remember that the VS team will ship support in VS 2012 Update 4). Razor 3 is not what MVC 4 needs; MVC 4 uses the older Razor 2.

    So how do we fix the problem? Since VS 2012 Update 4 is still in development, and MVC 3 support exists in both old versions of Visual Studio (2010 and 2012), it is better to migrate our Razor back to the old version so we can use the project in VS 2010 or 2012. If your project is on Razor 3 and syntax highlighting doesn't work for you, I suggest you try this NuGet package: https://www.nuget.org/packages/UpgradeMvc3ToMvc4

    Remember that this may not succeed right away. What you need to do is delete the packages folder in your project, then open packages.config and remove all the package entries. Now run this command:

        PM> Install-Package UpgradeMvc3ToMvc4

    If this fails, see what causes the error in the console, simply remove the offending reference, and try again. Now run the project and it should work.

    Afterwards you will see that the WebGrease DLL has a version-number issue. Simply update it to version 1.5.2, and your project is ready to run on .NET 4. If you do bin deployment, you don't need MVC 4 installed on the server either. Remember that MVC 5 is based on .NET 4.5, which simply means you can't run it in VS 2010; and until VS 2012 Update 4, MVC 5 .cshtml pages will behave like plain HTML pages (no syntax highlighting or IntelliSense). Thanks for reading my post.
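
    If you prefer to pin the versions by hand instead of relying on the package above, the end state you are aiming for is a packages.config on the Razor 2 / MVC 4 line. A sketch (the version numbers here are illustrative; check NuGet for the exact RTM builds):

        <?xml version="1.0" encoding="utf-8"?>
        <packages>
          <package id="Microsoft.AspNet.Mvc" version="4.0.20710.0" targetFramework="net40" />
          <package id="Microsoft.AspNet.Razor" version="2.0.20715.0" targetFramework="net40" />
          <package id="Microsoft.AspNet.WebPages" version="2.0.20710.0" targetFramework="net40" />
        </packages>

    After editing, delete the bin and packages folders and let NuGet restore, so no stray Razor 3 assembly is left behind.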

  • Drive reporting incorrect free space

    - by Oli
    So I swapped my shiny SATA SSD for an even shinier PCI-E SSD. I run my core OS on the SSD because it's silly-fast. I had done the same on my old SSD, so I created a new ext4 partition and just dd'ed the data across (sorry, I don't remember the exact command I ran any more), and after reinstalling GRUB I booted onto the PCI-E SSD. At first glance everything had worked perfectly and things were running faster than ever. But then I noticed the free disk space on the new, larger drive: it was almost exactly the same as it was on the other disk... a disk that was half its size. So it looks as if I copied the files across incorrectly and brought some of the filesystem metadata along with them. Tools like du and Disk Usage Analyzer come back with the correct figures; things that look at the partition (and not the files) still think the drive is 120GB. I've been using this drive for a week now, so it's way out of sync with the old SSD, and dumping the data and starting again isn't a job that fills me with joy. Two questions:

    1. Is there a way to fix my filesystem so it knows what it's really sitting on? fsck, e2fsck, and badblocks all seem able to scan it without finding a problem.

    2. If I do plug my old SSD back in, copy the data off the PCI-E drive onto it, and then copy it back onto a fresh filesystem (i.e. juggle the data around), what's the best way of doing that? I obviously want to keep all the permissions and softlinks where they are.
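
    For what it's worth, dd copies the old superblock verbatim, so the filesystem honestly believes it is still 120GB; growing it to fill the new partition is exactly what resize2fs does. A sketch, assuming the copied partition is /dev/sdb1 (run from a live session, or note that ext4 also supports growing while mounted):

        sudo umount /dev/sdb1      # skip if doing an online grow of the running root
        sudo e2fsck -f /dev/sdb1   # resize2fs insists on a clean check before an offline resize
        sudo resize2fs /dev/sdb1   # with no size argument, it grows to fill the partition

    No data shuffling between drives should be needed if this works; df should report the full partition size afterwards.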

  • I bought a new battery and Ubuntu 12.10 isn't recognizing it

    - by user134403
    I inherited my mom's old laptop and installed Ubuntu over the top of Vista, so now it is purely Ubuntu. Her old battery only lasted about 10 minutes, so the laptop had to be plugged in all the time, which I didn't like. So I bought a battery and inserted it when it came. The laptop doesn't see it at all: no battery icon whatsoever. I can pull it out, put the old battery in, and it recognizes that one. The new battery comes with a program (a .exe) to update the BIOS to accommodate the new battery, but I don't have Windows anymore, and Wine gives me an error when I try to run it, so I am at a loss for ideas. I thought of running a virtual machine of Windows to install it, or running a Windows To Go drive (a new feature of Windows 8), but I don't think those are good ideas, as they may not work or may permanently ruin something. I am not an extremely experienced user of Ubuntu/Linux, so I don't know where to go from here. I'm running Ubuntu 12.10 on a Sony Vaio VGN-NR, if that helps. Also note that I just installed Ubuntu the other day and have nothing important on the PC yet, so I am not afraid to reinstall Ubuntu if it may help. Thanks!
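
    Before blaming the BIOS, it can be worth confirming that the kernel really sees nothing at all. A quick check (the paths below are the usual ones, but the battery may be named something other than BAT0 on a Vaio):

        ls /sys/class/power_supply/                            # look for a BAT0/BAT1 entry with the new battery in
        upower --enumerate                                     # the same information via upower
        cat /sys/class/power_supply/BAT0/uevent 2>/dev/null    # details, if an entry exists

    If no battery device appears even here, the firmware genuinely isn't reporting it, and a vendor BIOS update (via a bootable USB stick, if the vendor offers a non-.exe route) is the remaining option.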

  • Serialization problem

    - by Falcon eyes
    Hi everybody, I have a problem and want help. I created a phonebook application and it worked fine. After a while I decided to make an upgrade, so I started from scratch (I didn't inherit from my old class) and succeeded. Now I want to migrate my contacts from the old application to the new one, so I made an adapter class for this purpose in my new application, with the following code:

        using System;
        using System.IO;
        using System.Windows.Forms;
        using System.Runtime.Serialization;
        using System.Runtime.Serialization.Formatters.Binary;

        namespace PhoneBook
        {
            class Adapter
            {
                PhoneRecord PhRecord;   // the new application's record type
                CTeleRecord TelRecord;  // the old application's record type
                string fileName;

                public Adapter(string filename)
                {
                    fileName = filename;
                }

                public void convert()
                {
                    TelRecord = new CTeleRecord();
                    FileStream OpFileSt = new FileStream(fileName, FileMode.Open, FileAccess.Read);
                    BinaryFormatter readBin = new BinaryFormatter();
                    for (;;)
                    {
                        try
                        {
                            TelRecord.ResetTheObject();
                            TelRecord = (CTeleRecord)readBin.Deserialize(OpFileSt);
                            // a fresh record each pass; reusing one instance would make
                            // every list entry point at the same object
                            PhRecord = new PhoneRecord();
                            PhRecord.SetName = TelRecord.GetName;
                            PhRecord.SetHomeNumber = TelRecord.GetHomeNumber;
                            PhRecord.SetMobileNumber = TelRecord.GetMobileNumber;
                            PhRecord.SetWorkNumber = TelRecord.GetWorkNumber;
                            PhRecord.SetSpecialNumber = TelRecord.GetSpecialNumber;
                            PhRecord.SetEmail = TelRecord.GetEmail;
                            PhRecord.SetNotes = TelRecord.GetNotes;
                            PhBookContainer.phBookItems.Add(PhRecord);
                        }
                        catch (IOException xxx)
                        {
                            MessageBox.Show(xxx.Message);
                        }
                        catch (ArgumentException tt)
                        {
                            MessageBox.Show(tt.Message);
                        }
                        catch (SerializationException x)   // end of file is reached
                        {
                            MessageBox.Show(x.Message + x.Source);
                            break;
                        }
                    }
                    OpFileSt.Close();
                    PhBookContainer.Save(@"d:\MyPhBook.pbf");
                }
            }
        }

    The problem is that when I try to read the file created by my old application, I get a SerializationException with the message "Unable to find assembly 'PhoneBook, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null'", and the source of the exception is mscorlib. When I read the same file with the old application (which produced the file), there is no problem, and I don't know what to do to make my adapter class work. Can somebody help, please?
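
    Background on the error: a BinaryFormatter stream records the full assembly-qualified name of every serialized type, so a different executable cannot resolve the old CTeleRecord on its own. The standard escape hatch is a SerializationBinder that maps the old names onto types in the new assembly. A sketch (class names assumed from the question; the new assembly must contain a compatible CTeleRecord with the same serialized fields):

        using System;
        using System.Runtime.Serialization;

        // Redirects every type from the old "PhoneBook" assembly to the
        // type of the same name in the currently executing assembly.
        class OldPhoneBookBinder : SerializationBinder
        {
            public override Type BindToType(string assemblyName, string typeName)
            {
                // e.g. assemblyName = "PhoneBook, Version=1.0.0.0, ...",
                //      typeName     = "PhoneBook.CTeleRecord"
                return Type.GetType(typeName);   // resolve against this assembly instead
            }
        }

    and wire it up before deserializing:

        BinaryFormatter readBin = new BinaryFormatter();
        readBin.Binder = new OldPhoneBookBinder();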

  • Double Buffering for Game objects, what's a nice clean generic C++ way?

    - by Gary
    This is in C++. I'm starting from scratch writing a game engine, for fun and for learning from the ground up. One of the ideas I want to implement is to have game-object state (a struct) be double-buffered. For instance, subsystems can update the new game-object data while a render thread renders from the old data, because the game object guarantees there is a consistent state stored within it (the data from last time). After rendering the old state and updating the new state are both finished, I can swap buffers and do it again. The question is: what's a good forward-looking, generic, OOP way to expose this to my classes while hiding implementation details as much as possible? I would like to know your thoughts and considerations. I was thinking operator overloading could be used, but how do I overload assignment for a templated class's member within my buffer class? For instance, I think this is an example of what I want:

        doublebuffer<Vector3> data;
        data.x = 5;     // would write to the member x within the new buffer
        int a = data.x; // would read from the old buffer's x member
        data.x += 1;    // I guess this shouldn't be allowed

    If this is possible, I could choose to enable or disable double-buffering of structs without changing much code. This is what I was considering:

        template <class T>
        class doublebuffer {
            T T1;
            T T2;
            T* current = &T1;
            T* old = &T2;
        public:
            doublebuffer();
            ~doublebuffer();
            void swap();
            // operator=() ? ...
        };

    and a game object would look like this:

        struct MyObjectData {
            int x;
            float afloat;
        };

        class MyObject : public Node {
            doublebuffer<MyObjectData> data;
            // functions...
        };

    What I have right now is functions that return pointers to the old and new buffers, and I guess any classes that use them have to be aware of this. Is there a better way?
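
    A minimal sketch of one common shape for this (C++11 for the in-class initializer): expose the two sides explicitly as read() and write() accessors, rather than trying to make plain member access magically pick a buffer, which operator overloading cannot do for arbitrary member names:

        template <class T>
        class doublebuffer {
            T buffers[2];
            int front = 0;   // index of the buffer readers see
        public:
            const T& read() const { return buffers[front]; }      // last frame's state
            T& write()            { return buffers[front ^ 1]; }  // state being built
            void swap()           { front ^= 1; }                 // once per frame, after update + render
        };

        // usage:
        //   doublebuffer<MyObjectData> data;
        //   data.write().x = data.read().x + 1;
        //   data.swap();

    Note that write() starts each frame holding whatever was there two frames ago, so an update pass typically begins with write() = read(); to carry the state forward.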

  • How to implement a Grails server-side-triggered dialog, or how to break out of an update region after AJAX

    - by werner5471
    In Grails, I use the mechanism below to implement what I'd call a conditional server-side-triggered dialog. When a form is submitted, the data must first be processed by a controller. Based on the outcome, there must either be (a) a modal Yes/No confirmation in front of the "old" screen, or (b) a redirect to a new controller/view replacing the "old" screen (no confirmation required). So here's my current approach. In the originating view, I have a

        <g:formRemote name="requestForm" url="[controller:'test', action:'testRequest']" update="dummyRegion">

    and a

        <span id="dummyRegion">

    which is hidden by CSS. When the form is submitted, the test controller checks whether a confirmation is necessary and, if so, renders a template with a YUI-based dialog including Yes/No buttons in front of the old screen (which works fine, because the dialog "comes from" the dummyRegion rather than overwriting the page). When Yes is pressed, the right other controller and action are called and the old screen is replaced; if No is pressed, the dialog is cancelled and the "old" screen is shown again without the dialog. Works well up to here.

    When the form is submitted and the test controller sees that NO confirmation is necessary, I would normally redirect directly to the right other controller and action. But the problem is that the corresponding view of that controller does not appear, because it is rendered in the invisible dummyRegion as well. So I currently render a GSP template containing a JavaScript redirect instead. However, a JavaScript redirect is often not allowed by the browser, and I think it's not a clean solution.

    So (finally ;-) my question is: how do I get a controller redirect to cause the corresponding view to "break out" of my AJAX dummyRegion, replacing the whole screen again? Or do you have a better approach for what I have in mind? Please note that I cannot check on the client side whether the confirmation is necessary; there has to be a server call. Also, I'd like to avoid refreshing the whole page just for the confirmation dialog to pop up (which would also be possible without AJAX). Thanks for any hints!
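
    One pattern that avoids emitting a JavaScript redirect from inside the update region: have the controller answer the AJAX call with a tiny marker payload, and let an onSuccess handler on the formRemote decide whether to fill the dummyRegion or navigate the whole window. A sketch (the names and the Grails 1.x-style JSON builder here are assumptions, not tested code):

        // controller
        def testRequest = {
            if (needsConfirmation(params)) {
                render(template: 'confirmDialog')   // dialog HTML lands in dummyRegion as before
            } else {
                render(contentType: 'text/json') {
                    redirect = createLink(controller: 'other', action: 'show')
                }
            }
        }

        // view: <g:formRemote ... onSuccess="handleResponse(e)">
        function handleResponse(resp) {
            var data = null;
            try { data = eval('(' + resp.responseText + ')'); } catch (err) {}
            if (data && data.redirect) {
                window.location.href = data.redirect;   // full-page navigation, breaks out of the region
            } else {
                document.getElementById('dummyRegion').innerHTML = resp.responseText;
            }
        }

    Because the navigation happens at the top window in direct response to the server's answer, browsers treat it like a normal location change rather than a suspect script redirect.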

  • From the Tips Box: Pre-installation Prep Work Makes Service Pack Upgrades Smoother

    - by Jason Fitzpatrick
    Last month Microsoft rolled out Windows 7 Service Pack 1 and, like many SP releases, quite a few people are hanging back to see what happens. If you want to update but still err on the side of caution, reader Ron Troy offers a step-by-step guide.

    Ron's cautious approach does an excellent job of minimizing the number of issues that could crop up in a Service Pack upgrade, by thoroughly updating your driver sets and clearing out old junk before you roll out the update. Read on to see how he does it:

    Just wanted to pass on a suggestion for people worried about installing Service Packs. I came up with a 'method' a couple of years back that seems to work well.

    1. Run Windows / Microsoft Update to get all updates EXCEPT the Service Pack.
    2. Use Secunia PSI to find any other updates you need.
    3. Use CCleaner or the Windows disk cleanup tools to get rid of all the old garbage out there. Make sure that you include old system updates.
    4. Obviously, back up anything you really care about. An image backup can be real nice to have if things go wrong.
    5. Download the correct SP version from Microsoft.com; do not use Windows / Microsoft Update to get it. Make sure you have the 64-bit version if that's what you have installed on your PC.
    6. Make sure that EVERYTHING that affects the OS is up to date. That includes all sorts of drivers, starting with video and audio. And if you have an Intel chipset, use the Intel Driver Utility to update those drivers; it's very quick and easy. For the video and audio drivers, some can be updated by Intel, some by utilities on the vendor web sites, and some you just have to figure out yourself. But don't be lazy here; old drivers and Windows Service Packs are a poor mix.
    7. If you have 3rd-party software, check to see if there are any updates for it. They might not say that they are for the Service Pack, but you cut your risk of things not working if you do this.
    8. Shut off the antivirus software (especially if 3rd-party).
    9. Reboot, hitting F8 to get the Safe Mode menu. Choose Safe Mode with Networking.
    10. Log into the Administrator account to ensure that you have the right to install the SP.
    11. Run the SP. It won't be very fancy this way. Maybe 45 minutes later it will reboot and then finish configuring itself, finally letting you log in.

    Total installation time on most of my PCs was about 1 hour, but that followed hours of preparation on each.

    On a separate note, I recently got on the Nvidia web site, and their utility told me I had a new driver available for my GeForce 8600M GS. This laptop had come with Vista and now has Win 7 SP1. I had a big surprise from this driver update: the Windows Experience score on the graphics side went way up. Kudos to Nvidia for doing a driver update that actually helps day-to-day usage. And unlike ATI's updates (which I need for my AGP-based system), this update was fairly quick and very easy. Also, Nvidia drivers have never, as I can recall, given me BSODs, many of which I've gotten from ATI (TDR errors).

  • Make the Time

    - by WonderOfItAll
    Took the little one to the pool tonight for swim lessons. Okay, okay, they're not really lessons so much as "Hey, here's a few bucks, let me rent out a small section of your pool to swim around with my little one."

    Saw a dad at the pool: Bluetooth on, iPad in hand, and a two-year-old somewhere around there. Saw a mom at the pool, arguing with her five-year-old to NOT take a shower after swimming: Bluetooth on, iPad in hand, work laptop open on the stadium seats. Her reasoning for not wanting the child to shower: "Look, I have to get this stuff to the office by 6:30, we don't have time for you to shower. Let's go." Wait, isn't the whole point of this little experience called Mommy and Me (or, as in my case, Daddy and Me) that Mommy/Daddy is supposed to spend time with the little one? Not with the Bluetooth. Not with the work laptop.

    Dad (yeah, the same dad from earlier), in the pool: Bluetooth off (it's not waterproof, or I'm sure he would've had it on), two-year-old in hand, and iPad somewhere put away. Getting frustrated with the kid because he won't 'perform' on command. Here's a little exchange:

    Kid: "I don't wanna get in the water."
    Dad: "Well, we're here for 30 minutes, get in the water."
    Kid: "No, don't wanna."
    Dad: "Fine, I'm getting in." And, true to his word, in he goes, off to swim.
    Kid: Crying.
    Dad: "Well, c'mon."
    Kid: Walking to the stands.
    Dad: Ignoring kid.
    Kid: At the stands.
    Dad: Out of the pool, drying off. Frustrated. Grabs bag, grabs kid, leaves.

    How sad. It really seems like I am living in a generation of parents who view their children as one big scheduled distraction after another. It's almost like the dad was saying, "Look, little two-year-old boy, I have a busy schedule. Right now my Outlook calendar tells me I have 30 minutes to spend with you, so let's go, kid: PERFORM, because I have the time."

    Really? Can someone please tell me when the hell this happened? When did spending time with your kid, spending time with your family, spending time with your spouse become a distraction? I've seen people at work all day tweeting throughout the day, checked in with Foursquare, IM up and running constantly so they can 'stay in touch', only to see these same folks come home and be irritated because their kids or their spouse want to connect with them. I've seen these very same people leave the house, go to the corner bar/store/you-name-the-place to be 'alone', only to find them there, plugged in, tweeting away, etc., etc., etc.

    I LOVE technology. I love working with technology. But I also know that I am a human being, a person who, by very definition, is a social being. I need social interactions and contact; and no, I'm not talking about the Social Graph kind of connections, I'm talking about the kind that involve eye-to-eye contact and human presence.

    A recent study found that the number one complaint of kids is that they feel they have to compete with technology for their parents' time and attention. The number one wish from high school kids? That their parents would turn off the computer/TV/cell phone at dinner. This, coming from high school kids. Shouldn't that tell you a whole helluva lot?

    So do yourself a favor tomorrow. Plug into technology all day. Throw yourself into it. Be passionate about what you do. Then, when you walk through the door to your family, turn it all off for 30 minutes and be there with your loved ones. If you can manage to play Angry Birds, I'm sure you can handle being disconnected for 30 minutes. Make the time.

  • MySQL Server 5.6 defaults changes

    - by user12626240
    We're improving the MySQL Server defaults, as announced by Tomas Ulin at MySQL Connect. Here's what we're changing (old default -> new default):

        back_log: 50 -> 50 + (max_connections / 5), capped at 900
        binlog_checksum: off -> CRC32 (new variable in 5.6)
        binlog_row_event_max_size: 1k -> 8k
        flush_time: 1800 -> 0 on Windows (was already 0 on other platforms)
        host_cache_size: 128 -> 128, + 1 for each of the first 500 max_connections, + 1 for every 20 max_connections over 500, capped at 2000 (new variable in 5.6)
        innodb_autoextend_increment: 8 -> 64 (now affects *.ibd files; 64 means 64 megabytes)
        innodb_buffer_pool_instances: 0 -> 8 (on 32-bit Windows only, if innodb_buffer_pool_size is greater than 1300M, the default is innodb_buffer_pool_size / 128M)
        innodb_concurrency_tickets: 500 -> 5000
        innodb_file_per_table: off -> on
        innodb_log_file_size: 5M -> 48M (InnoDB will always change the size to match the my.cnf value; also see innodb_log_compressed_pages and binlog_row_image)
        innodb_old_blocks_time: 0 -> 1000 (1 second)
        innodb_open_files: 300 -> 300, or if innodb_file_per_table is ON, the higher of table_open_cache or 300
        innodb_purge_batch_size: 20 -> 300
        innodb_purge_threads: 0 -> 1
        innodb_stats_on_metadata: on -> off
        join_buffer_size: 128k -> 256k
        max_allowed_packet: 1M -> 4M
        max_connect_errors: 10 -> 100
        open_files_limit: 0 -> 5000 (see note 1)
        query_cache_size: 0 -> 1M
        query_cache_type: on/1 -> off/0
        sort_buffer_size: 2M -> 256k
        sql_mode: none -> NO_ENGINE_SUBSTITUTION (see a later post about the default my.cnf for STRICT_TRANS_TABLES)
        sync_master_info: 0 -> 10000 (recommend: master_info_repository=table)
        sync_relay_log: 0 -> 10000
        sync_relay_log_info: 0 -> 10000 (recommend: relay_log_info_repository=table; also see Replication Relay and Status Logs)
        table_definition_cache: 400 -> 400 + table_open_cache / 2, capped at 2000
        table_open_cache: 400 -> 2000 (also see table_open_cache_instances)
        thread_cache_size: 0 -> 8 + max_connections / 100, capped at 100

    Note 1: in 5.5 there was already a rule to make open_files_limit 10 + max_connections + table_cache_size * 2 if that was higher than the user-specified value. 5.6 uses the higher of that and (5000, or whatever you specify).

    We are also adding a new default my.cnf file and guided instructions on the key settings to adjust. More on this in a later post. We're also providing a page with suggestions for settings to improve backwards compatibility. The old example files like my-huge.cnf are obsolete. Some of the improvements are present from 5.6.6 and the rest are coming. These are ideas, and until they are in an official GA release, they are subject to change.

    As part of this work I reviewed every old server setting, plus many hundreds of emails of feedback and testing results from inside and outside Oracle's MySQL Support team, and the many excellent blog entries and comments from others over the years, including from many MySQL gurus out there, like Baron, Sheeri, Ronald, Schlomi, Giuseppe and Mark Callaghan. With these changes we're trying to make it easier to set up the server by adjusting only a few settings that will cause others to be set. This happens only at server startup, and only applies to variables where you haven't set a value. You'll see a similar approach used for the Performance Schema. The gurus don't need this, but for many newcomers the defaults will be very useful.

    Possibly the most unusual change is the way we vary the setting of innodb_buffer_pool_instances for 32-bit Windows. This is because we've found that DLLs with specified load addresses often fragment the limited four-gigabyte 32-bit address space and make it impossible to allocate more than about 1300 megabytes of contiguous address space for the InnoDB buffer pool. The smaller requests for many pools are more likely to succeed.

    If you change the value of innodb_log_file_size in my.cnf, you will see a message like this in the error log file at the next restart, instead of the old error message:

        [Warning] InnoDB: Resizing redo log from 2*64 to 5*128 pages, LSN=5735153

    One of the biggest challenges for the defaults is the millions of installations on a huge range of systems, from point-of-sale terminals and routers, through shared hosting or end-user systems, on to major servers with lots of CPU cores, hundreds of gigabytes of RAM and terabytes of fast disk space. Our past defaults were for the smaller systems; these change that to larger shared-hosting or shared end-user systems, still with a bias towards the smaller end. There is a bias in favour of OLTP workloads, so reporting systems may need more changes. Where there is a conflict between the best settings for benchmarks and normal use, we've favoured production, not benchmarks. We're very interested in your feedback, comments and suggestions.
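
    For readers who want to pin the older behaviour while testing 5.6, a minimal my.cnf fragment overriding a few of the changed defaults might look like this (the values shown are the old 5.5 defaults; adjust to taste):

        [mysqld]
        innodb_file_per_table = 0
        innodb_log_file_size  = 5M
        query_cache_type      = 1
        query_cache_size      = 0
        sql_mode              = ""

    Anything you set explicitly is respected as-is; the new defaults logic only applies at startup to variables you have not set.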

  • Is Visual Source Safe (The latest Version) really that bad? Why? What's the Best Alternative? Why? [closed]

    - by hanzolo
    Over the years I've constantly heard horror stories, had people say "real programmers don't use VSS", and so on. BUT, in the workplace I've worked at two companies: one a very well known, public-facing, high-traffic website, and the other a high-end financial-services web-based hosted solution catering to some very large, very well known companies, which is where I currently reside, and everything's working just fine (KNOCK KNOCK!!). I'm constantly interfacing with EXTREMELY old technology at some of these financial institutions... OLD LIKE YOU WOULDN'T BELIEVE... which leads me to the conclusion that if it works, LEAVE IT, and that maybe there's some value in old technology? At least enough value to overrule a rewrite, right?

    Is there something fundamentally flawed with the underlying technology that VSS uses? I have a feeling that if I said "someone said VSS sucks", they would beg to differ, most likely give me this look like I don't know -ish, and I'd never gain back their respect and my credibility (well, that'll be hard to blow... lol). BUT, give me an argument that I can take to someone who's been coding for 30 years, who builds platforms that leverage current technology (.NET 3.5 / SQL 2008 R2), writes their own ORM with scaffolding, and is able to provide a quality platform that supports thousands of concurrent users on a multi-tenant hosted solution, who does not agree there is any benefit in having source control integrated, and yet uses the infamous Visual SourceSafe.

    I have extensive experience with TFS up to 2010, and honestly I think it's great when a team (beyond developers) can embrace it. I've worked side by side with someone who's a die-hard SVN'er, and from a purist standpoint I see the beauty in it (I need a bit more out of my source control, but it surely suffices). So why are such smarties not running away from Visual SourceSafe? Surely if it was so bad, it would have been realized by now, and I would not be sitting here with this simple old check-in/check-out, version-resistant, label-intensive system. But here I am... I would love to drop an argument that would be the end-all argument, but if it's a matter of opinion and personal experience, there seems to be too much leeway for keeping VSS.

    UPDATE: I guess the best case is to have the VSS supporters check other people's experiences and draw from that until we (please, no) experience the breaking factor ourselves. Until then, I won't be engaging in a discussion to migrate off of VSS.

    UPDATE 11-2012: So I was able to convince everyone at my workplace that since MS is sunsetting Visual SourceSafe, it might be time to migrate over to TFS. I was able to convince them, and we have recently upgraded our team to Visual Studio 2012 and TFS 2012. The migration was fairly painless: I had to run analyze.exe, which found a bunch of errors (not sure they'll ever affect the project), and then manually run VSSConverter.exe. Again, painless, except that it took 16 hours to migrate 5 years' worth of everything... and now we're on TFS. Much more integrated, much cooler. So, all in all, VSS served its purpose for years without a hiccup. There were no horror stories, and Visual SourceSafe as source control worked just fine. So, to all the naysayers (me included): there's nothing wrong with using VSS. I wouldn't start a new project with it, and I would definitely consider migrating to TFS (it's really not super difficult, and a new "wizard"-type converter is due out any day now, so migrating should be painless). But from my experience, it worked just fine and got the job done.
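
    For anyone following the same migration path, the two-step tool flow mentioned above looks roughly like this (a sketch: the settings-file schema here is from memory, so check the VSSConverter documentation before relying on it):

        VSSConverter analyze settings.xml
        VSSConverter migrate settings.xml

    with a settings.xml along these lines:

        <?xml version="1.0" encoding="utf-8"?>
        <SourceControlConverter>
          <ConverterSpecificSetting>
            <Source name="VSS">
              <VSSDatabase name="C:\VSSDatabase" />
            </Source>
            <ProjectMap>
              <Project Source="$/MyProject" Destination="$/TeamProject/MyProject" />
            </ProjectMap>
          </ConverterSpecificSetting>
          <Settings>
            <TeamFoundationServer name="tfsserver" port="8080" protocol="http" />
          </Settings>
        </SourceControlConverter>

    The analyze pass writes a report of anything it can't carry across, which is worth reading before committing to the 16-hour migrate run.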

  • Using a Visual SourceSafe 2005 database with VB6 still launches VSS 6.0d

    - by John Galt
    I know all versions of VSS have many horror stories, and I feel I will escape to a better source-control mechanism someday, but in the short term I am just trying to do a little cleanup and would like your advice on this issue. Objective: consolidate old VB6 source code in a "new" VSS 2005 database (currently all these old projects are checked into an "old" VSS 6.0d database); eventually, eliminate the "old" VSS. Progress so far: the new VSS 2005 database now contains a mixture of projects. Some use Visual Studio 2008, some use Visual Studio 2005, and the more recently added ones are the above-mentioned VB6 projects. Individually, all these projects and "solutions" come up OK: I can check in and check out, launch SourceSafe, view differences, etc. But all the VB6 projects now in the VSS 2005 database launch VSS 6.0d when asked, rather than VSS 2005. Is this normal and just something to cope with until I get to some better non-VSS approach? Or can VB6 be reconfigured somehow to launch VSS 2005 when I click Tools > SourceSafe > Run SourceSafe? I seem to recall VSS 6.0d got "integrated" into VB6 by way of the Add-In Manager. At this point, the development PC with VB6 installed has both the VSS 2005 and VSS 6.0d clients installed.
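
    One avenue worth checking: VB6 talks to source control through the machine-wide MSSCCI provider registration, so which client launches is decided by a registry pointer rather than by VB6 itself. A sketch of where to look (an assumption worth verifying with both clients installed; export the key before changing anything):

        reg query "HKLM\SOFTWARE\Microsoft\SourceCodeControlProvider" /v ProviderRegKey
        reg query "HKLM\SOFTWARE\Microsoft\SourceCodeControlProvider\InstalledSCCProviders"

    If ProviderRegKey still points at the VSS 6.0d provider's key, repointing it at the entry the VSS 2005 client registered under InstalledSCCProviders should make VB6's integration use the 2005 client.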

  • Mercurial: a few questions all related to .hgignore

    - by WizardOfOdds
    I've been working for a long time with a .hgignore file that was fine, and recently added one new type of file to ignore. When running hg status, I noticed this:

        M .hgignore

    So Mercurial considers .hgignore itself to be a file that needs to be tracked (if it's at the root of the project). Now, I've read various docs, but my points weren't specifically addressed, so here are some very detailed questions which can hopefully help me figure this out (it would be great if anyone answering could quote and address these three points, even with a simple yes/no answer for each question):

    1. Should .hgignore be at the root of the project? (I guess it should, seeing that a developer can potentially be working on several projects, which would all have different .hgignore requirements.)

    2. Can .hgignore be ignored by Mercurial? And if it can be ignored, should .hgignore be ignored by Mercurial (which is a different question)?

    3. In the case where .hgignore should not be ignored, can't something really bad happen if you suddenly roll back to a much earlier revision, when a really old and incomplete .hgignore was used? I think I saw weird things happen with certain per-user IDE project files (I'm not saying all IDE project files are per-user only, but some definitely are) that were supposed to be ignored: the user rolls back to an old version, the old .hgignore gets used, and suddenly files that were supposed to be ignored get committed, because the old .hgignore didn't exclude them.
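
    For reference, ignoring the ignore file itself is just another pattern. A minimal .hgignore illustrating both pattern syntaxes, plus self-exclusion if you decide you want it:

        syntax: glob
        *.pyc
        .hgignore

        syntax: regexp
        ^build/

    One caveat: ignore patterns never apply to files that are already tracked, so self-exclusion only takes effect after an explicit hg forget .hgignore (followed by a commit).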

  • iPhone 4.0 on iPhone but still Ad-Hoc compile for 3.1.3?

    - by Mark
    Device: Version: 3.1, Build: 3511. Device: iPhone, OS: iPhone OS 4.0. Xcode 3.2.2 (old), Xcode 3.2.3 (new; for the iPhone OS 4.0 beta).

    Background: As you can see, I installed 4.0 on my iPhone, as I read on this forum that it's really hard to near impossible to downgrade back to 3.1.3, but it's the only device I have and use for development. When I try to continue to develop and build with the old Xcode, it tells me that "No provisioned iPhone OS device is connected". When I select Simulator, it does compile and build; however, when I distribute this build, it does not work on my testers' devices: they get a signing error. When I use the new Xcode, it does compile and build on the device, and when I distribute that build, it does work on my testers' devices (which are running the current official version, 3.1.3).

    Questions: Why is there a difference between building for the Simulator and for the Device? A simulator build never seems to work on my testers' devices because of signing issues, while the device build does work. Currently it seems the old Xcode has become useless; however, I read that you may not use the beta Xcode to build your application for release. Knowing the above, how am I able to pull this off with my current setup, given that the old Xcode won't let me build properly?

  • Svn repository split problem

    - by Tuminoid
    I want to split a directory from a large Subversion repository into a repository of its own, and keep the history of the files in that directory. I tried the regular way of doing it first:

        svnadmin dump /path/to/repo > largerepo.dump
        cat largerepo.dump | svndumpfilter include my/directory > mydir.dump

    but that does not work, since the directory has been moved and copied over the years, and files have been moved into and out of it to other parts of the repository. The result is a lot of these:

        svndumpfilter: Invalid copy source path '/some/old/path'

    The next thing I tried was to include those /some/old/path entries as they appear, and after a long, long list of included files and directories, svndumpfilter completes. BUT, importing the resulting dump doesn't produce the same files as the current directory has. So, how do I properly split the directory from that repository while keeping its history?

    EDIT: I specifically want trunk/myproj to be the trunk in a new repository, PLUS have the new repository include none of the other old stuff, i.e. there should be no way for anyone to update to a pre-split revision and get/see the other files. The svndumpfilter solution I tried would achieve exactly that; sadly, it's not doable, since the paths/files have been moved around. The solution by ng isn't acceptable, since it's basically a clone plus removal of extras, which keeps ALL the history, not just the relevant myproj history.

    BUMP: c'mon, there must be someone who definitely knows whether this is doable or not, and how!
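
    For the record, the brute-force variant of the svndumpfilter route looks like this: keep adding every historical path the filter complains about until it runs clean, then tidy up the revision numbering (the flags are standard svndumpfilter/svnadmin options; the extra paths are placeholders):

        svnadmin dump /path/to/repo > full.dump
        svndumpfilter include trunk/myproj some/old/path another/old/path \
            --drop-empty-revs --renumber-revs < full.dump > myproj.dump
        svnadmin create /path/to/newrepo
        svnadmin load /path/to/newrepo < myproj.dump

    The catch, as the EDIT above notes, is that every extra included path drags its own history into the new repository, which is exactly what is meant to be excluded; svndumpfilter alone cannot follow a copy without also keeping the copy's source.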

  • Solution for cleaning an image cache directory on the SD card

    - by synic
    I've got an app that is heavily based on remote images. They are usually displayed alongside some data in a ListView. A lot of these images are new, and a lot of the old ones will never be seen again. I'm currently storing all of these images on the SD card in a custom cache directory (à la evancharlton's Magnatune app). I noticed that after about 10 days, the directory totals ~30MB. This is quite a bit more than I expected, and it leads me to believe that I need to come up with a good solution for cleaning out old files... and I just can't think of a great one. Maybe you can help. These are the ideas that I've had:

    1. Delete old files. When the app starts, start a background thread and delete all files older than X days. This seems to pose a problem, though, in that, if the user actively uses the app, this could make the device sluggish if there are hundreds of files to delete.

    2. After creating the files on the SD card, call new File("/path/to/file").deleteOnExit(); This will cause all files to be deleted when the VM exits (I don't even know if this method works on Android). This is acceptable because, even though the files need to be cached for the session, they don't need to be cached for the next session. It seems like this would also slow the device down if there are a lot of files to be deleted when the VM exits.

    3. Delete old files, up to a max number of files. Same as #1, but only delete N files at a time. I don't really like this idea, and if the user were very active, it might never be able to catch up and keep the cache directory clean.

    That's about all I've got. Any suggestions would be appreciated.
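
    A sketch of a size-capped variant of option 1 that avoids unbounded deletion sweeps: sort by last-modified time and delete oldest-first, only until the directory is back under a byte budget (names and the budget are illustrative):

        import java.io.File;
        import java.util.Arrays;
        import java.util.Comparator;

        public final class CacheTrimmer {
            /** Delete oldest files until cacheDir holds at most maxBytes. */
            public static void trimToSize(File cacheDir, long maxBytes) {
                File[] files = cacheDir.listFiles();
                if (files == null) return;   // not a directory, or I/O error
                // oldest first
                Arrays.sort(files, new Comparator<File>() {
                    @Override public int compare(File a, File b) {
                        return Long.valueOf(a.lastModified()).compareTo(b.lastModified());
                    }
                });
                long total = 0;
                for (File f : files) total += f.length();
                for (File f : files) {
                    if (total <= maxBytes) break;
                    long len = f.length();
                    if (f.delete()) total -= len;
                }
            }
        }

        // e.g. from a background thread at startup:
        // CacheTrimmer.trimToSize(new File("/sdcard/myapp/cache"), 10 * 1024 * 1024);

    Run once per launch on a background thread, this bounds the work to the overflow since the last run, rather than deleting everything older than X days in one go.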

  • Display an input type=file over another input type=file

    - by Kevin Sedgley
    WARNING: lengthy description coming up! I have written an uploader based upon the APC progress uploader for PHP. This works fine and dandy, but the script as a whole (APC etc.) is intended to be used only by those with JavaScript. To achieve this, I have searched for any input type=file and replaced each one with an absolutely positioned form that appears over the original area where the old file input was. The reasons for this are so the new uploader:

    1. can submit to a hidden in-page iframe;
    2. has to be in a separate <form> in order to submit to the APC receiver that displays the progress bar;
    3. can be used within any form with an input type=file throughout the site.

    I have used jQuery to do this, with the following code. Original HTML form code:

        <div><input type="file" name="media" id="media" /></div>

    Find the position of the div block:

        // get the parent div, and properties thereof
        parentDiv = $(this).closest('div');
        w = $(parentDiv).width();
        h = $(parentDiv).height();
        loc = $(parentDiv).offset();

    Locate the new block over the old block:

        $('#_sender').appendTo('body').css({left:loc.left, top:loc.top, position:'absolute', zIndex:400, height:h, width:w}).show();

    This works fine, and shows over the old block OK. The problem: when other elements in the DOM before or above it change (in this case, a tree-view selector is pushing the old block down), the new upload form ends up positioned over other elements. Is there a jQuery (or JS) method for reacting to this kind of DOM change? Some kind of onchange for the page, or an onmove for the original block? Thanks in advance, you lovely people.
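
    There was no reliable cross-browser "element moved" event in this era, so the usual workaround is to recompute the overlay position whenever something might have shifted: on window resize, in whatever handler toggles the tree view, or (bluntly) on a short poll. A sketch (the #treeToggle selector is an assumption standing in for the tree view's toggle control):

        function repositionOverlay() {
            var parentDiv = $('#media').closest('div');
            var loc = parentDiv.offset();
            $('#_sender').css({
                left: loc.left,
                top: loc.top,
                width: parentDiv.width(),
                height: parentDiv.height()
            });
        }

        $(window).resize(repositionOverlay);         // layout changes from resizing
        $('#treeToggle').click(repositionOverlay);   // re-anchor after the tree view pushes things around
        // last resort: poll for movement a few times a second
        // setInterval(repositionOverlay, 250);

    Hooking the tree view's own toggle handler is cheaper and snappier than polling, so the interval is best kept as a fallback only.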

  • Which cast am I using?

    - by Knowing me knowing you
    I'm trying to cast away const from an object, but it doesn't work; if I use the old C way of casting, the code compiles. So which cast am I supposed to use to achieve the same effect? I would rather not cast the old way.

        // file IntSet.h
        #include "stdafx.h"
        #pragma once

        /* Class representing a set of integers */
        template<class T>
        class IntSet
        {
        private:
            T** myData_;
            std::size_t mySize_;
            std::size_t myIndex_;
        public:
        #pragma region ctor/dtor
            explicit IntSet();
            virtual ~IntSet();
        #pragma endregion
        #pragma region publicInterface
            IntSet makeUnion(const IntSet&) const;
            IntSet makeIntersection(const IntSet&) const;
            IntSet makeSymmetricDifference(const IntSet&) const;
            void insert(const T&);
        #pragma endregion
        };

        // file IntSet_impl.h
        #include "StdAfx.h"
        #include "IntSet.h"
        #include <algorithm>   // std::copy

        #pragma region ctor/dtor
        template<class T>
        IntSet<T>::IntSet() : myData_(nullptr), mySize_(0), myIndex_(0)
        {
        }

        template<class T>
        IntSet<T>::~IntSet()
        {
        }
        #pragma endregion

        #pragma region publicInterface
        template<class T>
        void IntSet<T>::insert(const T& obj)
        {
            /* Check if we are initialized. */
            if (mySize_ == 0)
            {
                mySize_ = 1;
                myData_ = new T*[mySize_];
            }
            /* Check if we have a place to insert obj in. */
            if (myIndex_ < mySize_)
            {   // IS IT SAFE TO INCREMENT myIndex_ while assigning?
                myData_[myIndex_++] = &T(obj);   // IF I DO IT THE OLD WAY IT WORKS
                return;
            }
            /* We didn't have enough room... */
            T** tmp = new T*[mySize_];   // for copying the old pointers to a temporary basket
            std::copy(&myData_[0], &myData_[mySize_], &tmp[0]);
        }
        #pragma endregion

    Thanks.
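
    The cast that removes const is const_cast; a C-style cast compiles because it silently performs a const_cast (among other conversions) in one shot. A sketch of the difference, using the names from the question (and noting that storing a non-const pointer to a caller's const object is a design smell regardless of which cast spells it):

        template<class T>
        void IntSet<T>::insert(const T& obj)
        {
            // C-style: compiles, but hides which conversion is happening
            // myData_[myIndex_++] = (T*)&obj;

            // C++ style: explicit that only const is being removed
            myData_[myIndex_++] = const_cast<T*>(&obj);
        }

    The original &T(obj) fails for a different reason: T(obj) creates a temporary, and taking the address of a temporary is not allowed (and would dangle even if it were). If the set is meant to own copies, new T(obj) with matching cleanup is the non-casting fix.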

  • Java assignment issues - Is this atomic?

    - by Bob
    Hi, I've got some questions about Java's assignment.

    Strings: I've got a class:

        public class Test {
            private String s;
            public synchronized void setS(String str) {
                s = s + " - " + str;
            }
            public String getS() {
                return s;
            }
        }

    I'm using synchronized in my setter and avoiding it in my getter, because in my app there are tons of gets and very few sets. The sets must be synchronized to avoid inconsistency. My question is: is getting and setting a variable atomic? I mean, in a multithreaded environment, Thread1 is about to set variable s while Thread2 is about to get s. Is there any way the getter method could get something different from s's old value or s's new value (suppose we've got only two threads)? In my app it is not a problem to get the new value, and it is not a problem to get the old one. But could I get something else?

    What about HashMap's getting and putting? Consider this:

        public class Test {
            private Map<Integer, String> map =
                Collections.synchronizedMap(new HashMap<Integer, String>());
            public synchronized void setMapElement(Integer key, String value) {
                map.put(key, value);
            }
            public String getValue(Integer key) {
                return map.get(key);
            }
        }

    Are putting and getting atomic? How does HashMap handle putting an element into it? Does it first remove the old value and then put the new one? Could I get something other than the old value or the new value? Thanks in advance!
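
    For context, the usual textbook treatment of the first pattern: reference assignment is atomic in Java (a reader never sees a torn reference), but without a happens-before edge, the unsynchronized getter may see an arbitrarily stale value. Marking the field volatile restores visibility without locking the hot read path, and ConcurrentHashMap gives atomic per-key operations without a global lock. A sketch of both:

        import java.util.concurrent.ConcurrentHashMap;
        import java.util.concurrent.ConcurrentMap;

        public class Test {
            private volatile String s;   // volatile: readers always see the latest written reference

            public synchronized void setS(String str) {   // synchronized keeps read-modify-write atomic
                s = s + " - " + str;
            }
            public String getS() {       // a plain read of a volatile needs no lock
                return s;
            }

            private final ConcurrentMap<Integer, String> map =
                new ConcurrentHashMap<Integer, String>();

            public void setMapElement(Integer key, String value) { map.put(key, value); }
            public String getValue(Integer key) { return map.get(key); }
        }

    With the synchronizedMap version from the question, each individual put and get is also atomic: a get racing a put returns either the old mapping or the new one, never a half-written entry.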

  • jquery: mouseleave event seems to fire when it's not supposed to

    - by Jeremy
    Given the following HTML table and script, I am having a problem where the mouseout event appears to fire right after the mouseover, even if I don't move the mouse out of the row.

        <script type="text/javascript" language="javascript">
            function highlightRows(iMainID) {
                $('tr[mainid=' + iMainID + ']').each(function() {
                    if ($(this).attr('old') == undefined) {
                        $(this).attr('old', $(this).css('backgroundColor'));
                    }
                    $(this).animate({ backgroundColor: "#FFFFCC" }, 500);
                    $(this).mouseout(function() {
                        if ($(this).attr('old') != undefined) {
                            $(this).animate({ backgroundColor: $(this).attr('old') }, 500);
                        }
                    });
                });
            }
        </script>
        <table>
            <tr>
                <td mainid="1" onmouseover='highlightRows(1)'><div>text</div></td>
                <td mainid="1" onmouseover='highlightRows(1)'><div>text</div></td>
                <td mainid="2" onmouseover='highlightRows(2)'><div>text</div></td>
            </tr>
        </table>
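
    Two things conspire in code shaped like this: mouseout (unlike mouseleave) fires whenever the pointer crosses into a child element such as the inner div, and the handler gets re-bound on every mouseover. A sketch of the bind-once, enter/leave idiom that avoids both (assuming the mainid attributes are moved onto the tr elements, where the selector expects them):

        $(function() {
            $('tr[mainid]').mouseenter(function() {
                var $row = $(this);
                if ($row.data('old') === undefined) {
                    $row.data('old', $row.css('backgroundColor'));
                }
                $row.animate({ backgroundColor: '#FFFFCC' }, 500);
            }).mouseleave(function() {
                var $row = $(this);
                if ($row.data('old') !== undefined) {
                    $row.animate({ backgroundColor: $row.data('old') }, 500);
                }
            });
        });

    mouseenter/mouseleave fire only when the pointer actually enters or leaves the row itself, so child elements no longer trigger spurious "leave" animations. (As in the original, animating backgroundColor requires the jQuery color plugin or jQuery UI.)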

  • JS: capture a static snapshot of an object at a point in time with a method

    - by Barney
    I have a JS object I use to store DOM info for easy reference in an elaborate GUI. It starts like this:

        var dom = {
            m: {
                old: {},
                page: {x: 0, y: 0},
                view: {x: 0, y: 0},
                update: function() {
                    this.old = this;
                    this.page.x = $(window).width();
                    this.page.y = $(window).height();
                    this.view.x = $(document).width();
                    this.view.y = window.innerHeight || $(window).height();
                }
            }
        };

    I call the function on window resize:

        $(window).resize(function(){ dom.m.update() });

    The problem is with dom.m.old. I would have thought that by assigning it in the dom.m.update() method, before the new values for the other properties are assigned, dom.m.old would at any point in time contain a snapshot of the dom.m object as of the last update; but instead, it's always identical to dom.m. I've just got a pointless recursive reference. Why isn't this working? How can I get a static snapshot of the object that won't update without being specifically told to? Comments explaining how I shouldn't even want to be doing anything remotely like this in the first place are very welcome :)
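
    The root cause: this.old = this copies a reference, not the data, so old just points back at the same object (dom.m.old === dom.m). A snapshot needs a copy of the value objects; since jQuery is already in play, $.extend can do the deep copy, as a sketch:

        update: function() {
            // deep-copy only the data (not old itself, which would snowball,
            // and not this function)
            this.old = $.extend(true, {}, { page: this.page, view: this.view });
            this.page.x = $(window).width();
            this.page.y = $(window).height();
            this.view.x = $(document).width();
            this.view.y = window.innerHeight || $(window).height();
        }

    $.extend(true, target, source) recursively copies plain objects, so dom.m.old.page keeps the previous update's numbers instead of tracking the live ones.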

  • question about MySQL database migration

    - by WilliamLou
    Hi there: I have a MySQL database with several tables on a live server, and I would like to migrate this database to another server. Of course, the migration involves some changes to the database tables, for example: adding some new columns to several tables, adding some new tables, etc. Now, the only method I can think of is to use a PHP or Python script (the two languages I know): connect to both databases, dump the data from the old database, and then write it into the new database. However, this method is not efficient at all. For example: in the old database, table A has 28 columns; in the new database, table A has 29 columns, and the extra column should have the default value 0 for all the old rows. My script would still need to dump the data row by row and insert each row into the new database. Is there a tool, or a better method, than writing a script myself? Here, I don't need to worry about multithreaded writing problems etc.; the old database will be down (not open to public usage, only for the upgrade) for a while. Thanks!!
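
    The standard tooling route sidesteps row-by-row copying entirely: dump, load, then apply the schema changes in place, so the server backfills defaults itself. A sketch (database, table, and column names are illustrative):

        # on the old server
        mysqldump -u root -p --single-transaction olddb > olddb.sql

        # on the new server
        mysql -u root -p -e "CREATE DATABASE newdb"
        mysql -u root -p newdb < olddb.sql

        # then the schema changes; MySQL fills existing rows with the default
        mysql -u root -p newdb -e \
          "ALTER TABLE A ADD COLUMN extra_col INT NOT NULL DEFAULT 0"

    --single-transaction keeps a consistent InnoDB snapshot during the dump, though since the old database will be down anyway, that is just belt and braces.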
