Search Results

Search found 8048 results on 322 pages for 'partial upgrade'.

Page 102/322

  • What to do about unreadable grub screen

    - by stevecoh1
    I have been upgrading my Ubuntu, from 10.04 to 12.04 to 12.10 to 13.04, in the past few days with varying degrees of success. One problem that has been constant through every step of the upgrade since 12.04 is that a portion of the text on the GRUB screen is off the screen to the left, so I can't completely make out what the options are. As I am having other troubles with the upgrade, I would like to at least be able to see what options are available to me at boot time. Is there some sort of GRUB configuration that can handle this?

    Read the article

  • How to debug slow session start of Gnome 3?

    - by user65521
    After upgrading from 11.10 to 12.04, the login process of GNOME 3 is extremely slow: it takes on the order of 60 seconds, where it took only a few seconds before the upgrade (the hard disk is an SSD!). Running "top" in a VT shows that gnome-shell produces about 90% CPU load while dbus-daemon takes roughly 10%. The moment the CPU load of gnome-shell drops to normal levels (around 2-3%) corresponds to the time the login process finishes and the desktop is displayed. De-activating the four gnome-shell extensions I have installed (Alternative Status Menu, Quit Button, Remove Accessibility, system-monitor) has no effect on session startup time. Logging in to GNOME Classic does not show the slow session start. The system logs do not show anything suspicious. So, what is the best way to identify the underlying problem?

    Read the article

  • Wireless connectivity Wifi 7260 on Ubuntu 10.04

    - by user292332
    I am trying to get wireless networking running on a Dell Latitude E5440 with a Wifi 7260 card under Ubuntu 10.04. (Yes, I know it is old; see the complication below.) I did get the wired connection working, but our classrooms are wireless. I tried to upgrade to version 12 last year, but to my horror, during the first week of class the NFS mounts did not work for multiple users. I never thought to test for more than one person at a time! I have not been able to find a fix for the upgrade, so I must remain at 10.04. NFS mounts are a must for student home directory access on the servers. All desktops are 10.04, as are the servers. Does anyone have a solution to the wireless problem in version 10.04, or to the NFS mounts problem in version 12? Thank you so much. Val

    Read the article

  • 13.10 suspend kills wifi

    - by ser
    I tried to post to the thread Hardware wireless switch has no effect after suspend and 13.10 upgrade, since if I understand their question I am having the same problem, but the answer option won't work for me, and maybe I am not supposed to post there anyway. When I come out of suspend, the wireless is disconnected, and the only way to get it to reinitialize/be recognized is to do a full restart. At first I thought it was my GNOME Shell (the lock screen disappeared there with 13.10), but when I switched to the default Ubuntu session it still did it, and it's kind of driving me nuts, because I have to reopen all my files and browsers/tabs/windows every time. I'm only a geekling, so I don't know how to show the terminal output the above asker shows, but it sounds like the same issue, and it only started with the 13.10 upgrade a few days ago. Any help would be much appreciated! Thanks so much, ser

    Read the article

  • How can I include multiple tables in my LINQ to Entities eager loading using MVC 4 and C#?

    - by EBENEZER CURVELLO
    I have 6 classes, and I am trying to use LINQ to Entities to get the SiglaUF information from the deepest table (in the view, MVC). The problem is that I receive the following error: "The ObjectContext instance has been disposed and can no longer be used for operations that require a connection."

    The view looks like this:

    @model IEnumerable<DiskPizzaDelivery.Models.EnderecoCliente>
    @foreach (var item in Model) {
        @Html.DisplayFor(modelItem => item.CEP.Cidade.UF.SiglaUF)
    }

    The query that I use:

    var cliente = context.Clientes
        .Include(e => e.Enderecos)
        .Include(e1 => e1.Enderecos.Select(cep => cep.CEP))
        .SingleOrDefault();

    The question is: how can I improve this query to eager-load "Cidade" and "UF" as well? See the classes below:

    public partial class Cliente
    {
        [Key]
        [DatabaseGeneratedAttribute(DatabaseGeneratedOption.Identity)]
        public int IdCliente { get; set; }
        public string Email { get; set; }
        public string Senha { get; set; }
        public virtual ICollection<EnderecoCliente> Enderecos { get; set; }
    }

    public partial class EnderecoCliente
    {
        public int IdEndereco { get; set; }
        public int IdCliente { get; set; }
        public string CEPEndereco { get; set; }
        public string Numero { get; set; }
        public string Complemento { get; set; }
        public string PontoReferencia { get; set; }
        public virtual Cliente Cliente { get; set; }
        public virtual CEP CEP { get; set; }
    }

    public partial class CEP
    {
        public string CodCep { get; set; }
        public string Tipo_Logradouro { get; set; }
        public string Logradouro { get; set; }
        public string Bairro { get; set; }
        public int CodigoUF { get; set; }
        public int CodigoCidade { get; set; }
        public virtual Cidade Cidade { get; set; }
    }

    public partial class Cidade
    {
        public int CodigoCidade { get; set; }
        public string NomeCidade { get; set; }
        public int CodigoUF { get; set; }
        public virtual ICollection<CEP> CEPs { get; set; }
        public virtual UF UF { get; set; }
        public virtual ICollection<UF> UFs { get; set; }
    }

    public partial class UF
    {
        public int CodigoUF { get; set; }
        public string SiglaUF { get; set; }
        public string NomeUF { get; set; }
        public int CodigoCidadeCapital { get; set; }
        public virtual ICollection<Cidade> Cidades { get; set; }
        public virtual Cidade Cidade { get; set; }
    }

    This is the full query as it currently stands:

    var cliente = context.Clientes
        .Where(c => c.Email == email)
        .Where(c => c.Senha == senha)
        .Include(e => e.Enderecos)
        .Include(e1 => e1.Enderecos.Select(cep => cep.CEP))
        .SingleOrDefault();

    Thanks!
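
    One commonly used way to reach the deeper levels is to extend the Select chain inside a single Include. This is a minimal sketch, assuming the lambda-based Include extension from System.Data.Entity (EF 4.1+); it has not been verified against the poster's actual model:

    using System.Data.Entity; // brings in the lambda Include extension (EF 4.1+)

    var cliente = context.Clientes
        .Where(c => c.Email == email && c.Senha == senha)
        // Eager-load Enderecos -> CEP -> Cidade -> UF in one query path, so
        // item.CEP.Cidade.UF.SiglaUF is populated before the context is disposed.
        .Include(c => c.Enderecos.Select(e => e.CEP.Cidade.UF))
        .SingleOrDefault();

    Because the whole graph is materialized up front, the view no longer triggers lazy loading against a disposed ObjectContext.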

    Read the article

  • eth0 missing after upgrading from Hoary to Dapper

    - by Twisol
    I'm trying to upgrade a fairly old server that's been running Hoary for the last five years. I followed the directions on the wiki, but when I restarted after upgrading to Dapper, eth0 disappeared from ifconfig -a. I can see two ethernet adapters in lspci and lshw, and if I put in an Ubuntu 10.10 LiveCD it registers eth0 and eth1 perfectly well. Their MAC addresses also match what's in /etc/iftab. It was working fine before the upgrade, and I have no idea what else I should be trying at this point. The server is entirely cut off from the network right now. EDIT: /etc/udev/rules.d/70-persistent-net.rules doesn't exist, either.

    Read the article

  • Oracle EBS R12 on Sun Database Machine MAA & HPA hands-on course

    - by longchun.zhu
    This is a hands-on training course on running Oracle EBS R12 on the Sun Database Machine with MAA.

    Course Objectives: After completing this course, you will be able to do the following:
    • Understand EBS R12 on Exadata MAA
    • Install and configure Oracle EBS R12 as a single instance
    • Apply the Chinese language package on EBS R12
    • Upgrade the application DB version to 11gR2
    • Deploy and clone EBS R12 to the Sun Database Machine
    • Migrate the file system to Exadata Storage ASM
    • Convert the application DB to RAC
    • Configure EBS R12 MAA with Exadata

    Hands-on labs:
    1: Oracle EBS R12.1.1 single-instance install
    2: Apply the Chinese package on EBS R12
    3: Upgrade the application DB version to 11gR2
    4: Clone EBS R12 to the Sun Database Machine
    5: Migrate file systems to ASM storage
    6: Convert the application DB to RAC
    7: Configure EBS MAA with Exadata

    Read the article

  • November 2013 webcast: Using RACCheck to check your RAC systems

    - by Allen Gao
    This session introduces the RACCheck tool and how to use it to check a RAC system, including the RACCheck "11.2.0.3 Upgrade Readiness" feature. Topics:
    + Introduction to RACCheck
    + What RACCheck is, and when and how to use it
    + RACCheck and 11.2.0.3 Upgrade Readiness
    + RACCheck Q&A and the WebEx recording (see below)

    Date and time: November 13, 2013, 15:00 (Beijing time)
    WebEx meeting ID: 592 310 106

    How to join:
    1. Web: https://oracleaw.webex.com/oracleaw/onstage/g.php?d=592310106&t=a
    2. Audio: dial in via InterCall with conference ID 25261419. Dial-in numbers: 108001201870, 108007121869, 8009 661 55, 00801044259. For details see MOS note 1148600.1. When prompted, enter the conference ID (25261419) and your first and last name.

    The webcast schedule is published in MOS note 1456176.1. Recordings of past sessions are available in the same note under "Archived 2013".

    Read the article

  • A basic T4 template for generating Model Metadata in ASP.NET MVC2

    - by rajbk
    I have been learning about T4 templates recently by looking at the awesome ADO.NET POCO entity generator. Using the POCO entity generator template as a base, I created a T4 template which generates metadata classes for a given Entity Data Model. This speeds up coding by reducing the amount of typing required when creating a view-specific model and its metadata.

    To use this template, download the template provided at the bottom and set two values in the template file. The first should point to the EDM you wish to generate metadata for. The second is used to suffix the namespace and classes that get generated.

    string inputFile = @"Northwind.edmx";
    string suffix = "AutoMetadata";

    Add the template to your MVC 2 Visual Studio 2010 project. Once you add it, a number of classes will get added to your project based on the number of entities you have. One of these classes is shown below. Note that the DisplayName, Required and StringLength attributes have been added by the T4 template.

    //------------------------------------------------------------------------------
    // <auto-generated>
    //     This code was generated from a template.
    //
    //     Changes to this file may cause incorrect behavior and will be lost if
    //     the code is regenerated.
    // </auto-generated>
    //------------------------------------------------------------------------------

    using System;
    using System.ComponentModel;
    using System.ComponentModel.DataAnnotations;

    namespace NorthwindSales.ModelsAutoMetadata
    {
        public partial class CustomerAutoMetadata
        {
            [DisplayName("Customer ID")]
            [Required]
            [StringLength(5)]
            public string CustomerID { get; set; }

            [DisplayName("Company Name")]
            [Required]
            [StringLength(40)]
            public string CompanyName { get; set; }

            [DisplayName("Contact Name")]
            [StringLength(30)]
            public string ContactName { get; set; }

            [DisplayName("Contact Title")]
            [StringLength(30)]
            public string ContactTitle { get; set; }

            [DisplayName("Address")]
            [StringLength(60)]
            public string Address { get; set; }

            [DisplayName("City")]
            [StringLength(15)]
            public string City { get; set; }

            [DisplayName("Region")]
            [StringLength(15)]
            public string Region { get; set; }

            [DisplayName("Postal Code")]
            [StringLength(10)]
            public string PostalCode { get; set; }

            [DisplayName("Country")]
            [StringLength(15)]
            public string Country { get; set; }

            [DisplayName("Phone")]
            [StringLength(24)]
            public string Phone { get; set; }

            [DisplayName("Fax")]
            [StringLength(24)]
            public string Fax { get; set; }
        }
    }

    The generated class can be used from your project by creating a partial class with the entity name and setting the MetadataType attribute:

    namespace MyProject.Models
    {
        [MetadataType(typeof(CustomerAutoMetadata))]
        public partial class Customer
        {
        }
    }

    You can also copy the code in the generated metadata class and create your own ViewModel class. Note that the template is super basic and does not take into account complex properties. I have tested it with the Northwind database. This is a work in progress. Feel free to modify the template to suit your requirements. Standard disclaimer follows: use at your own risk; works on my machine running VS 2010 RTM/ASP.NET MVC 2.

    AutoMetaData.zip

    Mr. Incredible: Of course I have a secret identity. I don't know a single superhero who doesn't. Who wants the pressure of being super all the time?
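
    As a quick illustration of what the generated metadata buys you at runtime, here is a hypothetical MVC 2 controller action (not from the article): the DefaultModelBinder reads the DataAnnotations attributes through the MetadataType link and populates ModelState automatically.

    [HttpPost]
    public ActionResult Edit(Customer customer)
    {
        // Validation attributes from CustomerAutoMetadata run during model binding.
        if (!ModelState.IsValid)
            return View(customer); // redisplay with validation messages

        // ... persist changes ...
        return RedirectToAction("Index");
    }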

    Read the article

  • Things I've noticed with DVCS

    - by Wes McClure
    Things I encourage:

    Frequent local commits. This way you don't have to be bothered by changes others are making to the central repository while working on a handful of related tasks. It's a good idea to try to work on one task at a time and commit all changes at partitioned stopping points. A local commit doesn't have to build, just FYI, so a stopping point doesn't mean a build point nor a point that you can push centrally. There should be several of these in any given day. Two hours is a good indicator that you might not be leveraging the power of frequent local commits. Once you have verified a set of changes works, save them away; otherwise you run the risk of introducing bugs into it when working on the next task.

    The notion of a task. By task I mean a related set of changes that can be completed in a few hours or less. By the same token, don't make your tasks so small that critically related changes aren't grouped together. Use your intuition and the rest of these principles, and I think you will find what is comfortable for you.

    Partial commits. Sometimes one task explodes or unknowingly encompasses other tasks. At this point, try to get to a stopping point on part of the work you are doing and commit it, so you can get that out of the way and focus on the remainder. This will often entail committing part of the work and continuing on the rest.

    Outstanding changes as a guide. If you don't commit often, it might mean you are not leveraging your version control history to help guide your work. It's a great way to see what has changed and might be causing problems. The longer you wait, the more that has changed and the harder it is to test/debug what your changes are doing! This is a reason why I am so picky about my VCS tools on the client side, and why I talk a lot about the quality of a diff tool and the ability to integrate that with a simple view of everything that has changed. This is why I love using TortoiseHg and SmartGit: they show changed files, a diff (or two-way diff with SmartGit) of the currently selected file, and a commit message, all in one window that I keep maximized on one monitor at all times.

    Throw away / stash commits. There is extreme value in being able to throw away a commit (or stash it) that is getting out of hand. If you do not commit often, you will have to isolate the work you want to commit from the work you want to throw away, which is wasted productivity and highly prone to errors. I find myself doing this about once a week, especially when doing exploratory refactoring. It's much easier if I can just revert all outstanding changes.

    Sync with the central repository daily. The rest of us depend on your changes. Don't let them sit on your computer longer than they have to. Waiting increases the chances of merge conflicts, which just decreases productivity. It also prohibits us from doing deploys when people say they are done but have not merged centrally. This should be done daily! Find a way to partition the work you are doing so that you can sync at least once daily.

    Things I discourage:

    Lots of partial commits right at the end of a series of changes. If you notice lots of partial commits at the end of a set of changes, it's likely because you weren't frequently committing, nor were you watching for the size of the task expanding beyond a single commit. Chances are this cost you productivity if you use your outstanding changes as a guide, since you would have an ever-growing list of changes.

    Committing single files. Committing single files means you waited too long and no longer understand all the changes involved. It may mean there were overlapping changes in single files that cannot be isolated. In either case, go back to the suggestions above to avoid this. Committing frequently does not mean committing frequently right at the end of a day's work. It should be spaced out over the course of several tasks, not all at the end in a five-minute window.

    Read the article

  • Silverlight Binding with multiple collections

    - by George Evjen
    We're designing some sport-specific applications. In one of our views we have a grid view that is bound to an observable collection of Teams. This is pretty straightforward in terms of getting Teams bound to the GridView:

    <telerik:RadGridView Grid.Row="0" Grid.Column="0" x:Name="UsersGrid"
                         ItemsSource="{Binding TeamResults}"
                         SelectedItem="{Binding SelectedTeam, Mode=TwoWay}">
        <telerik:RadGridView.Columns>
            <telerik:GridViewDataColumn Header="Name/Group" DataMemberBinding="{Binding TeamName}" MinWidth="150"></telerik:GridViewDataColumn>
        </telerik:RadGridView.Columns>
    </telerik:RadGridView>

    We use the observable collection of Teams as our items source and then bind the TeamName property to the first column. You can set the binding to Mode=TwoWay; we use a dialog where we edit the selected item, so our binding here is not set to two-way. The issue comes when we want to bind to a property that holds another collection. To continue from the code above, we have an observable collection of Teams, and within that collection we have a collection of KeyPeople. We get this collection using RIA Services with the code below:

    return _TeamsRepository.All().Include("KeyPerson");

    Here we are getting all the teams and also including the KeyPerson entity. So when our load completes, we will end up with an observable collection of Teams with a navigation property/entity of KeyPerson. Within this KeyPerson entity is a list of people associated with that particular team. We want to display the head coach from this list. The list currently has ten or more people bound to each team, but we just want to display the head coach in the column next to the team name. The issue becomes: how do we bind to this included entity?

    I have found about three different ways to solve this issue; the one that seemed to fit us best is to utilize the features within RIA Services. We can create client-side properties that will do the work for us. In the client-side library we create a partial class of Team, so we end up with a file Team.shared.cs. The code below is what we put into that partial class:

    public KeyPerson Coach
    {
        get
        {
            if (this.KeyPerson != null && this.KeyPerson.Any())
            {
                return this.KeyPerson.Where(x => x.RelationshipType == "HeadCoach").FirstOrDefault();
            }
            return null;
        }
    }

    We return just the person that is the head coach, and can then bind that and any other additional properties that we need:

    <telerik:GridViewDataColumn Header="Coach" DataMemberBinding="{Binding Coach.Name}" MinWidth="150"></telerik:GridViewDataColumn>

    There are other ways that we could have solved this issue, but we felt that creating a partial class through RIA Services best suited our needs.

    Read the article

  • solr php extension fails to run on newest Debian Wheezy

    - by hijarian
    I'm trying to use the Solr PHP extension on the recently upgraded Debian Wheezy. It installs flawlessly, both from PECL and from source, but instead of giving me the expected functionality it gives me this on every PHP run:

    PHP Warning: PHP Startup: Unable to load dynamic library '/usr/lib/php5/20100525/solr.so' - /usr/lib/php5/20100525/solr.so: undefined symbol: curl_easy_getinfo in Unknown on line 0

    Also, scripts which use the extension throw an error:

    PHP Error[2]: include(SolrClient.php): failed to open stream: No such file or directory in file <...path to my autoloader...>

    My main point is that this was set up before and worked like a charm. In the upgrade, among the relevant packages, only the versions of PHP and libcurl changed. The Solr instance itself was left as is. I have all possible libcurl libraries:

    $ locate libcurl
    ...
    /usr/lib/x86_64-linux-gnu/libcurl-gnutls.so.3
    /usr/lib/x86_64-linux-gnu/libcurl-gnutls.so.4
    /usr/lib/x86_64-linux-gnu/libcurl-gnutls.so.4.2.0
    /usr/lib/x86_64-linux-gnu/libcurl.a
    /usr/lib/x86_64-linux-gnu/libcurl.la
    /usr/lib/x86_64-linux-gnu/libcurl.so
    /usr/lib/x86_64-linux-gnu/libcurl.so.3
    /usr/lib/x86_64-linux-gnu/libcurl.so.4
    /usr/lib/x86_64-linux-gnu/libcurl.so.4.2.0
    ...
    /usr/lib32/libcurl.so.3
    /usr/lib32/libcurl.so.4
    /usr/lib32/libcurl.so.4.2.0
    ...

    I have installed the php5-curl package version 5.4.4-2 with aptitude. I installed the Solr extension both with sudo pecl install solr (with various combinations of the -f and -n flags, and tried solr-beta too) and from source with wget ... ; cd ... ; phpize ; ./configure ; make ; make install. I'm installing version 1.0.2 of the extension because that is what worked before the upgrade from Squeeze to Wheezy. As I said earlier, the extension installs without any errors, and I have already added the extension=solr.so incantation to /etc/php5/mods-available/solr.ini.

    What magic should I do to make the Solr extension work? Is it true that the only solution I have is to downgrade libcurl to the version from before the upgrade?

    Read the article

  • Samba share not on network after upgrading to Ubuntu 12.04 LTS [migrated]

    - by Sylvain Huard
    I just upgraded an old Ubuntu box to 12.04 LTS (machine named A-Ubuntu). This is an upgrade, not a format-and-reinstall; all the accounts and config were preserved. The basic setup is a local network with two Ubuntu machines (let's say A-Ubuntu and B-Ubuntu) and a Mac (C-MAC). Before the upgrade, all of them could see each other by name, not only by IP address. The local network has a D-Link router where everybody is connected over wired RJ-45 Ethernet (not Wi-Fi). Since the A-Ubuntu upgrade, we can't see this machine's name on the network, and its name is no longer in the machine list on the D-Link router; we can see its IP address only. I can't access A-Ubuntu from the other two by its name, but I can ping it by its address (192.168.0.109). From A-Ubuntu, I can connect to and see the shared Samba folders on B-Ubuntu and C-MAC, but from B-Ubuntu and C-MAC I can't connect to A-Ubuntu. Correct me if I'm wrong, but this tells me that Samba should be fine and the real problem is that A-Ubuntu does not advertise its name on the network, so the D-Link does not have it in its table and nobody else finds it. After a lot of googling, I see that this is the job of Avahi and mDNS. Those packages are running; I checked multiple config files for samba, avahi and mdns to confirm they look like the examples on the web and similar to what I find on the working B-Ubuntu machine. They are the same. I did multiple service restarts of samba and avahi, and removed the firewall to make sure it does not block the hostname broadcast. I rebooted multiple times to make sure the changes I was making were effective. Still, I can't see the A-Ubuntu name on the network. Any idea what it can be, or where to look next?

    Read the article

  • How to fix a bootable USB Kubuntu installation when the drive has maxed out?

    - by NoCatharsis
    I used Universal-USB-Installer-v1.5.1 from PenDriveLinux.com with Kubuntu 10.04 so I could set up my 4GB flash drive as a totally independent installation. Unfortunately, there was an OS upgrade available which Kubuntu downloaded and attempted to install. This, along with some other software, apparently maxed out my drive before I realized it. Now when I try to boot from the drive, everything boots as normal to the OS boot screen where I select "Boot from this Kubuntu USB Installation." The startup process initiates, then stalls about halfway through and hangs indefinitely. I'm guessing the drive is trying to use space it doesn't have and completely stops working. I realize that once the OS upgrade is in place, the old files could be deleted for a potential 700MB space gain. However, I just have no way to get into the OS and complete the upgrade. My main OS is Windows 7. Is there a way I can fix this issue from within Windows without formatting the entire drive and reinstalling Kubuntu from scratch?

    Read the article

  • Upgrading PHP, MySQL old-passwords issue

    - by Rushyo
    I've inherited a Windows 2k3 server running an XAMPP installation from the stone age. I needed to upgrade PHP to facilitate an upgrade to MediaWiki, to facilitate a new MediaWiki extension (to facilitate some documentation, to facilitate doing my job, to facilitate getting paid, to facilit... you get the idea). However, installing a new version of PHP resulted in PHP's MySQL libraries refusing to communicate using MySQL's 'old style' 152-bit passwords. Not a problem in theory: the MySQL installation is post-4.1, so it should have the functionality to upgrade the user's passwords from 152-bit to 328-bit (what a weird hashing algorithm...). I ran the following on MySQL:

    SET PASSWORD = PASSWORD('foo');

    but querying:

    SELECT user, password FROM mysql.user;

    returned just the same password I started out with: 152-bit. Now, I suspect you're thinking 'AHA! old-passwords is on!'. Unfortunately it's not: I've disabled it in the configuration (explicitly set it to 0), made doubly sure I have an absolute reference to that configuration file, and ensured the service isn't using the --old-passwords flag. The service was restarted after each and every operation. So I went onto another system, generated the 328-bit hash there, and copied the hash over to the first MySQL instance. Unfortunately, that didn't work either (I did remember to FLUSH PRIVILEGES). The application error is: "mysqlnd cannot connect to MySQL 4.1+ using the old insecure authentication. Please use an administration tool [...snip...]". Is there anything else I can try to get PHP to recognise MySQL as not using the 'old insecure authentication'? MySQL seems to be stuck in 'old-passwords' mode and I can't get it out of it.

    Read the article

  • I want to dual boot Windows 8 on a Macbook Pro that doesn't already have Windows. Do I have to buy Windows twice?

    - by Cam Jackson
    My girlfriend just bought a MacBook Pro, and she wants to dual boot OS X with Windows. Specifically, she would like to use Windows 8. What I already know is the following:

    Windows 8 discs are only meant for upgrading from previous versions of Windows.
    Windows 8 discs can be used to do a clean install, but (officially) only if there's already a legit version of Windows on the hard disk.
    I've read somewhere of a disc being used to install Windows 8 on a fresh, out-of-the-box hard drive, and it all went well until the activation phase, where it said that the disc could only be used for upgrades.

    The logical conclusion would be that in my circumstance, the only option is to buy a full (non-upgrade) retail copy of Windows 7, install that using Boot Camp, then load up Windows 7, insert the Windows 8 upgrade disc and do the 7-to-8 upgrade. However, I've read quite a few blog posts of people installing Windows 8 using Boot Camp (e.g., Ars Technica), which leads me to believe that it might be possible to do so without installing Win7 first. The problem is that I'm not sure if these people were using preview versions, which obviously won't have the license issues down the track. Can anyone provide a definitive answer as to how to put Win8 on a Mac?

    Read the article

  • Group Policy - Published software not upgrading

    - by VokinLoksar
    I'm testing this with mercurial MSIs, but it's the same for other packages. I've created a new group policy and added an old version of mercurial to User software installation as a Published package. On a Windows 7 client I install the package through Programs and Features. The installation works fine. Now, I would like to publish an updated version of mercurial. I create a new Published package. Under 'Upgrades' I configure it to replace (upgrade also doesn't work) the old version and mark this upgrade as 'Required'. The old package is not removed. The Windows 7 client is then restarted. When I log back in, I see a status message saying something like 'Removing managed software Mercurial ...'. There is no message about installation of the upgrade. If I look in Programs and Features, I can see the new version of mercurial listed. However, the actual mercurial directory under Program Files is missing. It's as though the installation recorded information about the MSI, but didn't actually install anything after removing the old version. As I mentioned, this isn't specific to mercurial. I've tried using other apps and have yet to find one that can be upgraded via a Published package. Using Assigned packages in Computer Configuration works without problems, but I would like this software to be optional rather than required. Ideas?

    Read the article

  • Where can I legally obtain the 64bit version of Windows 8?

    - by Harsha K
    No, I am not looking to pirate. I bought a key through the Upgrade Assistant (for just $15 due to the upgrade offer), but it downloaded an ISO file that was between 2.3 and 2.5 GB, which doesn't make sense to me, because the evaluation version of Windows 8 x64 is closer to 3.4 GB in size. I assumed the Upgrade Assistant would be intelligent enough to realize that it is being run on a Windows 7 x64 machine and, by extension, download the x64 build. Previously, I was able to legally download the ISOs (sans the keys, of course) from the Digital River host; I do not see an option to do that now. I'm not interested in risking downloading a tampered ISO. I want to do it through Microsoft channels, but I just don't see how. As you may imagine, search terms such as "Windows 8 official download link" result in a plethora of obviously spyware-infested piracy sites. If there's any non-exposing way for me to prove that I have legally purchased Windows and that I'm genuinely looking for this answer, please let me know. For reference, what I am looking for is similar to the answer given in this question for Windows 7: Where do I download Windows 7 (legally from Microsoft)?

    Read the article

  • How to handle payment types with varying properties in the most elegant way

    - by Byron Sommardahl
    I'm using ASP.NET MVC 2. Keeping it simple, I have three payment types: credit card, e-check, or "bill me later". I want to:

    choose one payment type
    display some fields for that payment type in my view
    run some logic using those fields (specific to the type)
    display a confirmation view
    run some more logic using those fields (specific to the type)
    display a receipt view

    Each payment type has fields specific to the type: maybe 2 fields, maybe more. For now, I know how many and which fields, but more could be added. I believe the best thing for my views is to have a partial view per payment type to handle the different fields, and let the controller decide which partial to render (if you have a better option, I'm open). My real problem comes from the logic that happens in the controller between views. Each payment type has a variable number of fields. I'd like to keep everything strongly typed, but it feels like some sort of dictionary is the only option. Add to that the specific logic that runs depending on the payment type. In an effort to keep things strongly typed, I've created a class for each payment type (no interface or inherited base type, since the fields differ per payment type), plus a Submit() method for each payment type; a sketch of this arrangement is shown below. Then, while the controller is deciding which partial view to display, it also assigns the target of the submit action. This is not an elegant solution and feels very wrong. I'm reaching out for a hand. How would you do this?
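
    For concreteness, a minimal sketch of the arrangement the question describes: one unrelated model class per payment type, each with its own strongly typed submit action. All class, property, and action names here are hypothetical:

    using System.ComponentModel.DataAnnotations;
    using System.Web.Mvc;

    // Hypothetical models: no shared interface, since the fields differ per type.
    public class CreditCardPayment
    {
        [Required] public string CardNumber { get; set; }
        [Required] public string Cvv { get; set; }
    }

    public class ECheckPayment
    {
        [Required] public string RoutingNumber { get; set; }
        [Required] public string AccountNumber { get; set; }
    }

    public class CheckoutController : Controller
    {
        // The rendered partial decides which of these actions its form posts to.
        [HttpPost]
        public ActionResult SubmitCreditCard(CreditCardPayment model)
        {
            // ... credit-card-specific logic ...
            return View("Confirm", model);
        }

        [HttpPost]
        public ActionResult SubmitECheck(ECheckPayment model)
        {
            // ... e-check-specific logic ...
            return View("Confirm", model);
        }
    }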

    Read the article

  • RJS error: TypeError: element is null

    - by salilgaikwad
    Hi all, I got an RJS error, "TypeError: element is null", while using Ajax.

    In the view I used:

    <%= periodically_call_remote(:url => {:action => 'get_user_list', :id => '1'}, :frequency => '5') %>

    In the controller:

    render :update do |page|
      page.replace_html 'chat_area', :partial => 'chat_area', :object => [@chats, @user] if @js_update
    end

    In the partial chat_area:

    <% if !@chats.blank? && !show_div(@chats).blank? %>
      <% show_div_id = show_div(@chats) %>
      <% for chat in @chats %>
        <div style="display: <%= (chat.id == show_div_id) ? 'block' : 'none' %>;">
          <% form_remote_for(:chat, :url => {:controller => 'chats', :action => 'create', :id => 1}, :html => {:name => "form_#{chat.id}"}, :complete => "resetContent('#{chat.id}');") do |f| %>
            <%= f.hidden_field :sessionNo, :value => chat.sessionNo %>
            <%= f.text_area :chatContent, :id => "chatContent_field_#{chat.id}", :cols => "100", :rows => "6", :onKeyPress => "return submitenter(this,event);" %>
          <% end %>
        </div>
      <% end %>
    <% else %>
    <% end %>

    The chat_area div is rendered in index.html.erb:

    <table border="0" width="100%" cellspacing="0" cellpadding="0">
      <tbody>
        <tr>
          <td align="left" width="80%" valign="top">
            <%= text_area :chat, :chatContent, :id => "chatContent_field", :cols => "100", :rows => "6" %>
          </td>
          <td align="left" width="20%" valign="bottom" style="padding-left:10px;">
            <div id="chat_area">
              <%= render :partial => 'chat_area' %>
            </div>
          </td>
        </tr>
      </tbody>
    </table>

    Any help is appreciated. Regards, Salil Gaikwad

    Read the article

  • ASP.NET/C# - Deserialize XML, XSD.exe created multiple classes

    - by Barryman9000
    I'm trying to deserialize some XML (returned from a web service) into an object that I created using XSD.exe. The XSD executable created a .cs file with a different partial class for each parent node in the XML. For example, the beginning of the XML looks like this (why can't I post XML here? sorry, this is ugly):

    <disclaimer><notificationDescription>Additional charges will apply.</notificationDescription></disclaimer><quote><masterQuote></masterQuote></quote>

    The class file generated by XSD.exe has a partial class named disclaimer with a get/set string for notificationDescription, and another partial class, quoteMasterQuote, with the corresponding child nodes as public strings. How can I deserialize this XML into multiple classes? I found this code, but it seems like it'll only work for one object:

    public static PricingResponse2 DeSerialize(string _xml)
    {
        PricingResponse2 _resp = new PricingResponse2();
        StringReader _strReader = new StringReader(_xml);
        XmlSerializer _serializer = new XmlSerializer(_resp.GetType());
        XmlReader _xmlReader = new XmlTextReader(_strReader);
        try
        {
            _resp = (PricingResponse2)_serializer.Deserialize(_xmlReader);
            return _resp;
        }
        catch (Exception ex)
        {
            string _error = ex.Message;
            throw;
        }
        finally
        {
            _xmlReader.Close();
            _strReader.Close();
            _strReader.Dispose();
        }
    }

    This is the first time I've tried this, so I'm a little lost.
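
    Since the fragments above share no single document root, one common approach is to wrap them in a root element and deserialize into one root class whose properties are the XSD-generated types. A sketch under that assumption; the root element name and wrapper class are hypothetical, and the property types must match the classes XSD.exe actually generated:

    using System.IO;
    using System.Xml.Serialization;

    [XmlRoot("pricingResponse")] // hypothetical wrapper root
    public class PricingResponseWrapper
    {
        [XmlElement("disclaimer")]
        public disclaimer Disclaimer { get; set; } // XSD.exe-generated type

        [XmlElement("quote")]
        public quote Quote { get; set; } // XSD.exe-generated type
    }

    public static PricingResponseWrapper DeserializeAll(string xml)
    {
        // Wrap the root-less fragments so the serializer sees one document.
        string wrapped = "<pricingResponse>" + xml + "</pricingResponse>";
        var serializer = new XmlSerializer(typeof(PricingResponseWrapper));
        using (var reader = new StringReader(wrapped))
        {
            return (PricingResponseWrapper)serializer.Deserialize(reader);
        }
    }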

    Read the article

  • When using Data Annotations with MVC, pros and cons of using an interface vs. a MetadataType

    - by SkippyFire
    If you read this article on validation with the Data Annotation validators, it shows that you can use the MetadataType attribute to add validation attributes to properties on partial classes. You use this when working with ORMs like LINQ to SQL, Entity Framework, or SubSonic. Then you can use the "automagic" client- and server-side validation; it plays very nicely with MVC. However, a colleague of mine used an interface to accomplish exactly the same result. It looks almost exactly the same and functionally accomplishes the same thing. So instead of doing this:

    [MetadataType(typeof(MovieMetaData))]
    public partial class Movie
    {
    }

    public class MovieMetaData
    {
        [Required]
        public object Title { get; set; }

        [Required]
        [StringLength(5)]
        public object Director { get; set; }

        [DisplayName("Date Released")]
        [Required]
        public object DateReleased { get; set; }
    }

    He did this:

    public partial class Movie : IMovie
    {
    }

    public interface IMovie
    {
        [Required]
        object Title { get; set; }

        [Required]
        [StringLength(5)]
        object Director { get; set; }

        [DisplayName("Date Released")]
        [Required]
        object DateReleased { get; set; }
    }

    So my question is: when does this difference actually matter? My thoughts are that interfaces tend to be more "reusable", and that making one for just a single class doesn't make that much sense. You could also argue that you could design your classes and interfaces in a way that allows you to use interfaces on multiple objects, but I feel like that is trying to fit your models into something else, when they should really stand on their own. What do you think?

    Read the article

  • Refactor LINQ to SQL custom properties that instantiate a DataContext

    - by Thiago Silva
    I am working on an existing ASP.NET MVC app that started small and has grown over time to the point of requiring a good re-architecture and refactoring. One thing that I am struggling with is that we've got partial classes of the L2S entities so we could add some extra properties, but these properties create a new DataContext and query the DB for a subset of data. This is the equivalent of writing the following in SQL, which is not a very good way to write this query, as opposed to using joins:

    SELECT tbl1.stuff,
           (SELECT nestedValue FROM tbl2 WHERE tbl2.Foo = tbl1.Bar),
           tbl1.moreStuff
    FROM tbl1

    So, in short, here's what we've got in some of our partial entity classes:

    public partial class Ticket
    {
        public StatusUpdate LastStatusUpdate
        {
            get
            {
                // This static method call returns a new DataContext but needs to be refactored.
                var ctx = OurDataContext.GetContext();
                var su = Compiled_Query_GetLastUpdate(ctx, this.TicketId);
                return su;
            }
        }
    }

    We've got some functions that create a compiled query, but the issue is that we also have some DataLoadOptions defined in the DataContext, and because we instantiate a new DataContext for getting these nested properties, we get the exception "Compiled Queries across DataContexts with different LoadOptions not supported". The first DataContext comes from a DataContextFactory that we implemented with the refactorings, but this second one is just hanging off the entity property getter. We're implementing the Repository pattern in the refactoring process, so we must stop doing stuff like the above. Does anyone know of a good way to address this issue?
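
    One direction that fits the Repository pattern mentioned above is to move the lookup out of the entity property and into a repository method that reuses the factory-created DataContext, so no second context (and no conflicting LoadOptions) is ever created. A sketch only; the table and column names (StatusUpdates, TicketId, CreatedOn) are assumptions, not the app's real schema:

    public class TicketRepository
    {
        private readonly OurDataContext _ctx;

        // The same factory-created context the rest of the request uses.
        public TicketRepository(OurDataContext ctx) { _ctx = ctx; }

        public StatusUpdate GetLastStatusUpdate(int ticketId)
        {
            // Single context, single query: L2S translates this into a
            // TOP(1) ... ORDER BY ... DESC rather than a per-property subquery.
            return _ctx.StatusUpdates
                       .Where(su => su.TicketId == ticketId)
                       .OrderByDescending(su => su.CreatedOn)
                       .FirstOrDefault();
        }
    }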

    Read the article

  • Database design for credit based purchases

    - by FreshCode
    I need an elegant way to implement credit-based purchases for an online store with a small variety of products which can be purchased using virtual credit or real currency. Alternatively, products could be priced only in credits.

    Previous work: I have implemented credit-based purchasing before, using different product types (e.g. Credit, Voucher or Music) with post-order processing to assign purchased credit to users in the form of real currency, which could subsequently be used to discount future orders' charge totals. This worked fairly well as a makeshift solution, but did not succeed in disconnecting the virtual currency from the real currency, which is what I'd like to do, since spending credits is psychologically easier for customers than spending real currency.

    Design: I need guidance on designing the database correctly, with support for the simultaneous bulk purchase of credits at a discount along with real-currency products. Alternatively, should all products be priced in credits, with only credit having a real currency value?

    Existing database design:

    Partial Products table: ProductId, Title, Type, UnitPrice, SalePrice

    Partial Orders table: OrderId, UserId (related to Users table, not shown), Status, Value, Total

    Partial OrderItems table (similar to CartItems table): OrderItemId, OrderId (related to Orders table), ProductId (related to Products table), Quantity, UnitPrice, SalePrice

    Prospective UserCredits table: CreditId, UserId (related to Users table, not shown), Value (+/- value, summed over time to determine the saldo), Date

    I'm using ASP.NET MVC and LINQ-to-SQL on a SQL Server database.
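
    To make the prospective UserCredits ledger concrete, here is a sketch of how a balance query could look in LINQ-to-SQL, matching the columns listed above; the DataContext name and entity class are hypothetical:

    // Hypothetical: the saldo is the sum of signed ledger entries for a user.
    public static decimal GetCreditBalance(StoreDataContext db, int userId)
    {
        // The nullable cast avoids an InvalidOperationException when the
        // user has no credit rows yet (SUM over an empty set returns NULL).
        return db.UserCredits
                 .Where(c => c.UserId == userId)
                 .Sum(c => (decimal?)c.Value) ?? 0m;
    }

    Keeping credits as immutable +/- ledger rows (a positive row per credit purchase, a negative row per credit spend), rather than a mutable balance column, preserves an audit trail and sidesteps concurrent-update problems.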

    Read the article
