Search Results

Search found 7677 results on 308 pages for 'wordpress shopping cart plugins'.

  • Set up and configure an MVC4 project for a Cloud Service (web role) and SQL Azure

    - by MagnusKarlsson
    I aim to keep this blog post updated and to add related posts to it. Since there are a lot of these guides out there, I link to others who have done much the same before me; a kind of blog-DRY pattern that I'm aiming for. I also keep all my mistakes and misconceptions here for others to see. As an example: if I hit a stack trace, I will google it if I don't directly figure out the reason for it, and then probably take the most plausible result and try it out. If it fails because I misinterpreted the error, I will not delete it from the log but keep it for future reference and for others to see. That way, people who find this blog can see multiple solutions for indexed stack traces, and I can better remember how to do stuff. To avoid my errors, I recommend you read through it all before going from start to finish.

    The steps:
    1. Set up the project in VS2012 (msdn blog).
    2. Set up the Azure services (first half of the mpspartners.com blog).
    3. Set up connection strings and configuration files (msdn blog + notes).
    4. Export certificates.
    5. Create the Azure package from VS2012 and deploy to staging (same steps as for production).
    6. Fix the connection string error.

    1. Set up the Visual Studio project: http://blogs.msdn.com/b/avkashchauhan/archive/2011/11/08/developing-asp-net-mvc4-based-windows-azure-web-role.aspx

    2. Then log in to Azure to set up the services. Stop following this guide at the "publish website" part, since we'll be uploading a package instead: http://www.mpspartners.com/2012/09/ConfiguringandDeployinganMVC4applicationasaCloudServicewithAzureSQLandStorage/

    3. When everything is set up (connection strings for debug and release and all), follow this guide to set up the configuration files: http://msdn.microsoft.com/en-us/library/windowsazure/hh369931.aspx

    Trying to package the application at this step will generate the following warning:

        3>MvcWebRole1(0,0): warning WAT170: The configuration setting 'Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString' is set up to use the local storage emulator for role 'MvcWebRole1' in configuration file 'ServiceConfiguration.Cloud.cscfg'. To access Windows Azure storage services, you must provide a valid Windows Azure storage connection string.

    To fix it, right-click the web role under Roles in Solution Explorer and choose Properties. Choose "Service configuration: Cloud". Under "Specify storage account credentials", copy/paste the account name and key from the Azure management portal (image 3.1).

    4. Right-click Remote Desktop Configuration, select the certificate, and export it to a file. We need to allow it in the management portal (image 4.1).

    5. Now right-click the cloud project and select Package (5.1: dialogue box; 5.2: package success). Copy the path to the packaged file and go to the management portal again. Click your web role, choose staging (or production), and upload (5.3). Tick the box about the single instance if that's what you want or if you don't know what it means; otherwise the following will happen (see image 4.6). (5.4: dialogue box.) When you have clicked the accept button you will see a screen with some green indicators down in the right corner; click them if you want to see status (5.5: information screen). 5.6: "Failed to deploy application. The uploaded application has at least one role with only one instance. We recommend that you deploy at least two instances per role to ensure high availability in case one of the instances becomes unavailable." To fix this, go back to step 5.4.

    If you forgot to (or just didn't know you were supposed to) export your certificates, the following error will occur. As a side note, the following thread suggests ticking "Enable Remote Desktop for all roles" when right-clicking BIAB and choosing "Package"; but in my case it was the missing certificates. I found the solution here: http://social.msdn.microsoft.com/Forums/en-US/dotnetstocktradersampleapplication/thread/0e94c2b5-463f-4209-86b9-fc257e0678cd (5.7, 5.8: success; 5.9: nice URL and all; more on that in another blog post).

    6. If you try to log in and get an error, many web sites suggest this is because you need http://nuget.org/packages/Microsoft.AspNet.Providers.LocalDB or http://nuget.org/packages/Microsoft.AspNet.Providers. But it can also be that you don't have the correct setup for transforming connection strings from your web.config to your debug.web.config (or release.web.config, whichever you're using). Run it as an ordinary project in your solution, then go to the management portal and click update.

    Read the article

  • Universities 2030: Learning from the Past to Anticipate the Future

    - by Mohit Phogat
    What will the landscape of international higher education look like a generation from now? What challenges and opportunities lie ahead for universities, especially “global” research universities? And what can university leaders do to prepare for the major social, economic, and political changes—both foreseen and unforeseen—that may be on the horizon? The nine essays in this collection proceed on the premise that one way to envision “the global university” of the future is to explore how earlier generations of university leaders prepared for “global” change—or at least responded to change—in the past. As the essays in this collection attest, many of the patterns associated with contemporary “globalization” or “internationalization” are not new; similar processes have been underway for a long time (some would say for centuries).[1] A comparative-historical look at universities’ responses to global change can help today’s higher-education leaders prepare for the future. Written by leading historians of higher education from around the world, these nine essays identify “key moments” in the internationalization of higher education: moments when universities and university leaders responded to new historical circumstances by reorienting their relationship with the broader world. Covering more than a century of change—from the late nineteenth century to the early twenty-first—they explore different approaches to internationalization across Europe, Asia, Australia, North America, and South America. Notably, while the choice of historical eras was left entirely open, the essays converged around four periods: the 1880s and the international extension of the “modern research university” model; the 1930s and universities’ attempts to cope with international financial and political crises; the 1960s and universities’ role in an emerging postcolonial international development apparatus; and the 2000s and the rise of neoliberal efforts to reform universities in the name of international economic “competitiveness.” Each of these four periods saw universities adopt new approaches to internationalization in response to major historical-structural changes, and each has clear parallels to today. Among the most important historical-structural challenges that universities confronted were: (1) fluctuating enrollments and funding resources associated with global economic booms and busts; (2) new modes of transportation and communication that facilitated mobility (among students, scholars, and knowledge itself); (3) increasing demands for applied science, technical expertise, and commercial innovation; and (4) ideological reconfigurations accompanying regime changes (e.g., from one internal regime to another, from colonialism to postcolonialism, from the cold war to globalized capitalism, etc.). Like universities today, universities in the past responded to major historical-structural changes by internationalizing: by joining forces across space to meet new expectations and solve problems on an ever-widening scale. Approaches to internationalization have typically built on prior cultural or institutional ties. In general, only when the benefits of existing ties had been exhausted did universities reach out to foreign (or less familiar) partners. 
As one might expect, this process of “reaching out” has stretched universities’ traditional cultural, political, and/or intellectual bonds and has invariably presented challenges, particularly when national priorities have differed—for example, with respect to curricular programs, governance structures, norms of academic freedom, etc. Strategies of university internationalization that either ignore or downplay cultural, political, or intellectual differences often fail, especially when the pursuit of new international connections is perceived to weaken national ties. If the essays in this collection agree on anything, they agree that approaches to internationalization that seem to “de-nationalize” the university usually do not succeed (at least not for long). Please continue reading the other essays at http://globalhighered.wordpress.com/

    Read the article

  • Open Source MariaDB, the MySQL fork to replace MySQL?

    - by Jenson
    Frankly speaking, I've been out of touch with the open-source world for quite some time. Until recently, after I joined the new government agency, I managed to do some research while given time to learn new technologies and languages. I started reading tech blogs and tech news again (since I'm not as busy as before, when I needed to rush for project deadlines in and out), and I spotted this MariaDB, which really attracted my attention. This is the link to the ZDNet article: http://www.zdnet.com/open-source-mariadb-a-mysql-fork-challenges-oracle-7000008311/ ("Open-Source MariaDB, a MySQL fork, challenges Oracle"). Yes, you're right, MariaDB is a MySQL fork, and as mentioned in the article, MariaDB is run by the founder of MySQL, Michael 'Monty' Widenius, and he claims MariaDB is faster, more secure, and has more features than MySQL. I'm actually very excited to know that the code is maintained by the same dedicated core team as MySQL over the past 18 years. They have even formed a foundation, the MariaDB Foundation, to promote MariaDB. Already, a lot of open-source software officially supports MariaDB, such as Drupal, Jelastic (Java in the cloud), Kajona, MediaWiki, phpMyAdmin, Plone, SaltOS, WordPress, and the Zend Framework. But hosting service providers might not be readily supporting MariaDB in their hosting solutions. Time will tell whether MariaDB will be the real replacement for MySQL; I'm sorry, I don't think I should use "alternative" here ;-) For more information, please visit the official MariaDB site.

    Read the article

  • Framework for Everything - Where to begin? [Longer post]

    - by SquaredSoft
    Back story (feel free to skip down to the specific question): Hello, I've been very interested in the idea of abstract programming for the last few years. I've made about 30 attempts at creating a piece of software that is capable of almost anything you throw at it. Some of these attempts have taken upwards of a year and, while getting close, I've never released anything beyond my compiler. This is something I've always tried wrapping my head around, and something is always missing. With the title, I'm sure you're assuming, "Yes, of course, you noob! You can't account for everything!" To which I have to reply, "Why not?"

    To give you some background on what I'm talking about, this all started with writing maybe-shade-of-grey-hat SEO software. I found myself constantly having to create similar, but slightly different, sets of code. I've gone through as many iterations of ways to communicate over HTTP as the universe has particles. "How many times am I going to have to write this multi-threaded class?" is something I found myself asking a lot. Sure, I could create a class library and just work with that, but I always felt I could optimize what I had, which often was a large undertaking and typically involved frequent use of the CTRL+A keyboard shortcut mixed with the Delete key.

    It dawned on me that it was time to invest in a plugin system. This would allow me to simply add snippets of code as time went on, keep them under version control, and distribute small chunks of code rather than something that encompasses only a specific function or design. This comes with its own complexity, of course, and by the time I had finished the software scope for this addition, it hit me that I would want to add to everything in the software, not just a new HTTP method or automation code for a specific website. Great, we're getting more abstract. However, the software that I have in my mind comes down to quite a few questions regarding its execution. I have to have some parameters for what I am going to do. After writing down what the perfect software would do in my mind, I came up with this list of requirements:

    - It should be able to use networking.
    - A "macro" or "expression" system, which would allow people to do something like: =First(=ParseToList(=GetUrl("http://www.google.com?q=helloworld!"), Template.Google))
    - Multithreaded.
    - Able to add UI elements through some type of XML, so people can make their own add-ons, etc.
    - Can use third-party APIs through the plugins, such as Microsoft CRM, Exchange, etc.

    This would allow the software to essentially be used for everything; really, any task you wish to automate, in a simple way. (I've sketched below what I mean by a plugin contract.) Making the UI was also extremely hard. How do you do all of this? It's very difficult.

    So, my question: with so many attempts at this, I'm out of ideas for how to successfully complete it. I have a very specific idea in my mind, but I keep failing to execute it. I'm a self-taught programmer. I've been doing it for years and work professionally in it, but I've never encountered something as complex and in-depth as a system which essentially does everything. Where would you start? What are the best practices for design? How can I avoid constantly having to go back and optimize my software? What can I do to generalize this and draw everything out to completion? These are things I struggle with. P.S., I'm using C# as my main language.
    I feel like in this example I might be hitting the outer limits of the language, although I don't know if that is the case, or if I'm just a bad programmer. Thanks for your time.
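    For concreteness, here is a minimal sketch of the kind of plugin contract being described above; every name in it is invented purely for illustration:

        using System.Collections.Generic;

        // Hypothetical host-side contract: each add-on implements one narrow
        // capability, and the host chains them together, which is what an
        // expression like =First(=ParseToList(=GetUrl(...))) would resolve to.
        public interface ITaskPlugin
        {
            string Name { get; }

            // Loosely typed input/output so one plugin's result can feed the
            // next plugin in an expression chain.
            object Execute(object input, IDictionary<string, object> settings);
        }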

    Read the article

  • How to upgrade a remote server from 8.10 to newer version?

    - by DisgruntledGoat
    I have a remote server still running Ubuntu 8.10 (since upgraded to 9.04; see the update below) that I can only access via SSH. If I run apt-get update I get a bunch of 404 errors on the packages. I've asked a few questions on Server Fault but got nowhere. Here's what I've done:

    1. Ran apt-get update, which returns errors like:
        Err http://gb.archive.ubuntu.com intrepid/main Packages 404 Not Found
        [and the same for many other packages]
    2. Ran do-release-upgrade, which returns:
        Checking for a new ubuntu release
        Failed Upgrade tool signature
        Failed Upgrade tool
        Done downloading
        extracting 'jaunty.tar.gz'
        Failed to extract
        Extracting the upgrade failed. There may be a problem with the network or with the server.
    3. Edited /etc/update-manager/release-upgrades and changed Prompt=normal to Prompt=lts (as suggested here). Running do-release-upgrade after this returns:
        Checking for a new ubuntu release
        current dist not found in meta-release file
        No new release found
    4. (Updated) Followed the advice in this question and changed /etc/apt/sources.list to refer to jaunty instead of intrepid. However, that distro is not online anymore either. A comment there says I have to upgrade in chronological order.

    So basically, it seems like I cannot upgrade because my current distro is out of date and no longer supported. Is there a way to upgrade directly to 10.x or 11.x? Note that, as this is a server, I only have command-line access.

    UPDATE 24/11: I have managed to upgrade from 8.10 to 9.04. Ubuntu's EOL Upgrades page provides some alternate URLs for apt sources (a sketch of what the entries look like is below). I also needed to update /var/lib/update-manager/meta-release to point to the old-releases server too. However, now I cannot upgrade from 9.04 to 9.10. Running do-release-upgrade produces the same error as #2 above, except it says "Failed to fetch" (the URLs in meta-release are valid). The Ubuntu Jaunty upgrade page says it's necessary to upgrade using a CD image. I followed the instructions here, but it didn't work:

        A fatal error occurred
        Please report this as a bug and include the files /var/log/dist-upgrade/main.log and /var/log/dist-upgrade/apt.log in your report. The upgrade is now aborted. Your original sources.list was saved in /etc/apt/sources.list.distUpgrade.
        Traceback (most recent call last):
        File "/tmp/tmp.JLhTwVUugb/karmic", line 7, in sys.exit(main())
        File "/tmp/tmp.JLhTwVUugb/DistUpgradeMain.py", line 132, in main if app.run():
        File "/tmp/tmp.JLhTwVUugb/DistUpgradeController.py", line 1590, in run return self.fullUpgrade()
        File "/tmp/tmp.JLhTwVUugb/DistUpgradeController.py", line 1506, in fullUpgrade if not self.doPostInitialUpdate():
        File "/tmp/tmp.JLhTwVUugb/DistUpgradeController.py", line 762, in doPostInitialUpdate self.quirks.run("PostInitialUpdate")
        File "/tmp/tmp.JLhTwVUugb/DistUpgradeQuirks.py", line 83, in run for plugin in self.plugin_manager.get_plugins(condition):
        File "/tmp/tmp.JLhTwVUugb/computerjanitor/plugin.py", line 167, in get_plugins filenames = self.get_plugin_files()
        File "/tmp/tmp.JLhTwVUugb/computerjanitor/plugin.py", line 120, in get_plugin_files basenames = [x for x in os.listdir(dirname)
        OSError: [Errno 2] No such file or directory: './plugins'

    It does say to report the bug, but since this is an old unsupported release I don't know if it's worth doing. However, is there a way round this, to upgrade from 9.04 to 9.10 (and then finally to 10.04 LTS)?
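    For reference, the old-releases entries mentioned in the update looked roughly like this (a sketch based on the standard old-releases mirror layout; verify the exact components against the EOL Upgrades page):

        # /etc/apt/sources.list for an EOL release: point the dead mirrors at old-releases
        deb http://old-releases.ubuntu.com/ubuntu/ jaunty main restricted universe multiverse
        deb http://old-releases.ubuntu.com/ubuntu/ jaunty-updates main restricted universe multiverse
        deb http://old-releases.ubuntu.com/ubuntu/ jaunty-security main restricted universe multiverse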

    Read the article

  • How much am I worth hourly as a software/web developer? [closed]

    - by luckysmack
    I may be starting a new job very soon as a developer for both web and desktop software. The primary languages I will be using are ASP.NET with C#, with some PHP for existing projects (I've already had one interview, which went very well). The job deals primarily in advertising. But this is my first real job in the market; I have no degrees, but have some college time (~1 yr), so I am primarily self-taught. They are fully aware of my skill set and lack of degrees or certificates. I applied as an entry-level developer. It will be a permanent, full-time/hourly position, not a per-contract job. So since it's my cherry job, I'm not really sure what to ask for. Even though I'm self-taught, I'm pretty confident in my skills and know what I'm doing fairly well. I pick up on new concepts very well and find new things fairly easy to learn. Here is a very brief summary of my skills:

    - PHP: ~2 years
    - C#/.NET: 2 months
    - Python: basics only, ~1 month
    - OOP familiarity: great (1 year)
    - MVC familiarity: great (1 year)
    - PHP frameworks used: CakePHP (6 months), Yii (3 months), Lithium (3 months)
    - CMSs familiar with: Drupal (1.5 years), WordPress (only basics)

    I also have ~2 yrs experience maintaining my own VPS server and all the hassles that entails (Linux/Debian). Pretty much all of the above will be used at this job, although I will be using C# the vast majority of the time. I only recently started learning it, but I'm moving along fairly rapidly and it's all going smooth as butter.

    So what have I built? I have one proprietary site built in Drupal, which is used as an order log for products, inventory, and their shipments. It is also able to process payments through PayPal merchant services. I have worked on a handful of other small apps used here and there that I'm not able to show, but which worked fairly well (all in PHP, using frameworks). The business does fairly well and is far from a typical corporate-type environment; it is much closer to a small development studio, and it is based out of northern California. I don't know how much more info I can give on them.

    I also want this to be able to be referenced by other people, so I am looking for general tips and ideas to get an answer as well. I had trouble finding a reasonable range on other websites, which seemed to be either way too low or showed what a veteran developer makes. I know this is a fairly subjective question, but it is difficult to get a reasonable answer or guesstimate anywhere else. Even a little bit of help is much appreciated. So, as for the direct question, based on all this info (did I miss anything?): how much should I ask for hourly? How much am I worth as a software developer?

    Read the article

  • Video works with 'Try me' but not after install. What is the difference? (Ubuntu 12.04 LTS)

    - by HarveyP
    My hard drive got corrupted, so I did a reinstall. I tested YouTube in Firefox during the 'Try me' session and it worked; jerky, but it worked. I installed without all the updates (576 outstanding now) in order to get Firefox installed as per the demo, to no avail. In 'Try me' mode Firefox NEVER crashed! After install, Firefox crashed while I was typing 'youtube' in the address field. When I finally got to YouTube: no video. What is the difference between Firefox in 'Try me' and Firefox after install? Off to try some selected updates now to see if I can work it out for myself.

    In a previous installation I had several profiles and aliased Firefox with the -safe-mode switch to simplify startup of the most stable Firefox. I also found that Firefox startup in graphic mode worked better (but still without video) with all of the extensions disabled and all of the plugins set to "ask" and always denied. I have an SiS graphics card in an SiS motherboard (for XP) and an ancient Hyundai ImageQuest QV770 monitor. I have Ubuntu 12.04.1 LTS, one day after install, with only the immediate upgrades requested to the language pack (English UK), using the FR alternative keyboard, connected via domestic Wi-Fi from Orange (FT). I really want to use Skype, but won't bother installing it (again) without video, as I can do my SMS on Facebook, while Firefox is not crashed...

    Update: is something overflowing? I have just had to reboot in order to get Firefox to restart in any way, shape or form; the restart-on-crash form generates a new crash form, etc. It was, however, a good half hour before it crashed, so some improvement over conditions before the disk corruption. I have now installed all of the critical updates (332 recommended updates still outstanding), which included some relating to Firefox. Still no video. Still crashing, especially when on the Grepolis website. Since the reinstall I have had a lovely 1024x768 screen, but after the last Firefox crash and reboot I got a message about 'low graphics mode' and 'setting things myself'. I was not sufficiently tuned in at the time to take proper note; I have no doubt I shall see it again and shall report accordingly. I still have only laptop options for my screen and do not know how to rectify this.

    I spent a few days with Ubuntu on a different, newer machine, which has now suffered a graphics breakdown, so I returned to this old one again, but with a new flat-screen monitor. I found SiS drivers for my graphics card, BUT they are intended for Red Hat 7.2. I chose this over the version for 7.0 because I thought, what the hell, I might not be able to do anything with either of them, but this is the later one. The file will not open with the software manager; I found a similar problem on Overclock, but it has not helped me to install this driver. The file name is sis_drv.o-410 and it is currently idling away in my Downloads folder. I have tried the solution offered for another SiS problem, but this shows that my xserver-xorg-video-sis driver is up to date. I am now at a loss as to how to proceed if I can't install the latest SiS driver from SiS.

    Does nobody know how Firefox changes from "try me" to "installed"? Any time I MUST have video I reboot from the disk again, but this is tedious! Also, constant rebooting is one of the things I mock most about MS...

    UPDATE 10/6/2014: I have installed chromium-browser; worse, it crashes even more often than Firefox. I have installed Epiphany; better, video works but not the associated soundtrack. Firefox is version 14.01 in 'Try me' and version 29.0 from my install. Would it be useful to try to downgrade Firefox in order to get video?

    Read the article

  • Can someone explain the true landscape of Rails vs PHP deployment, particularly within the context of Reseller-based web hosting (e.g., Hostgator)?

    - by rcd
    Currently, I have a reseller account with the company HostGator. I design websites, which up until now have occasionally been wrapped in WordPress CMSs and the like (PHP applications). I then sell hosting (of the site I've designed) to the client, which is pretty simple, in that I can simply click a button and add a new shared hosting account/site with whatever settings I want. Furthermore, I then utilize WHMCS to automate billing and account management. It's a nice package and pretty simple. I pay something like $25 a month, and can sell a hundred accounts under this (because my clients' bandwidth requirements are low).

    Now I am finding the need to develop more customized applications, including a minimalist CMS and several proprietary things. I soon anticipate developing these apps for clients as well. Thus, I've spent the past few months learning Rails, and it's coming along well now. The thing that has nagged at me all along, though, is the deployment issue. I can't wrap my brain around it. It seems like all of the popular options (Heroku, etc.) have nice automation with git and are set up in the "Rails way". I get that (sort of). But it's terribly expensive: a single dyno, a helper, and the cheapest database (which they say is mainly suitable for testing) that isn't limited to 5MB runs $51. This is for ONE app!!! Throw in a "production" DB and you're over $200. This is like the same price as getting a server somewhere, right?

    Meanwhile, going back to what I guess is a "traditional" hosting environment with HostGator, their server only has Ruby 1.8.7 and Rails 2.3.5: no Rails 3, and no Passenger (not that I really understand the difference between CGI and mod_rails or whatever, but they say Passenger is the simplest). So am I to understand that if I build an app in Rails 3, it won't run at all on this host? But damn, I already have these accounts under my reseller account there, all running static HTML and/or PHP stuff, right? So what now? How do I get all of this under one simple (and affordable) roof?

    Forgive my ignorance, but I just don't get it. Managing a VPS is cool and all, but it entails learning server admin stuff and security, and it's expensive. I get that a shared and/or reseller "server-based" (forgive the terminology) plan may be inadequate for large-scale apps that use a lot of bandwidth. But what about those of us who are building real (but small and low-bandwidth) apps (with Rails) and who want to deploy them simply, cheaply, using the same conceptual approach as PHP? Even after learning all of this Ruby and Rails stuff for months, I'm questioning whether it's worth it when it comes to deployment. I want to build a small app, upload it to my home directory on a shared server account, and just make it run. Why should that be so hard? Am I just choosing the wrong language/framework? Forgive my ignorance in the subject; these questions are not rhetorical, I'm just trying to learn here. So:

    1) I'd appreciate it if someone could give me a good rundown of how to understand deployment in Rails vs. PHP.
    2) I'd appreciate it if someone could address my issue with running a hosting/web business around reseller hosting (HostGator) while also being able to host Rails apps. Can it be done? And how can a company like HostGator completely ignore what's current in Rails/Ruby?

    Thanks.

    Read the article

  • IBM Server Config questions

    - by Joel Coel
    I have a few questions on a potential server setup. First, the situation: last year we bought an IBM x3500 server with two Xeon E5410s, 9GB RAM, and 6 HDDs. The original intent for this server was to replace the old Exchange e-mail server. It was brought in and set up, and then shortly after we switched to Gmail. Shortly after that my predecessor left for greener pastures, and finally I was hired. So this nice server is now sitting (mostly) idle.

    This year I have budget again for one server, and of course I want to put this other server to work. I've been thinking about the best use for the two servers, and I think I finally have a plan. The idea is to use the two newer servers as a pair of VM hosts. I will set up each server with the same 8 VMs, but divide up the load so that only 4 are active per physical host. That means I've normally got 2GB RAM + 2 cores per host. I've done some load testing to pick out which servers to convert to virtual, and chose them so that each host would be capable of handling the entire set of 8 by itself in a pinch with 1 core and 1GB RAM each, but would be very taxed to do so. This should take our data center from 13 total servers down to 7. The "servers" I'm replacing are mostly re-purposed desktops, so I'm more than happy to be able to do this.

    Now it's time to go shopping for the new server. I'd like my two hosts to match as closely as possible, and so I'm looking at IBM again. It also helps that we have some educational matching grant money from IBM that I need to use to help pay for this system (we're a small private college). So finally (if you're not bored already), we come to my questions:

    1. Am I missing anything big or obvious in this plan? I'm a little worried about network performance, since the VM hosts will only have 4 NICs total where 8 used to be, but I don't think it will be a problem. Is there anything else like this I might be overlooking? Am I making it too complicated?

    2. IBM no longer has a good analog to last year's server. If I want to match the performance (8 cores, 9GB RAM, 1333MHz front-side bus, 6 spindles), I have to spend quite a bit more than we paid last year: $2K+, or nearly a 33% cost increase. This only brings a marginal increase in performance. The alternative to stay in budget is to take a hit on the FSB down to 800MHz or cut the number of cores in half, neither of which is attractive. The main cost culprit is the processor. IBM no longer offers the E5410. It's listed as a part, but not available in any of the server configs I've looked at. I'm considering getting the cheapest 800MHz-FSB dual-core Xeon I can configure and then buying the E5410s separately. That's still an extra $350 I wasn't counting on, but that's better than $2K. I want to know what others think of this: will it work, or will I end up with the wrong motherboard or some other issue? Am I missing a simple way to configure the server I really want?

    3. I don't really intend to do this, but one option to save some money is to omit the redundant power supply. Since my redundancy plan for these systems is to switch over to a completely different host, the extra power supply isn't fully necessary. That said, it's still very helpful to avoid even short downtimes while I switch over VMs. Has anyone done this?

    Read the article

  • How to set up a centralized backup server with lots of offsite workstations, intermittent internet connectivity, and stubborn users?

    - by Zac B
    This might be an impossible question. Context: we have a bunch of computers across around 1000 users. We have a centralized office where 900 of the users work most of the time. Most of the computers are laptops, and they are very frequently coming on and off the network for hours at a time. Users often take their computers home and do lots of work from home. In addition, there is a handful of users who work elsewhere in the country, who are offline (no internet connection whatsoever) for more than half of the time they use their machines. All of the machines are Windows 7/XP.

    Problem: people are always losing data. One day someone accidentally deletes a bunch of files. The next day someone else installs a bad driver or tries to mess with something in system32 and needs a personal data backup/reinstall of Windows. Because of how many of our business operations are done without an internet connection, and how frequently computers come on- and offline, it's unfeasible to make users use network storage for all of their data. We tried giving them Dropboxes, and they stored their files elsewhere. We bought and deployed Altiris, and they uninstalled it and blamed us when they couldn't get back files they accidentally deleted while they were offline and hadn't taken a backup in months. We tried teaching them backup best practices and using scheduled sync tools to upload things to the network drives, and they turned them off because they "looked like viruses". It doesn't help that many of these users are pretty high up in the business and are not amicable to any sort of "you need to do something regularly because we say so" solution.

    Question: other than finding another job where IT is treated differently and users are willing to follow best practices, how would people recommend I implement a file backup solution that supports the following?

    - Backs up to a centralized server over LAN or WAN whenever a network link becomes available, or on a schedule.
    - Supports interrupted/resumed backups (and hopefully file-delta-only backups), since connections to the network (WAN or LAN) are often slow and only open for half an hour or so.
    - Supports relatively rapid, "I accidentally deleted the TPS reports! Oh no!" single-file recovery, ideally administered from the central backup server rather than the client PC.
    - Supports local-to-local file-delta backup on a schedule, so that users without a network connection for a few days can still retrieve accidental deletions or whatnot. Ideally, the locally stored backups would be pushed up to the server whenever a network link is available.
    - Isn't configurable on the clients without certain credentials, because the CFOs (who won't give up their admin rights on the domain) will disable it if they can.
    - Backs up the entire hard drive. There are people who are self-righteous about storing things in C:\, or in the recycle bin, or in the C:\Windows dir (yes, I know).

    I'm fine integrating multiple products/solutions, or scripting different programs together myself (I'm a somewhat competent programmer), but I've been drawing a blank on where to start. Dropbox is folder-specific, Altiris doesn't cope with LAN outages or interrupted/resumed backups, and Volume Shadow Copy is awesome for a local-to-local solution, but I don't know how to push days of stored shadow copies up to a server in a 2-hour window of network access. The company is fine with spending decent money on this: thousands (USD) on a server and hundreds on clients, if necessary.

    I want to emphasize that this isn't a shopping-list request. While I wish there were a program out there that did what I want, I've looked pretty hard and not found anything that fits the bill. Instead, I'm hoping for ideas on where to start hacking things together from scratch/from different technologies to make something stable that works. Cheers!

    Read the article

  • Hardware recommendations / parts list for a modern, quiet ZFS NAS box - 2011-Feb edition [closed]

    - by dandv
    I want to build some really reliable storage for my data, and it seems that ZFS is the only filesystem at the moment that does live checksumming. That rules out DroboPro, so I'm looking at building a quiet ZFS NAS that would start with four 2TB-or-larger hard drives. I'd like this system to be very reliable and relatively future-proof for 2-3 years, so I'm willing to invest some $$$ and buy higher-end components. I did see questions here and on other forums about low-cost servers, but I'm not looking for those. I'd be super happy to go for an off-the-shelf solution, but I haven't found one that's quiet. I started doing the research (summarized on my wiki), but I realized that it just gets too complicated for what I know as a software dude, and I'm entering the analysis-paralysis area. At this point, I'm basically looking for a parts list for a configuration that will work (and is modern), and I know there are folks around here who are way more competent than me. I've built computers and am comfortable assembling one and messing with *nix; I can follow guides; I just want to end the decision process for the hardware and software configuration. What I've researched so far (not that these are very modern components):

    Case: I think I've settled on the Antec Twelve Hundred case because it cools well, is quiet, and simply has 12 bays that allow elastic mounting. The SilverStone Raven is its counter-candidate, but I find its construction quite odd.

    PSU: I'm torn between the Antec CP-850 and the Nexus RX-8500, but I did this research more than a year ago. The Nexus has a very uniform power profile, and I'd rather not have the Antec spin up and down based on load. On the other hand, I'm not sure how often my file server will draw more than 400W under use.

    Hard drives: I've read that WD Black drives are actually WD RE3s with a software setting changed. I'd also like to buy different drive types, not just four WDs. Recommendations? Right now I have a 2TB Hitachi Deskstar 7K300.

    Motherboard, CPU, and RAM: I have no idea, other than that the RAM must be ECC. I already asked a question here about ECC RAM, but I was misguided and was looking for a motherboard that would support USB 3.0 as well. I've learned to go with eSATA, or worry about USB later.

    Then there's the (liquid) cooling, the Wi-Fi card, and FreeBSD vs. OpenSolaris Express. Lastly, I'm wondering if I can make this PC into a media server by adding a Blu-ray drive and a good sound card. But support for Blu-ray is spotty on Linux, and I don't know if Windows 7 on VirtualBox would get sufficient hardware access to output HDMI or S/PDIF signals. (Running OpenSolaris virtualized is not an option because of the reliability risk.) Then there are HDCP concerns. Suggestions on that would be appreciated as well, but I don't want us to get sidetracked.

    A specific shopping list for the core components would be great, so I can start ordering and in the meantime educate myself with regard to the other issues. Finally, I think this could become a great FAQ for those technically inclined to build their own ZFS server but confused by the dizzying array of options out there, and I promise to compile the results and share my experience building and benchmarking the server.

    Read the article

  • Symantec Protection Suite and System Recovery 2011 Desktop Edition

    - by rihatum
    I am re-posting this, as my previous question was treated as if I were "shopping or seeking product recommendations" even though I was NOT; BTW, they deleted my comments too, which were not offensive in nature. Anyway, I have re-phrased some parts of my question and I hope the SF admins do not modify/edit this one; I will be most grateful for that. I have a lot of respect for the people who visit this site and help others!

    Just to clarify, and to go by SF rules: I am not asking someone to design this solution. I am simply seeking real-world examples, experiences, technical expert opinions/suggestions, any tips or tricks people may have, or any problems they may have faced while doing something similar to the below with these products. I am also not asking for capacity planning for storage; we have done some research, and I am seeking expert assurance/suggestions.

    We (our company) are planning to deploy Symantec Endpoint Protection and Symantec Desktop Recovery 2011 Desktop Edition to our 3000-4000 workstations (Windows 7, 32- and 64-bit), with a few hundred on Windows XP 32/64-bit. I have read the implementation guide for SEP and the tech notes for Desktop Recovery 2011. Our team has planned to deploy this as follows:

    - 1 x dedicated SQL 2008 R2 for Symantec Endpoint Protection (instead of using the embedded database)
    - 1 x dedicated SQL 2008 R2 for Symantec Desktop Recovery 2011 (instead of using the embedded database)
    - 1 x dedicated W2K8 R2 box for the SEPM (Symantec Endpoint Protection Manager, the management app)
    - 1 x dedicated W2K8 R2 box for the Symantec Desktop Recovery 2011 management application

    Agent deployment: as per the Symantec documentation for both of the above, an agent can be pushed via the management application (provided no firewalls are blocking the required ports, etc.; we have the Windows firewall disabled already).

    Server hardware:
    - Per SQL server: 16GB RAM + SAS disks + dual Xeon, RAID-10 for the SQL DB, or I can always mount a LUN from our existing Hitachi or EMC SAN.
    - SEPM server: 16GB RAM + SAS disks + dual Xeon.
    - System Recovery management server: 16GB RAM + SAS disks + dual Xeon.

    The above is the initial plan we have for 3000-4000 client workstations (Windows). Now my questions :-)

    a) If we had these users distributed amongst two sites with an AD DC/GC in each site, how would I restrict SEPM and the desktop management solution to only check for users in their respective site?

    b) At present all users are in one building, but we are going to move some departments to a new location (with dedicated connectivity). How would we control which SEPM/management server is responsible for which site?

    c) We have NetBackup in our environment backing up other servers. I am planning to protect these four boxes (2 x SQL, 1 x SEPM, 1 x System Recovery management server) via NetBackup, or I can use System Recovery 2011 Server Edition on all four of these boxes as well. (Licensing is not an issue, as we have the complete Symantec portfolio included in our license.)

    d) Now, saving desktop backups: what strategies have you implemented? Any best-practice recommendations for a large user base? I was thinking either to mount a LUN from our Hitachi SAN on the Symantec Recovery server itself, or to back up to the user's hard drive locally and then copy it over to a network location. Suggestions welcome :-)

    If you have anything to add or correct, that will be really helpful before diving into the actual implementation phase. I will be most grateful for your suggestions, recommendations, and corrections with the above. Many thanks!

    Read the article

  • Java JasperReports NullPointerException

    - by William Welch
    I am new to Java and I am running into this issue that I can't figure out. I inherited this project, and I have the following code in one of my scriptlets (the fourth line below is line 287 in FootwearReportsServlet.doGet):

        DefaultLogger.logMessage("DEBUG path: " + reportFile.getPath());
        DefaultLogger.logMessage("DEBUG parameters: " + parameters);
        DefaultLogger.logMessage("DEBUG jr: " + jr);
        bytes = JasperRunManager.runReportToPdf(reportFile.getPath(), parameters, jr);

    And I am getting the following results:

        DEBUG path: C:\Development\JavaWorkspaces\Workspace1\.metadata\.plugins\org.eclipse.wst.server.core\tmp0\wtpwebapps\RSLDevelopmentStandard\reports\templates\footwear\RslFootwearReport.jasper
        DEBUG parameters: {class_list_subreport=net.sf.jasperreports.engine.JasperReport@b07af1, signature_path=images/logo_reports.jpg, submission_id=20070213154168780, test_results_subreport=net.sf.jasperreports.engine.JasperReport@5795ce, logo_path=images/logos_3.gif, report_connection_secondary=com.mysql.jdbc.JDBC4Connection@2c39d2, testing_packages_subreport=net.sf.jasperreports.engine.JasperReport@1883d5f, signature_path2=images/logo_reports.jpg, tpid_subreport=net.sf.jasperreports.engine.JasperReport@17531fd, first_page_subreport=net.sf.jasperreports.engine.JasperReport@12504e0}
        DEBUG jr: net.sf.jasperreports.engine.data.JRMapCollectionDataSource@1630eb6
        Apr 29, 2010 4:15:13 PM org.apache.catalina.core.StandardWrapperValve invoke
        SEVERE: Servlet.service() for servlet FootwearReportsServlet threw exception
        java.lang.NullPointerException
            at net.sf.jasperreports.engine.fill.JRFiller.fillReport(JRFiller.java:89)
            at net.sf.jasperreports.engine.JasperFillManager.fillReport(JasperFillManager.java:601)
            at net.sf.jasperreports.engine.JasperFillManager.fillReport(JasperFillManager.java:517)
            at net.sf.jasperreports.engine.JasperRunManager.runReportToPdf(JasperRunManager.java:385)
            at com.rsl.reports.FootwearReportsServlet.doGet(FootwearReportsServlet.java:287)
            at javax.servlet.http.HttpServlet.service(HttpServlet.java:617)
            at javax.servlet.http.HttpServlet.service(HttpServlet.java:717)
            at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)
            at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
            at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
            at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
            at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
            at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
            at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
            at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298)
            at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:852)
            at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:588)
            at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489)
            at java.lang.Thread.run(Unknown Source)

    What I can't figure out is where the null reference is. From the debug lines I can see that each parameter has a value. Could it be referring to a bad path to one of the images? Any ideas? For some reason my server won't start in debug mode in Eclipse, so I am having trouble figuring this out.
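    Since the debug output shows the obvious inputs as non-null, one low-tech way to narrow it down is to fail fast with named checks before the call, so the exception carries a culprit instead of an anonymous NPE from deep inside JRFiller. This is a diagnostic sketch only, not a fix, and it assumes parameters is a java.util.Map<String, Object> as the snippet above suggests:

        // Diagnostic sketch: name the null instead of letting it surface
        // later inside JRFiller.fillReport.
        if (reportFile == null || !reportFile.exists()) {
            throw new IllegalStateException("Report file missing: " + reportFile);
        }
        for (java.util.Map.Entry<String, Object> e : parameters.entrySet()) {
            if (e.getKey() == null || e.getValue() == null) {
                throw new IllegalStateException("Null report parameter: " + e.getKey());
            }
        }
        if (jr == null) {
            throw new IllegalStateException("Data source 'jr' is null");
        }
        bytes = JasperRunManager.runReportToPdf(reportFile.getPath(), parameters, jr);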

    Read the article

  • Unable to "Run on Server" a webapp from Eclipse

    - by Ram
    When I run my WebApp project from Eclipse, most of the time it runs correctly. But if, by mistake, instead of stopping the server with "Stop" in the "Servers" view I kill it in the "Console" view, then when running Clean on the project I get this:

        java.lang.NullPointerException
            at org.eclipse.wst.common.componentcore.internal.util.VirtualReferenceUtilities.getDefaultProjectArchiveName(VirtualReferenceUtilities.java:81)
            at org.eclipse.jst.j2ee.componentcore.J2EEModuleVirtualComponent.getJavaClasspathReferences(J2EEModuleVirtualComponent.java:332)
            at org.eclipse.jst.j2ee.componentcore.J2EEModuleVirtualComponent.getNonManifestRefs(J2EEModuleVirtualComponent.java:236)
            at org.eclipse.jst.j2ee.componentcore.J2EEModuleVirtualComponent.getReferences(J2EEModuleVirtualComponent.java:160)
            at org.eclipse.jst.j2ee.componentcore.J2EEModuleVirtualComponent.getReferences(J2EEModuleVirtualComponent.java:208)
            at org.eclipse.jst.j2ee.componentcore.J2EEModuleVirtualComponent.getReferences(J2EEModuleVirtualComponent.java:201)
            at org.eclipse.jst.common.internal.modulecore.SingleRootUtil.hasConsumableReferences(SingleRootUtil.java:217)
            at org.eclipse.jst.common.internal.modulecore.SingleRootUtil.validateSingleRoot(SingleRootUtil.java:165)
            at org.eclipse.jst.common.internal.modulecore.SingleRootUtil.isSingleRoot(SingleRootUtil.java:93)
            at org.eclipse.jst.common.internal.modulecore.SingleRootExportParticipant.canOptimize(SingleRootExportParticipant.java:84)
            at org.eclipse.wst.common.componentcore.internal.flat.FlatVirtualComponent.canOptimize(FlatVirtualComponent.java:136)
            at org.eclipse.wst.common.componentcore.internal.flat.FlatVirtualComponent.cacheResources(FlatVirtualComponent.java:118)
            at org.eclipse.wst.common.componentcore.internal.flat.FlatVirtualComponent.fetchResources(FlatVirtualComponent.java:101)
            at org.eclipse.wst.web.internal.deployables.FlatComponentDeployable.members(FlatComponentDeployable.java:147)
            at org.eclipse.wst.server.core.internal.ModulePublishInfo.hasDelta(ModulePublishInfo.java:418)
            at org.eclipse.wst.server.core.internal.ServerPublishInfo.hasDelta(ServerPublishInfo.java:443)
            at org.eclipse.wst.server.core.internal.Server.hasPublishedResourceDelta(Server.java:1539)
            at org.eclipse.wst.server.core.internal.Server$ResourceChangeJob$1.visit(Server.java:214)
            at org.eclipse.wst.server.core.internal.Server.visitModule(Server.java:2929)
            at org.eclipse.wst.server.core.internal.Server.visit(Server.java:2913)
            at org.eclipse.wst.server.core.internal.Server$ResourceChangeJob.run(Server.java:225)
            at org.eclipse.core.internal.jobs.Worker.run(Worker.java:54)

    And while launching I get this:

        SEVERE: Error starting static Resources
        java.lang.IllegalArgumentException: Document base D:\workspace\.metadata\.plugins\org.eclipse.wst.server.core\tmp0\wtpwebapps\Webapp does not exist or is not a readable directory
            at org.apache.naming.resources.FileDirContext.setDocBase(FileDirContext.java:142)
            at org.apache.catalina.core.StandardContext.resourcesStart(StandardContext.java:4319)
            at org.apache.catalina.core.StandardContext.start(StandardContext.java:4488)
            at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1053)
            at org.apache.catalina.core.StandardHost.start(StandardHost.java:840)
            at org.apache.catalina.core.ContainerBase.start(ContainerBase.java:1053)
            at org.apache.catalina.core.StandardEngine.start(StandardEngine.java:463)

    Please help me recover from this.

    Read the article

  • How to troubleshoot errors with TeamCity

    - by Tomas Lycken
    I'm following this guide to set up a small environment for source control and automated builds, mostly for learning what it is and how it works, but also for using in those of my hobby projects that I believe will actually be useful some day. However, at the step where he commits and builds, I fail to get a success status in the TeamCity history log. I keep getting the error shown in the build log below. I have verified with Windows Explorer that the solution file it can't find is actually there, so I really don't know what to do. How do I fix/troubleshoot this?

        [15:16:06]: Checking for changes
        [15:16:08]: Clearing temporary directory: C:\Program Files\JetBrains\BuildAgent\temp\buildTmp
        [15:16:08]: Checkout directory: C:\Program Files\JetBrains\BuildAgent\work\72d50012f70c4588
        [15:16:08]: Updating sources: server side checkout...
        [15:16:08]: [Updating sources: server side checkout...] Building incremental patch for VCS root: DemoProjects
        [15:16:09]: [Updating sources: server side checkout...] Repository sources transferred
        [15:16:09]: [Updating sources: server side checkout...] Updating C:\Program Files\JetBrains\BuildAgent\work\72d50012f70c4588
        [15:16:10]: Start process: "c:\Program Files\JetBrains\BuildAgent\bin\..\plugins\dotnetPlugin\bin\JetBrains.BuildServer.MsBuildBootstrap.exe" "/workdir:C:\Program Files\JetBrains\BuildAgent\work\72d50012f70c4588" /msbuildPath:C:\Windows\Microsoft.NET\Framework\v4.0.30319\MSBuild.exe
        [15:16:10]: in: C:\Program Files\JetBrains\BuildAgent\work\72d50012f70c4588
        [15:16:11]: TeamCity MSBuild bootstrap v5.1 Copyright (C) JetBrains s.r.o.
        [15:16:11]: Application failed with internal error:
        [15:16:11]: Failed to find project file at path: C:\Program Files\JetBrains\BuildAgent\work\72d50012f70c4588\Nehemia\trunk\Nehemiah.sln
        [15:16:11]: System.Exception: Failed to find project file at path: C:\Program Files\JetBrains\BuildAgent\work\72d50012f70c4588\Nehemia\trunk\Nehemiah.sln
        [15:16:11]: at JetBrains.BuildServer.MSBuildBootstrap.Impl.MSBuildBootstrapFactory.Create(IClientRunArgs args) in c:\Agent\work\6223f0c8b1d45aaa\src\MSBuildBootstrap.Core\src\Impl\MSBuildBootstrapFactory.cs:line 25
        [15:16:11]: at JetBrains.BuildServer.MSBuildBootstrap.Program.Run(String[] _args) in c:\Agent\work\6223f0c8b1d45aaa\src\MSBuildBootstrap\src\Program.cs:line 66
        [15:16:11]: Process exited with code -11
        [15:16:11]: Build finished

    Read the article

  • Using Perl WWW::Facebook::API to Publish To Facebook Newsfeed

    - by Russell C.
    We use Facebook Connect on our site in conjunction with the WWW::Facebook::API CPAN module to publish to our users' news feeds when requested by the user. So far we've been able to successfully update the user's status using the following code:

        use WWW::Facebook::API;

        my $facebook = WWW::Facebook::API->new(
            desktop         => 0,
            api_key         => $fb_api_key,
            secret          => $fb_secret,
            session_key     => $query->cookie($fb_api_key.'_session_key'),
            session_expires => $query->cookie($fb_api_key.'_expires'),
            session_uid     => $query->cookie($fb_api_key.'_user')
        );

        my $response = $facebook->stream->publish(
            message => qq|Test status message|,
        );

    However, when we try to update the code above so we can publish news feed stories that include attachments and action links, as specified in the Facebook API documentation for Stream.Publish, we have tried about 100 different ways without any success. According to the CPAN documentation, all we should have to do is update our code to something like the following, passing the attachment and action links appropriately, which doesn't seem to work:

        my $response = $facebook->stream->publish(
            message      => qq|Test status message|,
            attachment   => $json,
            action_links => [@links],
        );

    For example, we are passing the above arguments as follows:

        $json = qq|{ 'name': 'i\'m bursting with joy', 'href': 'http://bit.ly/187gO1', 'caption': '{*actor*} rated the lolcat 5 stars', 'description': 'a funny looking cat', 'properties': { 'category': { 'text': 'humor', 'href': 'http://bit.ly/KYbaN'}, 'ratings': '5 stars' }, 'media': [{ 'type': 'image', 'src': 'http://icanhascheezburger.files.wordpress.com/2009/03/funny-pictures-your-cat-is-bursting-with-joy1.jpg', 'href': 'http://bit.ly/187gO1'}] }|;

        @links = ["{'text':'Link 1', 'href':'http://www.link1.com'}","{'text':'Link 2', 'href':'http://www.link2.com'}"];

    Neither the above nor any of the other representations we tried seems to work. I'm hoping some other Perl developer out there has this working and can explain how to create the attachment and action_links variables appropriately in Perl for posting to the Facebook news feed through WWW::Facebook::API. Thanks in advance for your help!
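    In case it helps others looking at this: two things worth checking (an untested sketch follows, not a confirmed fix). The hand-written strings above use single quotes, which is not valid JSON, and @links = [ ... ] actually stores a single array reference rather than two strings. Building native Perl structures and letting the JSON module do the encoding sidesteps both issues:

        use JSON;  # exports encode_json, which emits valid double-quoted JSON

        my $attachment = {
            name        => "i'm bursting with joy",
            href        => 'http://bit.ly/187gO1',
            caption     => '{*actor*} rated the lolcat 5 stars',
            description => 'a funny looking cat',
            properties  => {
                category => { text => 'humor', href => 'http://bit.ly/KYbaN' },
                ratings  => '5 stars',
            },
            media => [ {
                type => 'image',
                src  => 'http://icanhascheezburger.files.wordpress.com/2009/03/funny-pictures-your-cat-is-bursting-with-joy1.jpg',
                href => 'http://bit.ly/187gO1',
            } ],
        };

        my @links = (
            { text => 'Link 1', href => 'http://www.link1.com' },
            { text => 'Link 2', href => 'http://www.link2.com' },
        );

        my $response = $facebook->stream->publish(
            message      => 'Test status message',
            attachment   => encode_json($attachment),
            action_links => [ map { encode_json($_) } @links ],
        );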

    Read the article

  • solution for RPC_E_ATTEMPTED_MULTITHREAD error caused by SPRequestContext caching SPSites?

    - by kerray
    Hi, I'm developing a solution for SharePoint 2007, and I'm using SPSecurity.RunWithElevatedPrivileges a lot, passing in the UserToken of the SystemAccount. After reading http://hristopavlov.wordpress.com/2009/01/19/understanding-sharepoint-sprequest/ I finally began to understand why I get these errors:

        System.Runtime.InteropServices.COMException (0x80010102): Attempted to make calls on more than one thread in single threaded mode. (Exception from HRESULT: 0x80010102 (RPC_E_ATTEMPTED_MULTITHREAD))

    But there seems to be no solution: a "known issue in the product". The article is more than a year old, and I wasn't able to find anything more recent and helpful, but I was hoping maybe someone else has? My code goes like this:

        SPSecurity.RunWithElevatedPrivileges(delegate()
        {
            using (SPSite elevatedSite = new SPSite(web.Site.ID, web.Site.SystemAccount.UserToken))
            {
                using (SPWeb elevatedWeb = elevatedSite.OpenWeb(web.ID))
                {
                    // some operations on lists and items obtained through elevatedWeb
                }
            }
        });

    The errors come up wherever such elevated code is used, and more often when more users are using these functionalities, so I guess the elevated SPSite is getting cached and reused. Is there any way to solve this? If my understanding is correct, how do I make SharePoint forget about the cached SPSites and use a fresh one instead? Thanks
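    Not a confirmed fix, but one mitigation sometimes suggested for this class of error is to make sure nothing backed by the original request's SPRequest crosses into the elevated delegate: capture plain values (GUIDs, the token) up front and touch only the elevated objects inside. A sketch of that shape, under the assumption that the cached SPRequest reuse described above is the culprit:

        // Capture primitives outside the delegate; the request's own
        // SPSite/SPWeb objects must not be used inside it.
        Guid siteId = web.Site.ID;
        Guid webId = web.ID;
        SPUserToken systemToken = web.Site.SystemAccount.UserToken;

        SPSecurity.RunWithElevatedPrivileges(delegate()
        {
            using (SPSite elevatedSite = new SPSite(siteId, systemToken))
            using (SPWeb elevatedWeb = elevatedSite.OpenWeb(webId))
            {
                // work only with elevatedSite/elevatedWeb in here
            }
        });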

  • Return template as string - Django

    - by Ninefingers
    Hi all, I'm still not sure this is the correct way to go about this, maybe not, but I'll ask anyway. I'd like to re-write WordPress (justification: because I can), albeit more simply, myself in Django, and I'm looking to be able to configure elements in different ways on the page. So for example I might have:

        Blog models
        A site update message model
        A latest comments model

    Now, for each page on the site I want the user to be able to choose the order of, and which items go on, that page. In my thought process, this would work something like:

        class Page(models.Model):
            Slug = models.CharField(max_length=100)

        class PageItem(models.Model):
            Page = models.ForeignKey(Page)
            ItemType = models.CharField(max_length=100)
            InstanceNum = models.IntegerField()
            # all models have primary keys

    Then, ideally, my template would loop through all the PageItems in a page, which is easy enough to do. But what if my page item is a site update as opposed to a blog post? Basically, I'd like to pull different item types back in different orders and display them using the appropriate templates. Now, I thought one way to do this would be, in views.py, to loop through all of the objects, call the appropriate view function for each, get back a bit of HTML as a string, and then pipe that into the resultant template. My question is: is this the best way to go about doing things? If so, how do I do it? If not, which way should I be going? I'm pretty new to Django, so I'm still learning what it can and can't do - please bear with me. I've checked SO for dupes and don't think this has been asked before...
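
    For the "return a bit of HTML as a string" step, Django ships django.template.loader.render_to_string, so each item type can be rendered with its own template and the fragments handed to the page template. A minimal sketch along those lines - the per-type template names and the page_view wiring are hypothetical, not taken from the models above:

        from django.http import HttpResponse
        from django.template.loader import render_to_string

        def page_view(request, slug):
            page = Page.objects.get(Slug=slug)
            fragments = []
            for item in PageItem.objects.filter(Page=page):
                # One template per item type, e.g. "blog" -> items/blog.html
                # (hypothetical naming convention).
                fragments.append(render_to_string(
                    'items/%s.html' % item.ItemType.lower(),
                    {'item': item},
                ))
            # page.html loops over "fragments" and prints each one through
            # the |safe filter, since the strings are already rendered HTML.
            return HttpResponse(render_to_string(
                'page.html', {'page': page, 'fragments': fragments}))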

  • Greasemonkey @require jQuery not working "Component not available"

    - by Greg K
    I've seen the other question on here about loading jQuery in a Greasemonkey script. Having tried that method, with this @require statement inside my ==UserScript== tags:

        // @require http://ajax.googleapis.com/ajax/libs/jquery/1/jquery.min.js

    I still get the following error message in Firefox's error console:

        Error: Component is not available
        Source File: file:///Users/greg/Library/Application%20Support/
        Firefox/Profiles/xo9xhovo.default/gm_scripts/myscript/jquerymin.js
        Line: 36

    This stops my Greasemonkey code from running. I've made sure I included the @require for jQuery and saved my .js file before installing it, since required files are only loaded on installation. Code:

        // ==UserScript==
        // @name        My Script
        // @namespace   http://www.google.com
        // @description My test script
        // @include     http://www.google.com
        // @require     http://ajax.googleapis.com/ajax/libs/jquery/1/jquery.min.js
        // ==/UserScript==

        GM_log("Hello");

    I have Greasemonkey 0.8.20091209.4 installed on Firefox 3.5.7 on my MacBook Pro, Leopard (10.5.8). I've cleared my cache (except cookies) and have disabled all other add-ons except Flashblock 1.5.11.2, Web Developer 1.1.8 and Adblock Plus 1.1.3. My config.xml with my Greasemonkey script installed:

        <UserScriptConfig>
            <Script filename="myscript.user.js" name="My Script"
                    namespace="http://www.google.com" description="My test script"
                    enabled="true" basedir="myscript">
                <Include>http://www.google.com</Include>
                <Require filename="jquerymin.js"/>
            </Script>

    I can see jquerymin.js sitting in the gm_scripts/myscript/ directory. Additionally, is it common for this error to occur in the console when installing a Greasemonkey script?

        Error: not well-formed
        Source File: file:///Users/Greg/Documents/myscript.user.js
        Line: 1, Column: 1
        Source Code: // ==UserScript==
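
    A hedged suggestion based on the versions involved, rather than a confirmed fix: the .../jquery/1/jquery.min.js alias serves the latest 1.x release, which at this point resolves to jQuery 1.4, and jQuery 1.4 is known not to load inside the Greasemonkey 0.8 sandbox (failing with exactly "Component is not available"). Pinning the @require to 1.3.2 keeps a version that works there:

        // ==UserScript==
        // @name        My Script
        // @namespace   http://www.google.com
        // @description My test script
        // @include     http://www.google.com
        // @require     http://ajax.googleapis.com/ajax/libs/jquery/1.3.2/jquery.min.js
        // ==/UserScript==

        // @require is resolved at install time, so uninstall and reinstall
        // the script after changing the URL.
        GM_log("jQuery " + $.fn.jquery + " loaded");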

  • How to exclude jars generated by the maven war plugin?

    - by Jacques René Mesrine
    Because of transitive dependencies, my wars are getting populated by xml-apis and xerces jars. I tried following the instructions on the reference page for maven-war-plugin, but it is not working:

        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-war-plugin</artifactId>
            <configuration>
                <packagingExcludes>WEB-INF/lib/xalan-2.6.0.jar,WEB-INF/lib/xercesImpl-2.6.2.jar,WEB-INF/lib/xml-apis-1.0.b2.jar,WEB-INF/lib/xmlParserAPIs-2.6.2.jar</packagingExcludes>
                <webXml>${basedir}/src/main/webapp/WEB-INF/web.xml</webXml>
                <warName>project1</warName>
                <warSourceDirectory>src/main/webapp</warSourceDirectory>
            </configuration>
        </plugin>

    What am I doing wrong? If it matters, I discovered that the maven-war-plugin I'm using is at version 2.1-alpha-1.
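
    Two hedged avenues. First, the packagingExcludes documentation may describe a newer plugin than the 2.1-alpha-1 that was resolved, so explicitly pinning a newer <version> on the plugin is worth a try. Second, since the jars arrive transitively, they can be cut out of the dependency graph with <exclusions>, so the WAR plugin never sees them; running mvn dependency:tree shows which dependency drags them in. A sketch (the outer coordinates are hypothetical placeholders for that culprit):

        <dependency>
            <groupId>some.group</groupId>             <!-- hypothetical: whatever -->
            <artifactId>culprit-library</artifactId>  <!-- dependency:tree blames -->
            <version>1.0</version>
            <exclusions>
                <exclusion>
                    <groupId>xml-apis</groupId>
                    <artifactId>xml-apis</artifactId>
                </exclusion>
                <exclusion>
                    <groupId>xerces</groupId>
                    <artifactId>xercesImpl</artifactId>
                </exclusion>
                <exclusion>
                    <groupId>xalan</groupId>
                    <artifactId>xalan</artifactId>
                </exclusion>
            </exclusions>
        </dependency>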

  • Ruby XMLRPC::Client issue under Rails 2.3.5 with Ruby 1.8.6

    - by TallGreenTree
    I'm trying to implement a ping to an XML-RPC server written in PHP under CodeIgniter. The system currently works as expected when receiving pings from WordPress-based blogs, but I'm unable to generate a successful ping from Rails. The code I'm using to generate the ping is:

        server = XMLRPC::Client.new("digital466.local", "/schafercondoncarter/xmlrpc")
        result = server.call_async("ping", BLOG_CONFIG['title'], "http://digital466.local:3000")

    The code runs fine in the console and retrieves a correctly formatted response from the server, but when I integrate it into a Rails observer or controller, I receive the following error:

        Timeout::Error in PostsController#update
        execution expired
        RAILS_ROOT: /Users/digital466/Sites/sccblog

        /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/1.8/timeout.rb:60:in `rbuf_fill'
        /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/1.8/net/protocol.rb:132:in `rbuf_fill'
        /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/1.8/net/protocol.rb:116:in `readuntil'
        /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/1.8/net/protocol.rb:126:in `readline'
        /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/1.8/net/http.rb:2020:in `read_status_line'
        /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/1.8/net/http.rb:2009:in `read_new'
        /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/1.8/net/http.rb:1050:in `request'
        /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/1.8/net/http.rb:992:in `post2'
        /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/1.8/xmlrpc/client.rb:529:in `do_rpc'
        /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/1.8/net/http.rb:543:in `start'
        /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/1.8/xmlrpc/client.rb:528:in `do_rpc'
        /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/1.8/xmlrpc/client.rb:435:in `call2_async'
        /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/1.8/xmlrpc/client.rb:425:in `call_async'
        /Users/digital466/Sites/sccblog/app/controllers/posts_controller.rb:89:in `update'

    Any assistance would be appreciated. Thanks in advance.
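
    A hedged reading of the symptoms: the call works from the console because nothing else occupies the Ruby process, but inside a controller or observer the single-threaded development server is still busy with the original request. If the PHP endpoint calls back to http://digital466.local:3000 to verify the ping source (as pingback-style handlers commonly do), the two ends wait on each other until Timeout::Error fires. Two things worth trying, sketched here - a longer client timeout and moving the ping off the request cycle:

        require 'xmlrpc/client'

        # timeout is the last positional argument of XMLRPC::Client.new:
        # (host, path, port, proxy_host, proxy_port, user, password, use_ssl, timeout)
        server = XMLRPC::Client.new("digital466.local", "/schafercondoncarter/xmlrpc",
                                    80, nil, nil, nil, nil, false, 30)

        # Fire the ping in a background thread so the controller can return
        # (and the dev server free up) before any verification callback lands.
        Thread.new do
          begin
            server.call_async("ping", BLOG_CONFIG['title'], "http://digital466.local:3000")
          rescue Timeout::Error, StandardError => e
            Rails.logger.warn("XML-RPC ping failed: #{e.message}")
          end
        end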

  • jQuery load() and TinyMCE give me a "g.win.document is null" on subsequent loads

    - by Patrice Peyre
    So... very excited, as this is my first post on Stack Overflow. I have a page where clicking a button calls editIEPProgram(). editIEPProgram counts how many specific divs exist, adds another div, loads it with HTML from '/content/mentoring/iep_program.php' and then calls $("textarea.tinymce").tinymce to add a rich text editor to the newly added text boxes. It finally adds a tab which handles the new stuff, and Bob's your uncle. It ALMOST works. Clicking the button the first time adds everything and everything works; on the second and subsequent clicks, everything is added but TinyMCE fails to initialise (I guess) and gives me "g.win.document is null". I have been on it for some time now and am losing all my sanity. Please, help.me.

        function editIEPProgram() {
            var index = $("div[id^=iep_program]").size();
            var target_div = "iep_program_" + index;

            $("#program_container").append(
                $("<div></div>")
                    .attr("id", target_div)
                    .addClass("no_overflow_control")
            );

            $("#" + target_div).load(
                "/content/mentoring/iep_program.php",
                { index: index },
                function () {
                    $("textarea.tinymce").tinymce({
                        script_url: "/scripts/tiny_mce.js",
                        height: "60",
                        theme: "advanced",
                        skin: "o2k7",
                        skin_variant: "silver",
                        plugins: "spellchecker,advlist,preview,pagebreak,style,layer,save,advhr,advimage,advlink,searchreplace,paste,noneditable,visualchars,nonbreaking,xhtmlxtras",
                        theme_advanced_buttons1: "spellchecker,|,bold,italic,underline,strikethrough,|,formatselect,|,forecolor,backcolor,|,bullist,numlist,|,outdent,indent,|,undo,redo,|,link,unlink,image,cleanup,preview",
                        theme_advanced_buttons2: "",
                        theme_advanced_buttons3: "",
                        theme_advanced_toolbar_location: "top",
                        theme_advanced_toolbar_align: "left",
                        content_css: "/css/main.css"
                    });
                }
            );

            $("#tab_container")
                .tabs("add", "#" + target_div, "New program")
                .tabs("select", $("#tab_container").tabs("length") - 1);
        }
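
    A hedged guess at the cause: on the second click, $("textarea.tinymce") matches the textareas from earlier loads as well, and re-initialising TinyMCE over an editor whose DOM has since changed is a classic source of "g.win.document is null". Two workarounds to sketch - tear down stale editor instances first, and scope the selector to the freshly loaded div:

        // Inside the load() callback, before calling .tinymce():

        // 1) Remove any editor instances left over from earlier clicks.
        $("textarea.tinymce").each(function () {
            if (this.id && tinyMCE.get(this.id)) {
                tinyMCE.execCommand("mceRemoveControl", false, this.id);
            }
        });

        // 2) Initialise only the textareas inside the div we just loaded.
        $("#" + target_div + " textarea.tinymce").tinymce({
            script_url: "/scripts/tiny_mce.js",
            theme: "advanced"
            // ... remaining options exactly as in the original call
        });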

  • SQL Server Management Studio – tips for improving the TSQL coding process

    - by kristof
    I used to work in a place where a common practice was pair programming. I remember how many small things we could learn from each other when working together on code; picking up new shortcuts, code snippets, etc. over time significantly improved our efficiency at writing code. Since I started working with SQL Server I have been left on my own, so the best habits I would normally pick up from working with other people I cannot pick up now. So here is the question: what are your tips for efficiently writing TSQL code using SQL Server Management Studio?

        Please keep the tips to 2-3 things/shortcuts that you think improve your speed of coding.
        Please stay within the scope of TSQL and SQL Server Management Studio 2005/2008.
        If the feature is specific to a version of Management Studio, please indicate it, e.g. "Works with SQL Server 2008 only".

    Thanks

    EDIT: I am afraid I have been misunderstood by some of you. I am not looking for tips on writing efficient TSQL code, but rather for advice on how to use Management Studio efficiently to speed up the coding process itself. The type of answers I am looking for: use of templates, keyboard shortcuts, IntelliSense plugins, etc. Basically those little things that make the coding experience a bit more efficient and pleasant. Thanks again
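
    As one example of the kind of tip sought here: SSMS templates (2005 and 2008) use angle-bracket placeholders that Ctrl+Shift+M ("Specify Values for Template Parameters") fills in across the whole script in one pass. A small sketch with hypothetical object names:

        -- Press Ctrl+Shift+M to replace every <name, type, default> placeholder.
        CREATE PROCEDURE <proc_name, sysname, usp_get_row>
            @id INT
        AS
        BEGIN
            SET NOCOUNT ON;
            SELECT *
            FROM <table_name, sysname, dbo.SomeTable>
            WHERE id = @id;
        END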

  • R and version control for the solo data analyst

    - by Jeromy Anglim
    Many data analysts whom I respect use version control. For example:

        http://github.com/hadley/
        See the comments on http://permut.wordpress.com/2010/04/21/revision-control-statistics-bleg/

    However, I'm evaluating whether adopting a version control system such as git would be worthwhile. A brief overview: I'm a social scientist who uses R to analyse data for research publications. I don't currently produce R packages. My R code for a project typically includes a few thousand lines of code for data input, cleaning, manipulation, analyses, and output generation. Publications are typically written using LaTeX. There are many benefits of version control which I have read about, yet they seem less relevant to the solo data analyst:

        Backup: I have a backup system already in place.
        Forking and rewinding: I've never felt the need to do this, but I can see how it could be useful (e.g., you are preparing multiple journal articles based on the same dataset; you are preparing a report that is updated monthly; etc.).
        Collaboration: Most of the time I am analysing data myself, so I wouldn't get the collaboration benefits of version control.

    There are also several potential costs involved in adopting version control:

        Time to evaluate and learn a version control system.
        A possible increase in complexity over my current file management system.

    However, I still have the feeling that I'm missing something. General guides on version control seem to be addressed more towards computer scientists than data analysts. Thus, specifically in relation to data analysts in circumstances similar to those listed above:

        Is version control worth the effort?
        What are the main pros and cons of adopting version control?
        What is a good strategy for getting started with version control for data analysis with R (e.g., examples, workflow ideas, software, links to guides)?

  • How to implement the Facebook Like button

    - by vamsivanka
    I am trying to implement the Facebook Like button on my website. The first four lines in the code below are already there on my site after the end of the "" tag. To implement the Like button I added the second script block and ran the application. It gives me the error "Microsoft JScript runtime error: '_onLoad' is null or not an object". Please let me know what I'm missing. Thanks

        <script type="text/javascript"
                src="http://static.ak.connect.facebook.com/js/api_lib/v0.4/FeatureLoader.js.php"></script>
        <script type="text/javascript">
            FB.init("myapikey", "xd_receiver.htm", { "reloadIfSessionStateChanged": true });
        </script>

        <script type="text/javascript">
            window.fbAsyncInit = function() {
                FB.init({appId: 'myappid', status: true, cookie: true, xfbml: true});
            };
            (function() {
                var e = document.createElement('script');
                e.async = true;
                e.src = document.location.protocol + '//connect.facebook.net/en_US/all.js';
                document.getElementById('fb-root').appendChild(e);
            }());
        </script>

    References: http://developers.facebook.com/docs/reference/plugins/like

        <fb:like href="http://webclip.in" layout="standard" show-faces="true"
                 width="450" action="like" font="arial" colorscheme="light"/>
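
    Two hedged observations, a sketch of the likely fix rather than a confirmed one: the async snippet appends the SDK script to document.getElementById('fb-root'), which returns null unless a <div id="fb-root"> actually exists in the page, and the old v0.4 FeatureLoader is being mixed with the new JavaScript SDK even though the Like button only needs the latter. A minimal version using just the new SDK:

        <!-- fb-root must exist before the loader snippet runs -->
        <div id="fb-root"></div>
        <script type="text/javascript">
            window.fbAsyncInit = function () {
                FB.init({ appId: 'myappid', status: true, cookie: true, xfbml: true });
            };
            (function () {
                var e = document.createElement('script');
                e.async = true;
                e.src = document.location.protocol + '//connect.facebook.net/en_US/all.js';
                document.getElementById('fb-root').appendChild(e);
            }());
        </script>

        <fb:like href="http://webclip.in" layout="standard" show-faces="true"
                 width="450" action="like" font="arial" colorscheme="light"/>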
