Search Results

Search found 10622 results on 425 pages for 'shared hosting'.


  • How to organise projects with dependencies on BitBucket?

    - by Timwi
    Both Mercurial and BitBucket make one fundamental assumption: 1 repo = 1 project. If I have a project that has a dependency (a library) which is shared by many projects, this assumption gets in the way. Now it is no longer possible to have a separate BitBucket page for each project while still being able to commit atomic revisions to multiple projects. If I put all the projects into one repo, they all become one “project” on BitBucket. If I put them in separate repos, it is no longer possible to know which version of the library project was in use at revision X of a dependent project. How is this situation normally solved on BitBucket, or is there explicitly no support for this common scenario?
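
    One common way to handle this (a sketch of Mercurial's subrepository feature, not a BitBucket-specific answer; the paths and URL below are hypothetical) is to keep the library in its own repo with its own BitBucket page and pin it inside each dependent project:

        # .hgsub at the dependent project's root maps a working-copy path to the
        # library repo (which must first be cloned at that path):
        lib/shared = https://bitbucket.org/team/shared-lib

        $ hg clone https://bitbucket.org/team/shared-lib lib/shared
        $ hg add .hgsub
        $ hg commit -m "Track shared-lib as a subrepository"
        # Each commit now records the exact library changeset in .hgsubstate,
        # so revision X of this project always knows which library revision it used.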

    Read the article

  • Samba installation failed on Ubuntu 12.10

    - by Ayyaz
    I was trying to install Samba to access a shared printer on a Windows PC connected via the office network, and the following response came from the terminal. Please guide me on how to install Samba, or suggest an alternative.

        crm@crm-HP-G62-Notebook-PC:~$ sudo apt-get install samba
        [sudo] password for crm:
        Reading package lists... Done
        Building dependency tree
        Reading state information... Done
        Some packages could not be installed. This may mean that you have
        requested an impossible situation or if you are using the unstable
        distribution that some required packages have not yet been created
        or been moved out of Incoming.
        The following information may help to resolve the situation:

        The following packages have unmet dependencies:
         samba : Depends: samba-common (= 2:3.6.6-3ubuntu4) but 2:3.6.6-3ubuntu5 is to be installed
                 Depends: libwbclient0 (= 2:3.6.6-3ubuntu4) but 2:3.6.6-3ubuntu5 is to be installed
                 Recommends: tdb-tools but it is not going to be installed
        E: Unable to correct problems, you have held broken packages.
        crm@crm-HP-G62-Notebook-PC:~$
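
    The error shows a version skew: apt is trying to pair samba 2:3.6.6-3ubuntu4 with samba-common and libwbclient0 at 2:3.6.6-3ubuntu5. A hedged sketch of the usual remedies for this kind of half-updated package state:

        sudo apt-get update         # refresh package lists so the versions line up
        sudo apt-get install -f     # attempt to repair broken/held dependencies
        sudo apt-get dist-upgrade   # pull in the matching ubuntu5 builds of the samba packages
        sudo apt-get install samba  # then retry the install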

    Read the article

  • What would you change about C# if you could?

    - by Casebash
    Assume that your objectives are the same as for C#. I don't want an answer like "I'd write Python instead". I'm only interested in changes in the language itself - not in the standard library. Perhaps this would be interesting to discuss, but as it is shared between many languages it would be better to discuss this in another question instead. Please post a single idea per answer; if you want to post several ideas, post several separate answers. Thanks!

    Read the article

  • Subdomain redirects to different server, maintaining original URL

    - by Jason Baker
    I just took over the webmaster position in my organization, and I'm trying to set up a development environment separate from the production environment. The two environments are hosted with different companies, both on shared hosting plans. Right now, production is at domain.com and development is at domain2.com. What I want is to direct my development team to dev.domain.com. More importantly, I don't want the URL to change from dev.domain.com back to domain2.com, but I also want it to be responsive to page changes. For example, if my dev team navigates from dev.domain.com to a page (called "page"), I'd like the URL to show as dev.domain.com/page. Is this possible, or am I just dreaming?
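
    What is being described is a reverse proxy: dev.domain.com serves the content of domain2.com while keeping its own URL in the address bar. A minimal sketch in .htaccess, assuming the host for domain.com enables mod_proxy (many shared hosts do not, in which case the provider would have to set this up):

        RewriteEngine On
        RewriteCond %{HTTP_HOST} ^dev\.domain\.com$ [NC]
        # [P] proxies the request instead of redirecting, so the visible URL stays
        # dev.domain.com/page while the content comes from domain2.com/page
        RewriteRule ^(.*)$ http://domain2.com/$1 [P]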

    Read the article

  • Wildcard redirect issue giving the error "this webpage has a redirect loop"

    - by kath
    On my website I changed, or better put, modified the directory name "vehicles-cars" to "vehicles-cars-for-sale". When I tried to redirect the old directory name to the new one using a wildcard redirect in my web hosting cPanel account, every time I open pages from that directory I get the error "this webpage has a redirect loop". The website is PHP. The problem is that lots of my pages from the old directory are indexed in Google and are being treated as duplicate content. I really need some advice on what to do about this problem. Here is the .htaccess code for the redirect, thanks:

        RewriteEngine on
        RewriteCond %{HTTP_HOST} ^adsbuz\.com$ [OR]
        RewriteCond %{HTTP_HOST} ^www\.adsbuz\.com$
        RewriteRule ^vehicles\-cars\/?(.*)$ "http\:\/\/adsbuz\.com\/vehicles\-cars\-for\-sale\/$1" [R=301,L]
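
    The loop happens because the pattern ^vehicles\-cars\/?(.*)$ also matches the new URLs: "vehicles-cars-for-sale/..." begins with "vehicles-cars", so every redirected request immediately matches the rule again. A hedged sketch of one way to break the loop, anchoring the old directory so it only matches exactly or when followed by a slash:

        RewriteEngine on
        RewriteCond %{HTTP_HOST} ^(www\.)?adsbuz\.com$
        # Match "vehicles-cars" only as a whole path segment, so the new
        # "vehicles-cars-for-sale" URLs no longer re-trigger the redirect
        RewriteRule ^vehicles-cars(?:/(.*))?$ http://adsbuz.com/vehicles-cars-for-sale/$1 [R=301,L]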

    Read the article

  • Setting up a Developers Conference

    - by Darknight
    In our local city in the UK there are, as far as I am aware, no developer conferences. I am confident that our region has many professional developers, as well as many graduating students, who would really benefit from a conference. I would like to ask the following questions:

    - What steps or advice would one take if given the task of setting up a local developers conference?
    - What would the costing look like? (excluding building/hosting of websites)
    - How would one build interest and promote this?
    - How would I approach local companies and universities to collaborate with them?

    I'm not just aiming this question at users who may have experience in setting up such conferences (though they are highly welcome); rather, how would you attack this if you were tasked with it?

    Read the article

  • Page load time - "Waiting on..." taking ages. What part of the page request process is hung?

    - by James
    I have a new cluster site running on Magento on a development server that is made up of 2 web servers and 1 database server. I have optimized the site in all the areas I know (gzip, increasing PHP memory limits, increasing database memory limits, etc.), but sometimes the page loading gets stuck on 'waiting for xxx.xx.xx.xxx' (in Chrome and other browsers; Chrome just shows it that way). It can sit there for 40+ seconds, and sometimes it just never loads and I close it in frustration. What part of the page-loading process is this hung at? Is it a server issue, a database issue, a platform issue? I need to know where to start, or whether to push the hosting provider about it.
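
    The 'waiting for...' phase is time-to-first-byte: the request has been sent and the browser is waiting for the server to generate a response, which usually points at server-side work (PHP, Magento, database) rather than the network. A hedged way to confirm which phase hangs, using curl's timing variables against any slow URL (the URL below is a placeholder):

        curl -o /dev/null -s -w "dns:     %{time_namelookup}s\nconnect: %{time_connect}s\nttfb:    %{time_starttransfer}s\ntotal:   %{time_total}s\n" \
            http://www.example.com/slow-page
        # A large gap between 'connect' and 'ttfb' means the server (PHP/database)
        # is the bottleneck; a large 'total' after 'ttfb' points at payload transfer.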

    Read the article

  • Where would a spam bot be located?

    - by Tim
    I have a hosted website using a free hosting service. I received an email this afternoon saying that I have been suspended because my account has been compromised: basically, someone is using my email account to mass-send spam. I've changed all the passwords and everything, but when my Gmail pulls the emails from the host it is still downloading loads of spam messages that look like this:

        This message was created automatically by mail delivery software.
        A message that you sent could not be delivered to one or more of its
        recipients. This is a permanent error. The following address(es) failed:

        [email protected]
        SMTP error from remote mail server after end of data:
        host 198.91.80.251 [198.91.80.251]: 554 5.6.0 id=23634-03 - Rejected by MTA
        on relaying, from MTA([127.0.0.1]:10030): 554 Error: This email address has
        lost rights to send email from the system

        ------ This is a copy of the message, including all the headers. ------

        Return-path: <[email protected]>
        Received: from keenesystems.com ([66.135.33.211]:2370 helo=server211)
            by absolut.x10hosting.com with esmtpsa (TLSv1:RC4-MD5:128) (Exim 4.77)
            (envelope-from <[email protected]>) id 1TGwSW-002hHe-Lc
            for [email protected]; Wed, 26 Sep 2012 13:35:44 -0500
        MIME-Version: 1.0
        Date: Wed, 26 Sep 2012 13:35:43 -0500
        X-Priority: 3 (Normal)
        X-Mailer: Ximian Evolution 3.9.9 (8.5.3-6)
        Subject: New staff members wanted at Auction It Online
        From: [email protected]
        Reply-To: [email protected]
        To: "Nadia Monti" <[email protected]>
        Content-Type: text/plain
        Content-Transfer-Encoding: quoted-printable
        Message-ID: <OUTLOOK-IDM-9aed7054-6a3e-e1a4-1d5c-3e73377652a6@server211>

        Date : 26 September 2012
        Time : 13:35
        Sender : Dennise Halcomb Head Office Manager of RJ Auction Drop-Off Int.

        Nice to meet you Nadia Monti

        RJ ADO Ltd., a USA based company, offers a significant amount of goods
        worldwide for our customers on eBay and other auction venues. Our company's
        main target is to provide a suitable and cost-effective service for any
        person, company or fundraising company. The main purpose of the
        administrative assistant / sales support representative is to contribute to
        the sales force and add convenience to our cost-effective service dedicated
        to individuals, businesses, and organizations worldwide. Our HR department
        obtained your resume from one of the various job-oriented websites just to
        offer you this post.

        Working Schedule: This is a part time and home-based offer. You won't need
        to spend more than 3 hours each day. Your schedule will be flexible.

        Salary: At the end of the trial period (it lasts for 1 month) you will be
        paid 1,800 EUR. With the average volume of clients your overall income will
        raise up to 3,000 EUR per month. After the trial period is over your base
        salary will grow up to 2,500 EUR per month, so you will earn 5% commission
        from the transactions completed.

        Where?: Italy Wide. As it is a stay at home position all the communication
        will be carried out via email and via phone.

        Requirements: Access to the internet during the workday and basic microsoft
        office skills are needed. Basic knowledge of English is required (most of
        the contacts will be in English).

        Costs and Fees: There are NO costs at any time for our employees. All fees
        related to this position are covered by the RJ ADO Co. Ltd..

        Further Hiring Process: If you are interested in position we offer, please
        reply to this email and send us the copy of your resume for verification.

        After reviewing all of the received applications we will reply to
        successful applicants only. Then we'll offer to these successful applicants
        a position within our firm on a trial period basis for one month beginning
        from the date you sign a trial agreement. During this trial period you will
        receive full guidance and support. Employees on a one monthly trial period
        are evaluated at least one week prior to the end of their trial. During the
        trial, your supervisor can recommend termination. At the end of the trial
        period, the supervisor can offer continued employment, extension of trial
        period, or termination. After the trial period you may ask for more hours
        or continue full-time.

        If you are interested in this position, just reply to this email and send
        any questions you have and the copy of your resume for verification.

        Thank You,
        HR-Manager of RJ ADO Co. Ltd.

        Permission Settings
        You have been referred to RJ Auction Drop-Off. If you feel you received
        this email in error or do not wish to receive future messages, please reply
        to this message with "remove" in the subject field. We will immediately
        update our database accordingly. We apologize for any inconvenience caused.

        RJ Auction Drop-Off Co. Ltd.

    I'm not aware of how this happened, and I'm not sure how anyone could have got hold of my password. It's a simple WordPress install; at some point recently my host went down and there was a fresh install of WordPress with default admin accounts, and I have a feeling it could be something to do with that. My question is: even though I've changed all my passwords, this is all still happening. Is there anywhere in particular this script would be stored on my host? I really can't deal with having my hosting account suspended and my email account sending all this spam.
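
    For the "where would the script live" question, a hedged sketch of common places to look on a compromised WordPress install (the patterns below are frequent suspects, not a guarantee):

        # PHP files hiding in upload directories, which should contain only media:
        find wp-content/uploads -name '*.php'
        # Obfuscated mailer/backdoor code often uses these constructs:
        grep -rl --include='*.php' -e 'base64_decode' -e 'eval(' \
            wp-content/uploads wp-content/themes wp-content/plugins
        # Also compare wp-admin/ and wp-includes/ against a clean WordPress download.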

    Read the article

  • Library and several small programs that use it: how should I structure my git repository?

    - by Dan
    I have some code that uses a library that I and others frequently modify (usually only by adding functions and methods). We each keep a local fork of the library for our own use. I also have a lot of small "driver" programs (~100 lines) that use the library and are used exclusively by me. Currently, I have both the driver programs and the library in the same repository, because I frequently make changes to both that are logically connected (adding a function to the library and then calling it). I'd like to merge my fork of the library with my co-workers' forks, but I don't want the driver programs to be part of the merged library. What's the best way to organize the git repositories for a large, shared library that needs to be merged frequently and a number of small programs that have changes that are connected to changes in the library?
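
    One common layout (a sketch, assuming the library can live in its own repo; the URL is hypothetical) is to split the library out and pin it inside the drivers repo as a git submodule, so every driver commit records exactly which library revision it was built against:

        git submodule add https://example.com/shared-lib.git lib
        git commit -m "Track shared-lib as a submodule"
        # After changing the library in place:
        cd lib && git commit -am "Add new function" && git push && cd ..
        git add lib && git commit -m "Use shared-lib revision with the new function"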

    Read the article

  • How to Implement Single Sign-On between Websites

    - by hmloo
    Introduction

    Single sign-on (SSO) is a way to control access to multiple related but independent systems: a user only needs to log in once and gains access to all the other systems. A lot of commercial systems provide single sign-on solutions, and you can also choose open-source solutions like OpenSSO, CAS, etc. Both of those use centralized authentication and provide a more robust authentication mechanism; but if each system has its own authentication mechanism, how do we provide a seamless transition between them? Here I will show you one such case.

    How it Works

    The method we'll use is based on a secret key shared between the sites. The origin site builds a hashed authentication token from the shared secret and some other parameters, then redirects the user to the target site.

    Variable     Status    Description
    ssoEncode    required  hash(ssoSharedSecret + "," + ssoTime + "," + ssoUserName)
    ssoTime      required  timestamp in YYYYMMDDHHMMSS format, used to prevent replay attacks
    ssoUserName  required  unique username; required when a user is logged in

    Note: the variables will be sent via POST for security reasons.

    Building a Single Sign-On Solution

    The origin site has functions to:
    1. Create the URL for the request.
    2. Generate the required authentication parameters.
    3. Redirect to the target site.

        using System;
        using System.Text;
        using System.Web.Security;

        public partial class _Default : System.Web.UI.Page
        {
            protected void Page_Load(object sender, EventArgs e)
            {
                string postbackUrl = "http://www.targetsite.com/sso.aspx";
                string ssoTime = DateTime.Now.ToString("yyyyMMddHHmmss");
                string ssoUserName = User.Identity.Name;
                string ssoSharedSecret = "58ag;ai76"; // get this from config or similar
                string ssoHash = FormsAuthentication.HashPasswordForStoringInConfigFile(
                    string.Format("{0},{1},{2}", ssoSharedSecret, ssoTime, ssoUserName), "md5");
                string value = string.Format("{0}:{1},{2}", ssoHash, ssoTime, ssoUserName);

                // Emit a self-submitting form so the token travels via POST
                Response.Clear();
                StringBuilder sb = new StringBuilder();
                sb.Append("<html>");
                sb.AppendFormat(@"<body onload='document.forms[""form""].submit()'>");
                sb.AppendFormat("<form name='form' action='{0}' method='post'>", postbackUrl);
                sb.AppendFormat("<input type='hidden' name='t' value='{0}'>", value);
                sb.Append("</form>");
                sb.Append("</body>");
                sb.Append("</html>");
                Response.Write(sb.ToString());
                Response.End();
            }
        }

    The target site has functions to:
    1. Get the authentication parameters.
    2. Validate the parameters against the shared secret.
    3. If the user is valid, authenticate and redirect to the target page.
    4. If the user is invalid, show errors and return.

        using System;
        using System.Text;
        using System.Web.Security;

        public partial class _Default : System.Web.UI.Page
        {
            protected void Page_Load(object sender, EventArgs e)
            {
                if (!IsPostBack)
                {
                    if (User.Identity.IsAuthenticated)
                    {
                        Response.Redirect("~/Default.aspx");
                    }
                }

                if (Request.Params.Get("t") != null)
                {
                    string ticket = Request.Params.Get("t");
                    char[] delimiters = new char[] { ':', ',' };
                    string[] ssoVariable = ticket.Split(delimiters, StringSplitOptions.None);
                    string ssoHash = ssoVariable[0];
                    string ssoTime = ssoVariable[1];
                    string ssoUserName = ssoVariable[2];
                    DateTime appTime = DateTime.MinValue;
                    int offsetTime = 60; // get this from config or similar

                    try
                    {
                        appTime = DateTime.ParseExact(ssoTime, "yyyyMMddHHmmss", null);
                    }
                    catch
                    {
                        // show error
                        return;
                    }

                    // Reject tokens outside the allowed clock-skew window (replay protection)
                    if (Math.Abs(appTime.Subtract(DateTime.Now).TotalSeconds) > offsetTime)
                    {
                        // show error
                        return;
                    }

                    bool isValid = false;
                    string ssoSharedSecret = "58ag;ai76"; // get this from config or similar
                    string hash = FormsAuthentication.HashPasswordForStoringInConfigFile(
                        string.Format("{0},{1},{2}", ssoSharedSecret, ssoTime, ssoUserName), "md5");

                    if (string.Compare(ssoHash, hash, true) == 0)
                    {
                        isValid = true;
                    }

                    if (isValid)
                    {
                        // do authenticate
                    }
                    else
                    {
                        // show error
                        return;
                    }
                }
                else
                {
                    // show error
                }
            }
        }

    Summary

    This is a very simple and basic SSO solution, and its main advantage is its simplicity: it only needs a single additional page to perform the SSO authentication, and it does not require modifying the existing system infrastructure.

    Read the article

  • Upgrade from 10.10 to 11.04

    - by hemanta pathak
    Upgrading from 10.10 to 11.04 using Update Manager normally works fine. But after installing a piece of software that bundles its own runtime environment (loader and system libraries) and installs its custom loader in the location where the native one resides, the upgrade fails: it starts, and after some time it encounters an error and aborts. Basically, /usr/bin/dpkg throws an error because it is unable to locate a system shared library in the aforesaid third-party runtime folder (/usr/bin/dpkg should not search the third-party runtime folder; instead it should look at the system default folder). But if we remove the installed third-party loader from the default system location /lib and place it in some other location, the upgrade problem goes away. This makes me believe /usr/bin/dpkg invokes (loads) the wrong loader and as such goes looking for its dependent libraries in the third-party folder. Can someone take a look at this? Is there a bug in dpkg?

    Read the article

  • Where are the systray icons for Dropbox in Ubuntu desktop 13.04 (minimal)?

    - by samvv
    I reinstalled Ubuntu desktop using the minimal CD image and the following command:

        $ sudo apt-get install ubuntu-desktop --no-install-recommends

    After that I used Ubuntu Software Center to make sure Unity supports application indicators: http://i.imgur.com/bYF162w.png. Everything works great, except for Dropbox. For some reason the icon doesn't appear in the tray, even though the application is running. Steam, on the other hand, runs just fine, so it seems there is nothing wrong with the tray itself. According to this post the tray icons should be in /usr/share/icons/hicolor/22x22/status, but that directory doesn't contain any Dropbox icons, and neither do any of the other resolutions. The answer is a bit outdated, so I'm not entirely sure it is still applicable to the current version of Dropbox. I did the usual thing of reinstalling Dropbox with:

        sudo apt-get purge nautilus-dropbox
        sudo apt-get install nautilus-dropbox

    But that didn't solve anything either. Does somebody know how to fix this issue?

    Read the article

  • Terminator Skull Crafted from Dollar Store Parts [Video]

    - by Jason Fitzpatrick
    Earlier this year we shared an Iron Man prop build made from Dollar Store parts. The same Dollar Store tinkerer is at it again, this time building a Terminator endoskull. James Bruton has a sort of mad-tinker knack for finding odds and ends at the Dollar Store and mashing them together into novel creations. In the video below, he shows how he took a pile of random junk from the store (plastic bowls, cheap computer speakers, even the packaging the junk came in) and turned it into a surprisingly polished Terminator skull. Hit up the link below for the build in photo-tutorial format. Dollar Store Terminator Endoskull Build [via Make]

    Read the article

  • Extremely slow transfer speed Ubuntu -> Windows

    - by Hailwood
    I have two laptops: one running Ubuntu 12.04 (ext4), the other running Windows 7 (NTFS). I am copying over 40 GB of data (one file) from the Ubuntu laptop to the Windows laptop (browsing the shared folder on Ubuntu from Windows and using copy/paste). But I am getting transfer speeds topping out at ~700 KB/s. Surely this is not right. I am transferring via wifi on both laptops. My download speeds can reach 7-8 MB/s on both laptops, so I know it is not the wifi cards or the router topping out.

        wlan0  Link encap:Ethernet  HWaddr 84:4b:f5:db:b4:85
               inet addr:192.168.1.66  Bcast:192.168.1.255  Mask:255.255.255.0
               inet6 addr: fe80::864b:f5ff:fedb:b485/64 Scope:Link
               UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
               RX packets:11941185 errors:0 dropped:0 overruns:0 frame:0
               TX packets:11306693 errors:0 dropped:0 overruns:0 carrier:0
               collisions:0 txqueuelen:1000
               RX bytes:10087111370 (10.0 GB)  TX bytes:7843524888 (7.8 GB)
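
    A hedged way to separate raw wifi throughput from SMB/Samba overhead, assuming iperf3 is installed on both machines (the address below stands in for the Windows laptop's):

        # on the Windows laptop:
        iperf3 -s
        # on the Ubuntu laptop:
        iperf3 -c 192.168.1.70
        # If iperf3 reports several MB/s but the file copy stays at ~700 KB/s,
        # the bottleneck is the SMB layer (Samba config, signing, protocol version)
        # rather than the network.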

    Read the article

  • Using Definition of Done to Drive Agile Maturity

    - by Dylan Smith
    I've been an Agile Coach at a lot of different clients over the years, and I want to share an approach I use to help them adopt and mature over time. It's important to realize that "Agile" is not a black/white, yes/no thing. Teams can be varying degrees of agile. I think of this as their agile maturity level. When I coach teams I want them to start out being a little agile, and get more agile as they mature. The approach I teach them is to use the definition of done as a technique to continuously improve their agile maturity over time.

    We're probably all familiar with the concept of "Done Done", which represents what it *actually* means for a feature to be done. Not just when a developer says he's done right after he writes that last line of code that makes the feature kind-of work. Done Done means the coding is done, it's been tested, installers and deployment packages have been created, user manuals have been updated, architecture docs have been updated, etc. To enable teams to internalize the concept of "Done Done", they usually get together and come up with their Definition of Done (DoD) that defines all the activities that need to be completed before a feature is considered Done Done.

    The Done Done technique typically is applied only to features (aka User Stories). What I do is extend this to apply to several concepts, such as User Stories, Sprints, Releases (and sometimes Check-Ins). During project kick-off I'll usually sit down with the team and go through an exercise of creating DoDs for each of these concepts (Stories/Sprints/Releases). We'll usually start by just brainstorming a bunch of activities that could end up in these various DoDs. Here are some examples:

    - Code Reviews
    - StyleCop
    - FxCop
    - User Manuals Updated
    - Architecture Docs Updated
    - Tested by QA
    - Tested by UAT
    - Installers Created
    - Support Knowledge Base Updated
    - Deployment Instructions (for Ops) Written
    - Automated Unit Tests Run
    - Automated Integration Tests Run

    Then we start by arranging these activities into the place they occur today (e.g. do you do UAT testing only once per release? Every sprint? Every feature?). If the team was previously Waterfall, most of these activities probably end up in the Release DoD. An extremely mature agile team would probably have most of these activities in the DoD for the User Stories (because an extremely mature agile team will probably do continuous deployment and release every story). So what we need to do as a team is work to move these activities from their current home (Release DoD) down into the Sprint DoD and eventually into the User Story DoD (and maybe into the lower-level Check-In DoD if we decide to use that).

    We don't have to move them all down to User Story immediately, but as a team we figure out what we think we're capable of moving down to the Sprint cycle and Story cycle immediately, and that becomes our starting DoDs. Over time the team makes an effort to continue moving activities down from Release->Sprint->Story as they become more agile and more mature. I try to encourage them to envision a world in which they deploy to production as each User Story is completed. They would need to be updating user manuals, creating installers, and doing UAT testing (typical Release-cycle activities) on every single User Story. They may never actually reach that point, but they should envision it, and strive to keep driving the activities down closer to the User Story cycle as they mature. This is a great technique to give a team an easy-to-follow roadmap to mature their agile practices over time.

    Sure, there are other aspects to maturity outside of this, but it's a great technique, and one that's easy to visualize, to drive agility into the team. Just keep moving those activities (aka "gates") down the board from Release->Sprint->Story. I'll try to give an example of what a recent client of mine had for their DoDs (this is from memory, so probably not 100% accurate):

    Release
    - Create/update deployment instructions for Ops
    - Instructional videos updated
    - Run manual regression test suite
    - UAT testing (in this case that meant deploying to an environment shared across the enterprise that mirrored production, and asking other business groups to test their own apps to ensure we didn't break anything outside our system)

    Sprint
    - Deploy to UAT environment (but not necessarily actually request that UAT testing occur)
    - User guides updated
    - Sprint features video created (in this case we decided to create a video each sprint showing off the progress: a video version of the Sprint Demo)

    User Story
    - Manual test scripts developed and run
    - Tested by BA
    - Deployed in shared QA environment, using the automated deployment process
    - Peer code review

    Code Check-In
    - Compiled (warning-free)
    - Passes StyleCop
    - Passes FxCop
    - Create installer packages
    - Run automated tests
    - Run automated integration tests

    PS – One of my clients had a great question when we went through this exercise. They said that if a Sprint is by definition done when the end date rolls around (time-boxed), isn't a DoD on a Sprint meaningless, since it's done on the end date regardless of whether those other activities are complete or not? My answer is that while that statement is true (the Sprint is done regardless when the end date rolls around), if the DoD activities haven't been completed I would consider the Sprint a failure (similar to not completing what was committed/planned; failure may be too strong a word, but you get the idea). In the Retrospective that will become an agenda item to discuss, to understand why we weren't able to complete the activities we agreed would need to be completed each Sprint.

    Read the article

  • Opinion on my recruitment portal idea [closed]

    - by user1498503
    I am creating a recruitment portal for IT professionals. In it, recruiters creating a job post would be asked to build a skills-requirement matrix:

    Essential Skills: ASP.NET MVC, Entity Framework
    Desired Skills: SQL Server 2008, IIS 7.0

    On the other hand, job seekers would also have their own skills matrix:

    Jobseeker #1 - Core Skills: ASP.NET MVC, Entity Framework, MongoDB; Secondary Skills: SQL Server 2008, IIS 7.0
    Jobseeker #2 - Core Skills: ASP.NET Web Forms; Secondary Skills: SQL Server 2008, IIS 7.0

    So when both job seekers apply for the same job, would it be a good idea for each of them to see the other's skills matrix for comparison? No personal details or CVs are shared. I think comparisons would help job seekers understand their areas for improvement and could motivate them to fill the skills gap. Your opinion would be appreciated. Regards

    Read the article

  • MVC .Net, WebMatrix talk presentations and webinars

    - by subodhnpushpak
    I presented sessions on MVC .NET and WebMatrix, covering what's new in MVC .NET and the architectural goodness of the MVC pattern. I also demonstrated how MVC 3 / MVC 4 harness HTML 5 and mobile along with jQuery and Modernizr. PHP coding using MVC and WebMatrix, and other advanced topics such as hosting PHP on Windows or porting a MySQL DB to MSSQL, are also part of the demos in the sessions. The slide decks (WebMatrix and WebMatrix2, by Subodh Pushpak) are shared online, and all the demos were recorded and shared as well. If you have any suggestions / ideas / comments, please do post.

    Read the article

  • The danger of changing the domain of your portfolio

    - by Mervin
    So I have an online portfolio available at mervin-ux-portfolio.com, but I am planning to change hosts since my current host is hitting me with a very high yearly renewal rate. When I inquired about domain transfers, they told me that since I had not initiated the transfer within 14 days of the domain's expiry, they cannot do it immediately and it would take about two weeks to release the domain name. Since I don't like the idea of my site being down for two weeks, I was wondering if I should start afresh with a new domain on a new host, and what the potential dangers of that are (I have a full backup of the site, so creating a replica on the new host won't be hard). I also won't be losing any business or work, since I currently work full time, but I was wondering about the challenges of getting my domain name back to the top of search results and basically getting it out there, assuming I go the new-domain approach. I know this is strictly not a UX question, but I was hoping people could give some suggestions on what I should do.

    Read the article

  • How to make the Qwt path known to the run-time linker in Xubuntu

    - by Rahul
    I've successfully installed Qwt on Xubuntu 12.04 (qmake, make, make install), but now I need to make the Qwt path known to Xubuntu's run-time linker. The manual says: "If you have installed a shared library, its path has to be known to the run-time linker of your operating system. On Linux systems read 'man ldconfig' (or google for it). Another option is to use the LD_LIBRARY_PATH environment variable (on some systems LIBPATH is used instead; on Mac OS X it is called DYLD_LIBRARY_PATH)." But being a newbie to the Linux environment, I'm not able to proceed further. Please help me with this.
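
    A minimal sketch of the two options the manual mentions, assuming Qwt installed its libraries under /usr/local/qwt-6.0.1/lib (adjust to your actual install prefix):

        # Option 1: register the directory with the run-time linker permanently
        echo /usr/local/qwt-6.0.1/lib | sudo tee /etc/ld.so.conf.d/qwt.conf
        sudo ldconfig

        # Option 2: set the environment variable for the current session only
        export LD_LIBRARY_PATH=/usr/local/qwt-6.0.1/lib:$LD_LIBRARY_PATH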

    Read the article

  • Track URLs from Amazon S3 using Google Analytics

    - by morktron
    I couldn't find any decent pay-per-view video solutions for low-budget clients, so I'm considering using a membership extension with Joomla and hosting the video on Amazon S3. The only issue is that once someone has signed up to view or download the video, anyone with web development experience will be able to get the URL of the video and freely publish it on the web. How can this be prevented? It looks like it can be done using IAM User Temporary Credentials (AWS SDK for PHP), but the client would prefer not to pay someone to spend hours writing custom PHP code to get this to work. With Amazon S3 I could at least check the log files, I guess, to manually monitor the URL, but is there a way to track the URL with Google Analytics? Or is there a more elegant solution?
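
    On the URL-leaking problem: S3 supports pre-signed URLs that expire after a set window, so a published link goes dead on its own. A hedged sketch using the AWS CLI rather than custom PHP (the bucket and key are hypothetical, and the bucket must otherwise be private):

        aws s3 presign s3://my-video-bucket/paid/video.mp4 --expires-in 3600
        # Emits a URL valid for one hour; generate a fresh one per member per viewing.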

    Read the article

  • What do I need to do to get started with a website? [closed]

    - by omar
    I am a student and I have made websites for some companies before, but now I would like to make a generic website for myself and don't know how to get started, as I have never had to deal with hosting or bandwidth before. I am looking to make a website that will provide users with information about me. In the future I might add things such as ordering or buying products, but for now the idea is to provide information only. I was told not to go with any web host outside of Canada, as I might risk the confidentiality of my users or myself. I have no idea where to get started or what I need. I now also have to deal with possible security issues or security holes I may leave in my website... So my question here is: what is a good and reliable web hosting company that can be trusted to some degree?

    Read the article

  • JPRT: A Build & Test System

    - by kto
    DRAFT A while back I did a little blogging on a system called JPRT, the hardware used, and a summary on my java.net weblog. This is an update on the JPRT system. JPRT ("JDK Putback Reliability Testing", but ignore what the letters stand for, I change what they mean every day, just to annoy people :\^) is a build and test system for the JDK, or any source base that has been configured for JPRT. As I mentioned in the above blog, JPRT is a major modification to a system called PRT that the HotSpot VM development team has been using for many years, very successfully I might add. Keeping the source base always buildable and reliable is the first step in the 12 steps of dealing with your product quality... or was that the 12 steps from Alcoholics Anonymous... oh well, anyway, it's the first of many steps. ;\^)

    Internally when we make changes to any part of the JDK, there are certain procedures we are required to perform prior to any putback or commit of the changes. The procedures often vary from team to team, depending on many factors, such as whether native code is changed, or if the change could impact other areas of the JDK. But a common requirement is a verification that the source base with the changes (and merged with the very latest source base) will build on many if not all 8 platforms, and a full 'from scratch' build, not an incremental build, which can hide full build problems. The testing needed varies, depending on what has been changed.

    Anyone that has worked on a project where multiple engineers or groups are submitting changes to a shared source base knows how disruptive a 'bad commit' can be on everyone. How many times have you heard: "So And So made a bunch of changes and now I can't build!"? But multiply that by 8 platforms, and make all the platforms old and antiquated OS versions with bizarre system setup requirements, and you have a pretty complicated situation (see http://download.java.net/jdk6/docs/build/README-builds.html). We don't tolerate bad commits, but our enforcement is somewhat lacking; usually it's an 'after the fact' correction. Luckily the Source Code Management system we use (another antique called TeamWare) allows for a tree of repositories, and 'bad commits' are usually isolated to a small team. Punishment to date has been pretty drastic; the Queen of Hearts in 'Alice in Wonderland' said 'Off With Their Heads', and trust me, you don't want to be the engineer doing a 'bad commit' to the JDK. With JPRT, hopefully this will become a thing of the past. Not that we have had many 'bad commits' to the master source base; in general the teams doing the integrations know how important their jobs are, and they rarely make 'bad commits'. So for these JDK integrators, maybe what JPRT does is keep them from chewing their fingernails at night. ;\^)

    Over the years each of the teams has accumulated sets of machines they use for building, or they use some of the shared machines available to all of us. But the hunt for build machines is just part of the job, or has been. And although inconsistency among the build machines hasn't been a horrible problem, often you never know if the Solaris build machine you are using has all the right patches, or if the Linux machine has the right service pack, or if the Windows machine has its latest updates. Hopefully the JPRT system can solve this problem. When we ship the binary JDK bits, it is SO very important that the build machines are correct, and we know how difficult it is to get them set up.
    Sure, if you need to debug a JDK problem that only shows up on Windows XP or Solaris 9, you'll still need to hunt down a machine, but not as a regular everyday occurrence. I'm a big fan of a regular nightly build and test system, constantly verifying that a source base builds and tests out. There are many examples of automated build/test systems, some that trigger on any change to the source base, some that just run every night. Some provide a protection gateway to the 'golden' source base, which only gets changes that the nightly process has verified are good. The JPRT (and PRT) system is meant to guard the source base before anything is sent to it, guarding all source bases from the evil developer... well, maybe 'evil' isn't the right word, I haven't met many 'evil' developers, more like 'error prone' developers. ;\^) Humm, come to think of it, I may be one from time to time. :\^{

    But the point is that by spreading the build over a set of machines, and getting the turnaround down to under an hour, it becomes realistic to completely build on all platforms and test it, on every putback. We have the technology; we can build and rebuild and rebuild, and it will be better than it was before, ha ha... Anybody remember the Six Million Dollar Man? Man, I gotta get out more often... Anyway, now the nightly build and test can become a 'fetch the latest JPRT build bits' and start extensive testing (the testing not done by JPRT, or the platforms not tested by JPRT).

    Is it Open Source? No, not yet. Would you like it to be? Let me know. Or is it more important that you have the ability to use such a system for JDK changes? So enough blabbering on about this JPRT system; tell me what you think. And let me know if you want to hear more about it or not. Stay tuned for the next episode, same Bloody Bat time, same Bloody Bat channel. ;\^) -kto

    Read the article

  • AngularJS: structuring a web application with multiple ng-apps

    - by mg1075
    The blogosphere has a number of articles on the topic of AngularJS app structuring guidelines such as these (and others): http://www.johnpapa.net/angular-app-structuring-guidelines/ http://codingsmackdown.tv/blog/2013/04/19/angularjs-modules-for-great-justice/ http://danorlando.com/angularjs-architecture-understanding-modules/ http://henriquat.re/modularizing-angularjs/modularizing-angular-applications/modularizing-angular-applications.html However, one scenario I have yet to come across for guidelines and best practices is the case where you have a large web application containing multiple "mini-spa" apps, and the mini-spa apps all share a certain amount of code. I am not referring to the case of trying to have multiple ng-app declarations on the same page; rather, I mean different sections of a large site that have their own, unique ng-app declaration. As Scott Allen writes in his OdeToCode blog: One scenario I haven't found addressed very well is the scenario where multiple apps exist in the same greater web application and require some shared code on the client. Are there any recommended approaches to take, pitfalls to avoid, or good sample structures of this scenario that you can point to?

    Read the article

  • What could be a reason for a cross-platform server application developer to make his app work in multiple processes?

    - by Kabumbus
    We are considering development of a server app that works heavily with big data streams. The app will run on one powerful server, and it is to be developed as a cross-platform application that works on Windows, Mac OS X and Linux: the same code on many platforms, for a standalone server architecture. We wonder what benefits distributing the application not only over threads but over processes would bring to programmers and to server end users, and why. Also, some people have said to me that even with 48 cores, four process threads would be shared by the OS across all cores... is that true, by the way?

    Read the article

  • Is it possible for Windows viruses downloaded through Ubuntu to affect my Windows OS?

    - by fr33c0untry
    I know that Ubuntu is immune to viruses, so there is no question of it getting infected while browsing the net. However, I frequently transfer files from my pen drive (which I use on other virus-infested computers) to my own laptop and save them on the data drive that is shared by both Windows and Ubuntu. I would like to know if there is a chance that Windows viruses could get saved there and then infect Windows whenever I switch to it later on. It's ironic that I scan my pen drive using Avast on Windows and then save all my files to my hard drive to keep my laptop free of viruses, even though I have Ubuntu. Can anyone suggest an alternative? Thanks in advance.
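
    On the scanning alternative: the check can be done from the Ubuntu side before the files ever reach the shared drive. A hedged sketch using ClamAV (the mount point below is hypothetical):

        sudo apt-get install clamav
        sudo freshclam                 # update the virus signature database
        clamscan -r /media/pendrive    # recursively scan the pen drive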

    Read the article
