Search Results

Search found 28186 results on 1128 pages for 'site master'.

  • Is there a way to make IETab always open a site in Internet Explorer?

    - by reg
    Hello, IETab offers a setting that lets you define sites that should always be opened using IE's rendering engine. This is great and I use it all the time. There are a few sites, though, with which I encounter problems when I access them this way (needless to say that these have some IE-only features, otherwise I would simply use Firefox without IETab). So my question is: is there a way to make IETab (or any other extension that does the same thing) automatically open a specific list of sites in IE? To be clear: I want a way to specify a list of sites that will open an external IE process from Firefox, when clicking on those links or bookmarks from within Firefox.

  • How should I interpret site analytics with 11 pageviews in a 3-second visit?

    - by Juank
    I'm using Google Analytics and recently I've noticed some weird trends going on. I have a lot of visits that last mere seconds but record several pageviews... more than a normal human can view in that amount of time. A specific case: the only visitor from Ireland I've had so far recorded 11 pageviews in a 3-second visit. Are these crawlers? Shouldn't Google Analytics filter those out?

  • If I let Google handle email for my domain, my WordPress site won't send out emails anymore

    - by Fulvio
    Since I decided to let Google handle all email for my domain, while the domain is hosted on a third-party server, emails sent out by a WordPress installation no longer work. My supposition is that since all email is being routed to Google, my account on that server for that domain is unable to send out emails. I definitely want to keep using Google services for handling my email, since that comes with all the advantages of a Google account, but I need my WordPress installation to send out administrative emails. I run my server with cPanel. How do I configure that specific account and/or WordPress so it can still send out emails? People don't need to be able to reply to the emails sent from the server (I might eventually set a reply-to address). Thanks
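
    One approach (a sketch, not part of the original question) is to have WordPress hand its mail to Google's SMTP servers directly, bypassing the local mail account entirely. WordPress's bundled PHPMailer can be configured via the phpmailer_init hook; the host is Google's standard SMTP endpoint, and the credentials are hypothetical placeholders:

        <?php
        // Sketch: route wp_mail() through Google-hosted SMTP instead of the local MTA.
        // Username and password are placeholders -- use an account on the Google-hosted domain.
        function route_mail_through_google( $phpmailer ) {
            $phpmailer->isSMTP();
            $phpmailer->Host       = 'smtp.gmail.com';
            $phpmailer->Port       = 587;
            $phpmailer->SMTPSecure = 'tls';
            $phpmailer->SMTPAuth   = true;
            $phpmailer->Username   = 'wordpress@example.com'; // placeholder
            $phpmailer->Password   = 'app-password';          // placeholder
        }
        add_action( 'phpmailer_init', 'route_mail_through_google' );

    Setting a reply-to address is then just a matter of passing a "Reply-To:" header to wp_mail().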

  • Something keeps changing the default permissions of my settings.php file for a web site on Linux

    - by JrSysAdmin
    I keep changing the file permissions of /var/www/html/websitename/settings.php to 775, and within 15 minutes or so they automatically change back to 555. The owner of the file is "apache" and the group ownership is our Linux developers' group, just like all of the other files, which are not having this issue. Obviously there must be some sort of process running that is automatically changing the file permissions (Apache, maybe?), but I haven't been able to figure out what. Any help would be greatly appreciated.
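
    One way to catch the culprit (a sketch, not from the original question; it assumes the Linux audit daemon is installed and running) is to watch the file for attribute changes and then search the audit log once the permissions flip back:

        # Watch settings.php for writes (w) and attribute changes (a) such as chmod
        auditctl -w /var/www/html/websitename/settings.php -p wa -k settings-perms

        # After the permissions change back, see which process and user did it
        ausearch -k settings-perms

    Worth noting: some web applications deliberately protect their settings file (Drupal, whose settings.php this path resembles, makes that file read-only at install time), so ruling out the application itself is a reasonable first step.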

  • Multiple SSL certificates on Apache using multiple public IPs - not working

    - by St. Even
    I need to configure multiple SSL certificates on a single Apache server. I already know that I need multiple external IP addresses, as I cannot use SNI (this server only runs Apache 2.2.3). I assumed that I had everything configured correctly; unfortunately, things are not working as they should (or maybe I should say, as I expected them to work)...

    In my httpd.conf I have:

        NameVirtualHost *:80
        NameVirtualHost *:443

    Let's say my public IP is 12.0.0.1 and my private IP is 192.168.0.1. When I use the public IP in my vhost, my default website is shown instead of the one defined in my vhost, e.g.:

        <VirtualHost 12.0.0.1:443>
            ServerAdmin [email protected]
            ServerName blablabla.site.com
            DocumentRoot /data/sites/blablabla.site.com
            ErrorLog /data/sites/blablabla.site.com-error.log
            #CustomLog /data/sites/blablabla.site.com-access.log common
            SSLEngine On
            SSLCertificateFile /etc/httpd/conf/ssl/blablabla.site.com.crt
            SSLCertificateKeyFile /etc/httpd/conf/ssl/blablabla.site.com.key
            SSLCertificateChainFile /etc/httpd/conf/ssl/blablabla.site.com.ca-bundle
            <Location />
                SSLRequireSSL On
                SSLVerifyDepth 1
                SSLOptions +StdEnvVars +StrictRequire
            </Location>
        </VirtualHost>

    When I use the private IP in my vhost, everything works as it should (the website defined in my vhost is shown), e.g.:

        <VirtualHost 192.168.0.1:443>
            ...same as above...
        </VirtualHost>

    My server is listening on all interfaces:

        [root@grbictwebp02 httpd]# netstat -tulpn | grep :443
        tcp   0   0 0.0.0.0:443   0.0.0.0:*   LISTEN   5585/httpd

    What am I doing wrong? If I cannot get this to work, I cannot continue to add the second SSL certificate on the other public IP... If more information is required, just let me know!

  • Force www. on multi domain site and retain http or https [closed]

    - by John Isaacks
    I am using CakePHP, which already ships with an .htaccess file that looks like:

        <IfModule mod_rewrite.c>
            RewriteEngine on
            RewriteRule ^$ app/webroot/ [L]
            RewriteRule (.*) app/webroot/$1 [L]
        </IfModule>

    I want to force www. (unless it is a subdomain) to avoid duplicate-content penalties. It needs to retain http or https. Also, this application will have multiple domains pointing to it, so the code needs to work with any domain. One possible rule set is sketched below.
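
    A minimal sketch (not from the original question; it assumes Apache 2.2 or later with mod_rewrite): redirect any bare two-label host that lacks www. to its www. form, preserving the scheme. These lines would go above CakePHP's existing rules in the same .htaccess:

        RewriteEngine On
        # Only act on hosts that don't already start with www.
        RewriteCond %{HTTP_HOST} !^www\. [NC]
        # Only bare second-level domains (exactly one dot), so sub.example.com is left alone
        RewriteCond %{HTTP_HOST} ^[^.]+\.[^.]+$
        # Capture "s" into %1 when HTTPS is on ("ons" matches "^on(s)"); nothing when off ("offs")
        RewriteCond %{HTTPS}s ^on(s)|offs
        RewriteRule ^ http%1://www.%{HTTP_HOST}%{REQUEST_URI} [R=301,L]

    Caveat: the one-dot test misreads two-part TLDs such as example.co.uk as subdomains, so domains like that would need their own rule.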

  • Subdomain on a separate server (Windows/IIS) won't load default web site?

    - by DOTang
    I've got a domain hosted with godaddy.com. I set it up so that subdomain.mysite.com points to a different server, which runs Windows/IIS (say, for example, IP 1.1.1.1). When I go to the subdomain I get no errors, but no webpage loads. In IIS, under the default website, I added a binding for subdomain.mysite.com with the IP 1.1.1.1, but still nothing loads, just a totally blank page. I know for sure the subdomain host record is working correctly, because when I ping my subdomain the correct IP shows. What am I missing to get this working?

  • How to block a specific site's directory in Windows?

    - by Creedy
    How can I block a specific directory of a website, e.g. http://example.com/someSite? Unfortunately the hosts file is not an option, since you can only block whole domains there, and any "/" just breaks those rules. This is just for my personal protection against visiting some sites too often, while I still have to be able to get to the other parts of that domain (e.g. example.com/someOtherSite). It would be great if someone knows a solution to this.
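
    The hosts file can never do this: it maps hostnames to addresses before any path exists in the request, which is why a "/" breaks the entry. Path-level blocking needs something that sees the full URL, such as a local filtering proxy. A sketch using Privoxy (a suggestion, not from the original question; syntax per its user.action file, which is worth verifying against the installed version):

        # Block only this path; the rest of example.com stays reachable
        { +block{personal time limit} }
        .example.com/someSite

    The browser would then be pointed at the proxy (Privoxy listens on 127.0.0.1:8118 by default).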

  • How much traffic per month would my site need? [closed]

    - by camran
    How many GB of traffic would be required per month? Is there any way to calculate this? Webhosting companies have a limit when ordering, so I need to know... It's a classifieds website using PHP and MySQL. The MySQL database has around 500,000 records. Not much graphics. Pretty advanced search features. How much traffic do you think I would need? Thanks
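
    The usual back-of-the-envelope estimate: monthly transfer is roughly pageviews per month times average page weight (HTML plus whatever images, CSS and JS are not cached). A sketch of the arithmetic in Python, with made-up numbers to be replaced by real measurements:

        # All figures are hypothetical placeholders -- measure your own site.
        pageviews_per_month = 300000     # e.g. 10,000 pageviews/day
        avg_page_kb = 120                # HTML + uncached assets per view
        gb_per_month = pageviews_per_month * avg_page_kb / (1024.0 * 1024.0)
        print("~%.1f GB/month" % gb_per_month)   # ~34.3 GB/month with these numbers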

  • How many site visitors can my server handle? [closed]

    - by coolboycsaba
    I have a website, and I want to host it on my own computer, but I'm wondering if it's good enough. The website checks whether the user is logged in and then displays 15 items (title, description) from a MySQL database, along with the rating (stored in another database) and the comments (another database) for each item. It also displays some stats (number of items, comments), and I have an image for each item. My specs are: AMD Athlon 64 X2 Dual Core 5600+ at 2.90 GHz, 4.00 GB RAM, Windows 7 64-bit. So what do you think: how many visitors and items could it handle, at once or daily? My internet connection is good, around 7-10 Mbps upload and the same download speed.
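
    Rather than guessing from the hardware specs, this is straightforward to measure: run a load-testing tool against the box and watch where it saturates. A sketch with ApacheBench (the URL is a placeholder for the item-listing page):

        # 1000 requests total, 50 concurrent, against the local server
        ab -n 1000 -c 50 http://127.0.0.1/index.php

    The requests-per-second figure it reports gives a rough ceiling; in practice, a 7-10 Mbps upload link tends to become the bottleneck (serving the images especially) well before a dual-core CPU does.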

  • Some help needed with setting up the PERFECT workflow for web development with 2-3 guys using Subversion

    - by Roeland
    Hey guys! I run a small web development company alongside my brother and a friend. After doing extensive research I have decided on using Subversion for version control. Here is how I currently plan on running typical development. Keep in mind there are three of us, each in a separate location.

    I set up an account with Springloops (springloops.com) Subversion hosting. Each time I work on a new project, I create a repository for it. So let's say in this case I am working on site1. I want to have three versions of the site on the internet:

    - Web development - the server me and the other developers publish to (site1.dev.bythepixel.com)
    - Client preview - the server that we update every few days with a good revision for the client to see (site1.bythepixel.com)
    - Live site - the site I publish to when going live (site1.com)

    Each web development machine (at each location) will have a local copy of XAMPP running virtual hosts to allow multiple websites to be worked on. The document root of each local copy is set up to be the same as the local copy of the Subversion repository, so we can make small tweaks and preview them immediately. When some work has been done, a commit is made to the repository for the site. I will have the dev site pushed automatically (it's an option in Springloops). Then, whenever I feel ready to push to the client site, I will do so.

    Now, I have a few concerns with this workflow:

    1. I am currently using CodeIgniter, and in the config file I generally set the root of the site, e.g. http://www.site1.com. So it looks like each time I publish to one of the internet servers, I will have to modify the config file? Is there any way to make it so certain files are set per server, so that when I hit publish to client preview it just uploads the config file for the client preview server? (One common trick is sketched below.)
    2. I don't want the live site, the client preview site and the dev site to share the same MySQL server, for a variety of reasons. So does this once again mean that I have to adjust the DB server info each time I push to a different site?
    3. Does this workflow make sense?

    If you have any suggestions, please let me know. I plan for this to be the workflow I use for the next few years. I just need to put a system in place that allows for future expansion! Thanks a bunch!!
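
    For concern 1, one common trick (a sketch, not from the original post; the hostnames are the ones mentioned above, the DB hosts are placeholders) is to derive the configuration from the request host, so the very same committed files work on every server:

        <?php
        // config.php sketch for CodeIgniter: base_url follows whichever host served the request
        $config['base_url'] = 'http://' . $_SERVER['HTTP_HOST'] . '/';

        // database.php sketch: pick credentials per server, which also answers concern 2
        switch ($_SERVER['HTTP_HOST']) {
            case 'site1.dev.bythepixel.com':
                $db['default']['hostname'] = 'localhost';            // dev MySQL
                break;
            case 'site1.bythepixel.com':
                $db['default']['hostname'] = 'preview-db.internal';  // placeholder
                break;
            default:                                                 // live
                $db['default']['hostname'] = 'live-db.internal';     // placeholder
        }

    The alternative is to keep each server's config file out of version control entirely (svn:ignore) and place it on the server once by hand, so a publish never overwrites it.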

  • On Redirect - Failed to generate a user instance of SQL Server...

    - by Craig Russell
    Hello (this is a long post, sorry),

    I am writing an application in ASP.NET MVC 2 and I have reached a point where I am receiving this error when I connect remotely to my server:

        Failed to generate a user instance of SQL Server due to failure in retrieving the user's
        local application data path. Please make sure the user has a local user profile on the
        computer. The connection will be closed.

    I thought I had worked around this problem locally, as I was getting this error in debug when the site was redirected to a baseUrl if a subdomain was invalid, using this code:

        protected override void Initialize(RequestContext requestContext)
        {
            string[] host = requestContext.HttpContext.Request.Headers["Host"].Split(':');
            _siteProvider.Initialise(host, LiveMeet.Properties.Settings.Default["baseUrl"].ToString());
            base.Initialize(requestContext);
        }

        protected override void OnActionExecuting(ActionExecutingContext filterContext)
        {
            if (Site == null)
            {
                string[] host = filterContext.HttpContext.Request.Headers["Host"].Split(':');
                string newUrl;
                if (host.Length == 2)
                    newUrl = "http://sample.local:" + host[1];
                else
                    newUrl = "http://sample.local";
                Response.Redirect(newUrl, true);
            }
            ViewData["Site"] = Site;
            base.OnActionExecuting(filterContext);
        }

        public Site Site
        {
            get { return _siteProvider.GetCurrentSite(); }
        }

    The Site object is returned from a provider named _siteProvider. This does two checks: first against a database containing a list of all available subdomains; then, if that fails to find a valid subdomain or valid domain name, it searches a memory cache of reserved domains, and if that doesn't hit either, it returns a baseUrl that all invalid domains are redirected to. Locally this worked when I added the true to Response.Redirect, assuming a halting of the current execution and restarting the execution on the browser redirect.

    What I have found in the stack trace is that the error is thrown on the second attempt to access the database:

        #region ISiteProvider Members

        public void Initialise(string[] host, string basehost)
        {
            if (host[0].Contains(basehost))
                host = host[0].Split('.');

            Site getSite = GetSites().WithDomain(host[0]);
            if (getSite == null)
            {
                sites.TryGetValue(host[0], out getSite);
            }
            _site = getSite;
        }

        public Site GetCurrentSite()
        {
            return _site;
        }

        public IQueryable<Site> GetSites()
        {
            return from p in _repository.groupDomains
                   select new Site
                   {
                       Host = p.domainName,
                       GroupGuid = (Guid)p.groupGuid,
                       IsSubDomain = p.isSubdomain
                   };
        }

        #endregion

    The LINQ query above is hit first, with a filter of WithDomain; the error isn't thrown till the WithDomain filter is attempted.

    In summary: the error is hit after the page is redirected, so the first iteration is executing as expected (so permissions on the database, user profiles etc. are correct). Shortly after the redirect, when it filters the database query for the possible domain/subdomain of the current redirected page, it errors out.

  • CTE Join query issues

    - by Lee_McIntosh
    Hi everyone, this problem has my head going round in circles at the moment, and I'm wondering if anyone could give any pointers as to where I'm going wrong.

    I'm trying to produce a sproc that builds a dataset, to be called by SSRS, for graphs spanning the last 6 months. The data for example purposes uses three tables (there are more, but they won't change the issue at hand), as follows:

        tbl_ReportList:
        Report  Site
        ----------------
        North   abc
        North   def
        East    bbb
        East    ccc
        East    ddd
        South   poa
        South   pob
        South   poc
        South   pod
        West    xyz

        tbl_TicketsRaisedThisMonth:
        Date                     Site  Type       NoOfTickets
        -----------------------------------------------------
        2010-07-01 00:00:00.000  abc   Support    101
        2010-07-01 00:00:00.000  abc   Complaint  21
        2010-07-01 00:00:00.000  def   Support    6
        ...
        2010-12-01 00:00:00.000  abc   Support    93
        2010-12-01 00:00:00.000  xyz   Support    5

        tbl_FeedBackRequests:
        Date                     Site  NoOfFeedBackR
        --------------------------------------------
        2010-07-01 00:00:00.000  abc   101
        2010-07-01 00:00:00.000  def   11
        ...
        2010-12-01 00:00:00.000  abc   63
        2010-12-01 00:00:00.000  xyz   4

    I'm using CTEs to simplify the code, which is as follows:

        DECLARE @ReportName VarChar(200)
        SET @ReportName = 'North';

        WITH TicketsRaisedThisMonth AS
        (
            SELECT [Date], Site, SUM(NoOfTickets) AS NoOfTickets
            FROM tbl_TicketsRaisedThisMonth
            WHERE [Date] >= DATEADD(mm, DATEDIFF(m,0,GETDATE())-6,0)
            GROUP BY [Date], Site
        ),
        FeedBackRequests AS
        (
            SELECT [Date], Site, SUM(NoOfFeedBackR) AS NoOfFeedBackR
            FROM tbl_FeedBackRequests
            WHERE [Date] >= DATEADD(mm, DATEDIFF(m,0,GETDATE())-6,0)
            GROUP BY [Date], Site
        )
        SELECT trtm.[Date],
            SUM(trtm.NoOfTickets) AS NoOfTickets,
            SUM(fbr.NoOfFeedBackR) AS NoOfFeedBackR
        FROM tbl_ReportList rpts
        LEFT OUTER JOIN TicketsRaisedThisMonth trtm ON rpts.Site = trtm.Site
        LEFT OUTER JOIN FeedBackRequests fbr ON rpts.Site = fbr.Site
        WHERE rpts.Report = @ReportName
        GROUP BY trtm.[Date]

    And the output, when the sproc is passed a parameter such as 'North', is meant to be as follows:

        Date                     NoOfTickets             NoOfFeedBackR
        --------------------------------------------------------------------------
        2010-07-01 00:00:00.000  128                     112
        2010-08-01 00:00:00.000  <data for that month>   <data for that month>
        2010-09-01 00:00:00.000  <data for that month>   <data for that month>
        2010-10-01 00:00:00.000  <data for that month>   <data for that month>
        2010-11-01 00:00:00.000  <data for that month>   <data for that month>
        2010-12-01 00:00:00.000  122                     63

    The issue I'm having is that when I execute the query, I'm given a repeated list of values for each month: 128 will repeat 6 times, then the next month's value repeats 6 times, and so on. argh!
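
    A likely cause (my reading, not from the thread): both CTEs are joined on Site only, so every month-row in TicketsRaisedThisMonth pairs with every month-row in FeedBackRequests for the same site, six months times six months. Joining the two CTEs on the month as well should collapse the duplicates; a sketch:

        SELECT trtm.[Date],
            SUM(trtm.NoOfTickets) AS NoOfTickets,
            SUM(fbr.NoOfFeedBackR) AS NoOfFeedBackR
        FROM tbl_ReportList rpts
        LEFT OUTER JOIN TicketsRaisedThisMonth trtm ON rpts.Site = trtm.Site
        LEFT OUTER JOIN FeedBackRequests fbr ON rpts.Site = fbr.Site
                                            AND fbr.[Date] = trtm.[Date]
        WHERE rpts.Report = @ReportName
        GROUP BY trtm.[Date]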

  • Scrum in 5 Minutes

    - by Stephen.Walther
    The goal of this blog entry is to explain the basic concepts of Scrum in less than five minutes. You learn how Scrum can help a team of developers to successfully complete a complex software project.

    Product Backlog and the Product Owner

    Imagine that you are part of a team which needs to create a new website – for example, an e-commerce website. You have an overwhelming amount of work to do. You need to build (or possibly buy) a shopping cart, install an SSL certificate, create a product catalog, create a Facebook page, and at least a hundred other things that you have not thought of yet. According to Scrum, the first thing you should do is create a list. Place the highest priority items at the top of the list and the lower priority items lower in the list. For example, creating the shopping cart and buying the domain name might be high priority items and creating a Facebook page might be a lower priority item. In Scrum, this list is called the Product Backlog.

    How do you prioritize the items in the Product Backlog? Different stakeholders in the project might have different priorities. Gary, your division VP, thinks that it is crucial that the e-commerce site has a mobile app. Sally, your direct manager, thinks taking advantage of new HTML5 features is much more important. Multiple people are pulling you in different directions. According to Scrum, it is important that you always designate one person, and only one person, as the Product Owner. The Product Owner is the person who decides what items should be added to the Product Backlog and the priority of the items in the Product Backlog. The Product Owner could be the customer who is paying the bills, the project manager who is responsible for delivering the project, or a customer representative. The critical point is that the Product Owner must always be a single person and that single person has absolute authority over the Product Backlog.

    Sprints and the Sprint Backlog

    So now the developer team has a prioritized list of items and they can start work. The team starts implementing the first item in the Backlog — the shopping cart — and the team is making good progress. Unfortunately, however, half-way through the work of implementing the shopping cart, the Product Owner changes his mind. The Product Owner decides that it is much more important to create the product catalog before the shopping cart. With some frustration, the team switches their developmental efforts to focus on implementing the product catalog. However, part way through completing this work, once again the Product Owner changes his mind about the highest priority item. Getting work done when priorities are constantly shifting is frustrating for the developer team and it results in lower productivity. At the same time, however, the Product Owner needs to have absolute authority over the priority of the items which need to get done. Scrum solves this conflict with the concept of Sprints.

    In Scrum, a developer team works in Sprints. At the beginning of a Sprint the developers and the Product Owner agree on the items from the backlog which they will complete during the Sprint. This subset of items from the Product Backlog becomes the Sprint Backlog. During the Sprint, the Product Owner is not allowed to change the items in the Sprint Backlog. In other words, the Product Owner cannot shift priorities on the developer team during the Sprint. Different teams use Sprints of different lengths, such as one-month Sprints, two-week Sprints, and one-week Sprints. For high-stress, time-critical projects, teams typically choose shorter Sprints such as one-week Sprints. For more mature projects, longer one-month Sprints might be more appropriate. A team can pick whatever Sprint length makes sense for them, just as long as the team is consistent. You should pick a Sprint length and stick with it.

    Daily Scrum

    During a Sprint, the developer team needs to have meetings to coordinate their work on completing the items in the Sprint Backlog. For example, the team needs to discuss who is working on what and whether any blocking issues have been discovered. Developers hate meetings (well, sane developers hate meetings). Meetings take developers away from their work of actually implementing stuff as opposed to talking about implementing stuff. However, a developer team which never has meetings and never coordinates their work also has problems. For example, Fred might get stuck on a programming problem for days and never reach out for help even though Tom (who sits in the cubicle next to him) has already solved the very same problem. Or, both Ted and Fred might have started working on the same item from the Sprint Backlog at the same time.

    In Scrum, these conflicting needs – limiting meetings but enabling team coordination – are resolved with the idea of the Daily Scrum. The Daily Scrum is a meeting for coordinating the work of the developer team which happens once a day. To keep the meeting short, each developer answers only the following three questions:

    1. What have you done since yesterday?
    2. What do you plan to do today?
    3. Any impediments in your way?

    During the Daily Scrum, developers are not allowed to talk about issues with their cat, do demos of their latest work, or tell heroic stories of programming problems overcome. The meeting must be kept short — typically about 15 minutes. Issues which come up during the Daily Scrum should be discussed in separate meetings which do not involve the whole developer team.

    Stories and Tasks

    Items in the Product or Sprint Backlog – such as building a shopping cart or creating a Facebook page – are often referred to as User Stories or Stories. The Stories are created by the Product Owner and should represent some business need. Unlike the Product Owner, the developer team needs to think about how a Story should be implemented. At the beginning of a Sprint, the developer team takes the Stories from the Sprint Backlog and breaks the stories into tasks. For example, the developer team might take the Create a Shopping Cart story and break it into the following tasks:

    - Enable users to add and remove items from the shopping cart
    - Persist the shopping cart to the database between visits
    - Redirect the user to the checkout page when the Checkout button is clicked

    During the Daily Scrum, members of the developer team volunteer to complete the tasks required to implement the next Story in the Sprint Backlog. When a developer talks about what he did yesterday or plans to do tomorrow, then the developer should be referring to a task. Stories are owned by the Product Owner and a story is all about business value. In contrast, the tasks are owned by the developer team and a task is all about implementation details. A story might take several days or weeks to complete. A task is something which a developer can complete in less than a day. Some teams get lazy about breaking stories into tasks. Neglecting to break stories into tasks can lead to “Never Ending Stories.” If you don’t break a story into tasks, then you can’t know how much of a story has actually been completed, because you don’t have a clear idea about the implementation steps required to complete the story.

    Scrumboard

    During the Daily Scrum, the developer team uses a Scrumboard to coordinate their work. A Scrumboard contains a list of the stories for the current Sprint, the tasks associated with each Story, and the state of each task. The developer team uses the Scrumboard so everyone on the team can see, at a glance, what everyone is working on. As a developer works on a task, the task moves from state to state and the state of the task is updated on the Scrumboard. Common task states are ToDo, In Progress, and Done. Some teams include additional task states such as Needs Review or Needs Testing.

    Some teams use a physical Scrumboard. In that case, you use index cards to represent the stories and the tasks and you tack the index cards onto a physical board. Using a physical Scrumboard has several disadvantages. A physical Scrumboard does not work well with a distributed team – for example, it is hard to share the same physical Scrumboard between Boston and Seattle. Also, generating reports from a physical Scrumboard is more difficult than generating reports from an online Scrumboard.

    Estimating Stories and Tasks

    Stakeholders in a project, the people investing in a project, need to have an idea of how a project is progressing and when the project will be completed. For example, if you are investing in creating an e-commerce site, you need to know when the site can be launched. It is not enough to just say that “the project will be done when it is done” because the stakeholders almost certainly have a limited budget to devote to the project. The people investing in the project cannot determine the business value of the project unless they can have an estimate of how long it will take to complete the project.

    Developers hate to give estimates. The reason that developers hate to give estimates is that the estimates are almost always completely made up. For example, you really don’t know how long it takes to build a shopping cart until you finish building a shopping cart, and at that point, the estimate is no longer useful. The problem is that writing code is much more like Finding a Cure for Cancer than Building a Brick Wall. Building a brick wall is very straightforward. After you learn how to add one brick to a wall, you understand everything that is involved in adding a brick to a wall. There is no additional research required and no surprises. If, on the other hand, I assembled a team of scientists and asked them to find a cure for cancer, and estimate exactly how long it will take, they would have no idea. The problem is that there are too many unknowns. I don’t know how to cure cancer, I need to do a lot of research here, so I cannot even begin to estimate how long it will take.

    So developers hate to provide estimates, but the Product Owner and other product stakeholders have a legitimate need for estimates. Scrum resolves this conflict by using the idea of Story Points. Different teams use different units to represent Story Points. For example, some teams use shirt sizes such as Small, Medium, Large, and X-Large. Some teams prefer to use Coffee Cup sizes such as Tall, Short, and Grande. Finally, some teams like to use numbers from the Fibonacci series. These alternative units are converted into a Story Point value. Regardless of the type of unit which you use to represent Story Points, the goal is the same. Instead of attempting to estimate a Story in hours (which is doomed to failure), you use a much less fine-grained measure of work. A developer team is much more likely to be able to estimate that a Story is Small or X-Large than the exact number of hours required to complete the story.

    So you can think of Story Points as a compromise between the needs of the Product Owner and the developer team. When a Sprint starts, the developer team devotes more time to thinking about the Stories in a Sprint and the developer team breaks the Stories into Tasks. In Scrum, you estimate the work required to complete a Story by using Story Points and you estimate the work required to complete a task by using hours. The difference between Stories and Tasks is that you don’t create a task until you are just about ready to start working on it. A task is something that you should be able to complete within a day, so you have a much better chance of providing an accurate estimate of the work required to complete a task than a story.

    Burndown Charts

    In Scrum, you use Burndown charts to represent the remaining work on a project. You use Release Burndown charts to represent the overall remaining work for a project and you use Sprint Burndown charts to represent the overall remaining work for a particular Sprint. You create a Release Burndown chart by calculating the remaining number of uncompleted Story Points for the entire Product Backlog every day. The vertical axis represents Story Points and the horizontal axis represents time. A Sprint Burndown chart is similar to a Release Burndown chart, but it focuses on the remaining work for a particular Sprint. There are two different types of Sprint Burndown charts: you can represent the remaining work in a Sprint either with Story Points or with task hours (the original post illustrates this with an image, taken from Wikipedia, that uses hours).

    When each Product Backlog Story is completed, the Release Burndown chart slopes down. When each Story or task is completed, the Sprint Burndown chart slopes down. Burndown charts do not always slope down over time, though. As new work is added to the Product Backlog, the Release Burndown chart slopes up. If new tasks are discovered during a Sprint, the Sprint Burndown chart will also slope up. The purpose of a Burndown chart is to give you a way to track team progress over time. If, halfway through a Sprint, the Sprint Burndown chart is still climbing a hill, then you know that you are in trouble.

    Team Velocity

    Stakeholders in a project always want more work done faster. For example, the Product Owner for the e-commerce site wants the website to launch before tomorrow. Developers tend to be overly optimistic. Rarely do developers acknowledge the physical limitations of reality. So project stakeholders and the developer team often collude to delude themselves about how much work can be done and how quickly. Too many software projects begin in a state of optimism and end in frustration as deadlines zoom by.

    In Scrum, this problem is overcome by calculating a number called the Team Velocity. The Team Velocity is a measure of the average number of Story Points which a team has completed in previous Sprints. Knowing the Team Velocity is important during the Sprint Planning meeting, when the Product Owner and the developer team work together to determine the number of stories which can be completed in the next Sprint. If you know the Team Velocity, then you can avoid committing to do more work than the team has been able to accomplish in the past, and your team is much more likely to complete all of the work required for the next Sprint.

    Scrum Master

    There are three roles in Scrum: the Product Owner, the developer team, and the Scrum Master. I’ve already discussed the Product Owner. The Product Owner is the one and only person who maintains the Product Backlog and prioritizes the stories. I’ve also described the role of the developer team. The members of the developer team do the work of implementing the stories by breaking the stories into tasks. The final role, which I have not discussed, is the role of the Scrum Master.

    The Scrum Master is responsible for ensuring that the team is following the Scrum process. For example, the Scrum Master is responsible for making sure that there is a Daily Scrum meeting and that everyone answers the standard three questions. The Scrum Master is also responsible for removing (non-technical) impediments which the team might encounter. For example, if the team cannot start work until everyone installs the latest version of Microsoft Visual Studio, then the Scrum Master has the responsibility of working with management to get the latest version of Visual Studio as quickly as possible. The Scrum Master can be a member of the developer team. Furthermore, different people can take on the role of the Scrum Master over time. The Scrum Master, however, cannot be the same person as the Product Owner.

    Using SonicAgile

    SonicAgile (SonicAgile.com) is an online tool which you can use to manage your projects using Scrum. You can use the SonicAgile Product Backlog to create a prioritized list of stories. You can estimate the size of the Stories using different Story Point units such as Shirt Sizes and Coffee Cup sizes. You can use SonicAgile during the Sprint Planning meeting to select the Stories that you want to complete during a particular Sprint. You can configure Sprints to be any length of time. SonicAgile calculates Team Velocity automatically and displays a warning when you add too many stories to a Sprint. In other words, it warns you when it thinks you are overcommitting in a Sprint.

    SonicAgile also includes a Scrumboard which displays the list of Stories selected for a Sprint and the tasks associated with each story. You can drag tasks from one task state to another. Finally, SonicAgile enables you to generate Release Burndown and Sprint Burndown charts. You can use these charts to view the progress of your team. To learn more about SonicAgile, visit SonicAgile.com.

    Summary

    In this post, I described many of the basic concepts of Scrum. You learned how a Product Owner uses a Product Backlog to create a prioritized list of tasks. I explained why work is completed in Sprints so the developer team can be more productive. I also explained how a developer team uses the Daily Scrum to coordinate their work. You learned how the developer team uses a Scrumboard to see, at a glance, who is working on what and the state of each task. I also discussed Burndown charts. You learned how you can use both Release and Sprint Burndown charts to track team progress in completing a project. Finally, I described the crucial role of the Scrum Master – the person who is responsible for ensuring that the rules of Scrum are being followed.

    My goal was not to describe all of the concepts of Scrum. This post was intended to be an introductory overview. For a comprehensive explanation of Scrum, I recommend reading Ken Schwaber’s book Agile Project Management with Scrum: http://www.amazon.com/Agile-Project-Management-Microsoft-Professional/dp/073561993X/ref=la_B001H6ODMC_1_1?ie=UTF8&qid=1345224000&sr=1-1

  • How to safely reboot via First Boot script

    - by unixman
    With the cost and performance benefits of the SPARC T4 and SPARC T5 systems undeniably validated, the banking sector is actively moving to Solaris 11. I was recently asked to help a banking customer of ours look at migrating some of their Solaris 10 logic over to Solaris 11. While we've introduced a number of holistic improvements in Solaris 11, in terms of how we ease long-term software lifecycle management, it is important to appreciate that customers may not be able to move all of their Solaris 10 scripts and procedures at once; there are years of scripts that reflect fine-tuned requirements of proprietary banking software that gets layered on top of the operating system. One of these requirements is to go through a cycle of reboots, after the system is installed, in order to ensure appropriate software dependencies and various configuration files are in place. While Solaris 10 introduced a facility that aids here, namely SMF, many of our customers simply haven't yet taken the time to take advantage of this - proceeding with logic that, while functional, without further analysis has an appearance of not being optimal in terms of taking advantage of all the niceties bundled in Solaris 11 at no extra cost.

    When looking at Solaris 11, we recognize that one of the vehicles that bridges the gap between getting the operating system image payload delivered and the customized banking software installed is the notion of a First Boot script. I had a working example of this at one of the Oracle OpenWorld sessions a few years ago - we've since improved our documentation and have introduced sections where this is described in better detail. If you're looking at this for the first time and you've not worked with IPS and SMF previously, you might get the sense that the tasks are daunting. There is a set of technologies involved that are jointly engineered in order to make the process reliable, predictable and extensible. As you go down the path of writing your first boot script, you'll be faced with a need to wrap it into an SMF service and then package it into an IPS package. The IPS package would then need to be placed onto your IPS repository, in order to subsequently be made available to all of your AI (Automated Install) clients (i.e. the systems that you're installing Solaris and your software onto).

    With this blog post, I wanted to create a single place that outlines the entire process (simplistically), and provide a hint of how a good old "at" command may come in handy for forcing an initial reboot. The syntax and references to commands here are based on running this on a version of Solaris 11 that has been updated since its initial release in 2011 (i.e. I am writing this on Solaris 11.1).

    Assuming you've built an AI server (see this How To article for an example), you might be asking yourself: "Ok, I've got some logic that I need executed AFTER Solaris is deployed and I need my own little script that would make that happen. How do I go about hooking that script into the Solaris 11 AI framework?" You might start here, in Chapter 13 of the "Installing Oracle Solaris 11.1 Systems" guide, which talks about "Running a Custom Script During First Boot". And as you do, you'll be confronted with a command that might be unfamiliar to you if you're new to Solaris 11, like our dear new friend: svcbundle.

    svcbundle is an aide to creating manifests and profiles. It is awesome, but don't let its awesomeness overwhelm you. (See this How To article by my colleague Glynn Foster for a nice working example.) In order to get your script's logic integrated into the Solaris 11 deployment process, you need to wrap your (shell) script into two manifests - an SMF service manifest and an IPS package manifest. ...and if you're new to XML, well then -- buckle up.

    We have some examples of small first boot scripts shown here, as templates to build upon. Necessary structure of the script, particularly in leveraging SMF interfaces, is key. I won't go into that here, as that is covered nicely in the doc link above. Let's say your script ends up looking like this (btw: if things appear to be cut off in your browser, just select them, copy and paste into your editor and it'll be grabbed - the source gets captured even though the browser may not render it "correctly" - ah, computers):

        #!/bin/sh
        # Load SMF shell support definitions
        . /lib/svc/share/smf_include.sh

        # If nothing to do, exit with temporary disable
        completed=`svcprop -p config/completed site/first-boot-script-svc:default`
        [ "${completed}" = "true" ] && \
            smf_method_exit $SMF_EXIT_TEMP_DISABLE completed "Configuration completed"

        # Obtain the active BE name from beadm: The active BE on reboot has an R in
        # the third column of 'beadm list' output. Its name is in column one.
        bename=`beadm list -Hd | nawk -F ';' '$3 ~ /R/ {print $1}'`
        beadm create ${bename}.orig
        echo "Original boot environment saved as ${bename}.orig"

        # ---- Place your one-time configuration tasks here ----

        # For example, if you have to pull some files from your own pre-existing system:
        /usr/bin/wget -P /var/tmp/ $PULL_DOWN_ADDITIONAL_SCRIPTS_FROM_A_CORPORATE_SYSTEM
        /usr/bin/chmod 755 /var/tmp/$SCRIPTS_THAT_GOT_PULLED_DOWN_IN_STEP_ABOVE
        # Clearly the above 2 lines represent some logic that you'd have to customize
        # to fit your needs.
        #
        # Perhaps additional things you may want to do here might be of use, like
        # (gasp!) configuring ssh server for root login and X11 forwarding (for
        # testing), and the like...
        #
        # Oh and by the way, after we're done executing all of our proprietary scripts
        # we need to reboot the system in accordance with our operational software
        # requirements to ensure all layered bits get initialized properly and pull in
        # their own modules and components in the right sequence, subsequently.

        # We need to set a "time bomb" reboot that takes place upon completion of this
        # script. We already know that *this* script depends on the multi-user-server
        # SMF milestone, so it should be safe for us to schedule a reboot for 5 minutes
        # from now. The "at" job gets scheduled in the queue while our little script
        # continues thru the rest of the logic.
        /usr/bin/at now + 5 minutes <<REBOOT
        /usr/bin/sync
        /usr/sbin/reboot
        REBOOT

        # ---- End of your customizations ----

        # Record that this script's work is done
        svccfg -s site/first-boot-script-svc:default setprop config/completed = true
        svcadm refresh site/first-boot-script-svc:default
        smf_method_exit $SMF_EXIT_TEMP_DISABLE method_completed "Configuration completed"

    ...and you're happy with it and are ready to move on. Where do you go and what do you do? The next step is creating the IPS package for your script. Since running the logic of your script constitutes a service, you need to create a service manifest. This is described here, in the middle of Chapter 13, "Creating an IPS package for the script and service". Assuming the name of your shell script is first-boot-script.sh, you could end up doing the following:

        $ cd some_working_directory_for_this_project
        $ mkdir -p proto/lib/svc/manifest/site
        $ mkdir -p proto/opt/site
        $ cp first-boot-script.sh proto/opt/site

    Then you would create the service manifest file like so:

        $ svcbundle -s service-name=site/first-boot-script-svc \
            -s start-method=/opt/site/first-boot-script.sh \
            -s instance-property=config:completed:boolean:false -o \
            first-boot-script-svc-manifest.xml

    ...as described here, and place it into the directory hierarchy above. But before you place it into the directory, make sure to inspect the manifest and adjust the appropriate service dependencies. That is to say, you want to properly specify what milestone should be reached before your service runs. There's a <dependency> section that looks like this, before you modify it:

        <dependency restart_on="none" type="service"
            name="multi_user_dependency" grouping="require_all">
            <service_fmri value="svc:/milestone/multi-user"/>
        </dependency>

    So if you'd like to have your service run AFTER the multi-user-server milestone has been reached (i.e. later, as multi-user-server has more dependencies than multi-user, and our intent to reboot the system may have significant ramifications if done prematurely), you would modify that section to read:

        <dependency restart_on="none" type="service"
            name="multi_user_server_dependency" grouping="require_all">
            <service_fmri value="svc:/milestone/multi-user-server"/>
        </dependency>

    Save the file and validate it:

        $ svccfg validate first-boot-script-svc-manifest.xml

    Assuming there are no errors returned, copy the file over into the directory hierarchy:

        $ cp first-boot-script-svc-manifest.xml proto/lib/svc/manifest/site

    Now that we've created the service manifest (.xml), create the package manifest (.p5m) file, named first-boot-script.p5m. Populate it as follows:

        set name=pkg.fmri value=first-boot-script@1.0,5.11-0
        set name=pkg.summary value="AI first-boot script"
        set name=pkg.description value="Script that runs at first boot after AI installation"
        set name=info.classification value=\
            "org.opensolaris.category.2008:System/Administration and Configuration"
        file lib/svc/manifest/site/first-boot-script-svc-manifest.xml \
            path=lib/svc/manifest/site/first-boot-script-svc-manifest.xml owner=root \
            group=sys mode=0444
        dir path=opt/site owner=root group=sys mode=0755
        file opt/site/first-boot-script.sh path=opt/site/first-boot-script.sh \
            owner=root group=sys mode=0555

    Now we are going to publish this package into an IPS repository. If you don't have one yet, don't worry. You have two choices: you can either publish this package into your mirror of the Oracle Solaris IPS repo or create your own customized repo. The best practice is to create your own customized repo, leaving your mirror of the Oracle Solaris IPS repo untouched. From this point, you have two choices as well - you can either create a repo that will be accessible by your clients via HTTP or via NFS. Since HTTP is how the default Solaris repo is accessed, we'll go with HTTP for your own IPS repo. This nice and comprehensive How To by Albert White describes how to create multiple internal IPS repos for Solaris 11. We'll zero in on the basic elements for our needs here.

    We'll create the IPS repo directory structure hanging off a separate ZFS file system, and we'll tie it into an instance of pkg.depotd. We do this because we want our IPS repo to be accessible to our AI clients through HTTP, and the pkg.depotd SMF service bundled in Solaris 11 can help us do this. We proceed as follows:

        # zfs create rpool/export/MyIPSrepo
        # pkgrepo create /export/MyIPSrepo
        # svccfg -s pkg/server add MyIPSrepo
        # svccfg -s pkg/server:MyIPSrepo addpg pkg application
        # svccfg -s pkg/server:MyIPSrepo setprop pkg/port=10081
        # svccfg -s pkg/server:MyIPSrepo setprop pkg/inst_root=/export/MyIPSrepo
        # svccfg -s pkg/server:MyIPSrepo addpg general framework
        # svccfg -s pkg/server:MyIPSrepo addpropvalue general/complete astring: MyIPSrepo
        # svccfg -s pkg/server:MyIPSrepo addpropvalue general/enabled boolean: true
        # svccfg -s pkg/server:MyIPSrepo setprop pkg/readonly=true
        # svccfg -s pkg/server:MyIPSrepo setprop pkg/proxy_base = astring: http://your_internal_websrvr/MyIPSrepo
        # svccfg -s pkg/server:MyIPSrepo setprop pkg/threads = 200
        # svcadm refresh application/pkg/server:MyIPSrepo
        # svcadm enable application/pkg/server:MyIPSrepo

    Now that the IPS repo is created, we need to publish our package into it:

        # pkgsend publish -d ./proto -s /export/MyIPSrepo first-boot-script.p5m

    If you find yourself making changes to your script, remember to up-rev the version in the .p5m file (which is your IPS package manifest), and re-publish the IPS package.

    Next, you need to go to your AI install server (which might be the same machine) and modify the AI manifest to include a reference to your newly created package. We do that by listing an additional publisher, which would look like this (replacing the IP address and port with your own, from the "svccfg" commands up above):

        <publisher name="firstboot">
            <origin name="http://192.168.1.222:10081"/>
        </publisher>

    Further down, in the <software_data action="install"> section, add:

        <name>pkg:/first-boot-script</name>

    Make sure to update your Automated Install service with the new AI manifest via the installadm update-manifest command. Don't forget to boot your client from the network to watch the entire process unfold and your script get tested. Once the system makes the initial reboot, the first boot script will be executed and whatever logic you've specified in it should be executed too, followed by a nice reboot. When the system comes up, your service should stay in a disabled state, as specified by the trailing lines of your SMF script - this is normal and should be left as is, as it helps provide an auditing trail for you.

    Because the reboot is quite a significant action for the system, you may want to add additional logic to the script that places, and then checks for the presence of, certain lock files in order to avoid doing a reboot unnecessarily. You may also want to, alternatively, remove the SMF service entirely - if you're unsure of the potential for someone to try and accidentally enable that service - even though its role in life is to only run once, upon the system's first boot.

    That is how I spent a good chunk of my pre-Halloween time this week, hope yours was just as SPARCkly^H^H^H^H fun!

  • sudo port install arm-elf-gcc3 fails with "No defined site for tag: gcc…"

    - by Scott Bayes
    I'm trying to get the ARM plugin for Eclipse (http://sourceforge.net/projects/gnuarmeclipse/) going on an iMac i7, OS X 10.6.3, Xcode 3.2.2 (I don't want to upgrade during my project). The plugin needs (IIRC) arm-elf-gcc3, which needs DarwinPorts for an "easy" install. Of course, probably due to leftovers from when I moved from my old MacBook to the iMac, DarwinPorts 1.8.2 wouldn't install until I built 1.7.1 from source and installed it. DarwinPorts 1.8.1 appears to have been properly installed, but sudo port install arm-elf-gcc3 led to 5-10 minutes of dependency installs, then the following, produced with port -d (starting from the last dependency's completion, for brevity):

        DEBUG: Found Dependency: receipt exists for gettext
        DEBUG: Executing org.macports.main (arm-elf-gcc3)
        --->  Fetching arm-elf-gcc3
        DEBUG: Executing org.macports.fetch (arm-elf-gcc3)
        --->  gcc-3.4.6.tar.bz2 doesn't seem to exist in /opt/local/var/macports/distfiles/gcc
        Error: No defined site for tag: gcc, using master_sites
        Error: Target org.macports.fetch returned: can't read "host": no such variable
        DEBUG: Backtrace: can't read "host": no such variable
            while executing "info exists seen($host)"
            (procedure "sortsites" line 25)
            invoked from within "sortsites fetch_urls"
            (procedure "portfetch::fetchfiles" line 49)
            invoked from within "portfetch::fetchfiles"
            (procedure "portfetch::fetch_main" line 16)
            invoked from within "$procedure $targetname"
        Warning: the following items did not execute (for arm-elf-gcc3):
            org.macports.activate org.macports.fetch org.macports.extract
            org.macports.checksum org.macports.patch org.macports.configure
            org.macports.build org.macports.destroot org.macports.install
        Error: Status 1 encountered during processing.

    (Sorry if that's a mess; neither blockquote nor code-sample tags seem to properly display text cut and pasted from Terminal.app in the preview window.)

    Can anyone advise me on how to get around this (or how to build/install arm-elf-gcc3 from source if necessary)? None of the DarwinPorts FAQs or forums mentioned arm-elf-gcc3 anywhere that I saw.
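
    A common first step for fetch-phase failures like "No defined site for tag" (a suggestion, not from the original post; the error often points at a stale or inconsistent ports tree, which would fit the mixed 1.7.1/1.8.x install described above) is to resync everything and retry from a clean state:

        sudo port selfupdate              # update the MacPorts base and ports tree
        sudo port clean --all arm-elf-gcc3
        sudo port install arm-elf-gcc3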

  • Using SQL Alchemy and pyodbc with IronPython 2.6.1

    - by beargle
    I'm using IronPython and the clr module to retrieve SQL Server information via SMO. I'd like to retrieve/store this data in a SQL Server database using SQLAlchemy, but am having some trouble loading the pyodbc module. Here's the setup:

    - IronPython 2.6.1 (installed at D:\Program Files\IronPython)
    - CPython 2.6.5 (installed at D:\Python26)
    - SQLAlchemy 0.6.1 (installed at D:\Python26\Lib\site-packages\sqlalchemy)
    - pyodbc 2.1.7 (installed at D:\Python26\Lib\site-packages)

    I have these entries in the IronPython site.py to import CPython standard and third-party libraries:

        # Add CPython standard libs and DLLs
        import sys
        sys.path.append(r"D:\Python26\Lib")
        sys.path.append(r"D:\Python26\DLLs")
        sys.path.append(r"D:\Python26\lib-tk")
        sys.path.append(r"D:\Python26")

        # Add CPython third-party libs
        sys.path.append(r"D:\Python26\Lib\site-packages")

        # sqlite3
        sys.path.append(r"D:\Python26\Lib\sqlite3")

        # Add SQL Server SMO
        sys.path.append(r"D:\Program Files\Microsoft SQL Server\100\SDK\Assemblies")
        import clr
        clr.AddReferenceToFile('Microsoft.SqlServer.Smo.dll')
        clr.AddReferenceToFile('Microsoft.SqlServer.SqlEnum.dll')
        clr.AddReferenceToFile('Microsoft.SqlServer.ConnectionInfo.dll')

    SQLAlchemy imports OK in IronPython, but I receive this error message when trying to connect to SQL Server:

        IronPython 2.6.1 (2.6.10920.0) on .NET 2.0.50727.3607
        Type "help", "copyright", "credits" or "license" for more information.
        >>> import sqlalchemy
        >>> e = sqlalchemy.MetaData("mssql://")
        Traceback (most recent call last):
          File "<stdin>", line 1, in <module>
          File "D:\Python26\Lib\site-packages\sqlalchemy\schema.py", line 1780, in __init__
          File "D:\Python26\Lib\site-packages\sqlalchemy\schema.py", line 1828, in _bind_to
          File "D:\Python26\Lib\site-packages\sqlalchemy\engine\__init__.py", line 241, in create_engine
          File "D:\Python26\Lib\site-packages\sqlalchemy\engine\strategies.py", line 60, in create
          File "D:\Python26\Lib\site-packages\sqlalchemy\connectors\pyodbc.py", line 29, in dbapi
        ImportError: No module named pyodbc

    This code works just fine in CPython, but it looks like the pyodbc module isn't accessible from IronPython. Any suggestions? I realize that this may not be the best way to approach the problem, so I'm open to tackling this a different way. I just wanted to get some experience with using SQLAlchemy and pyodbc.

  • Possible to have a 'publish to Facebook' button on my site?

    - by Haroldo
    I'm building a music events website and want to have a 'share this event' button which publishes the event details on Facebook. This tool looks like exactly what I want: http://developers.facebook.com/tools.php?connect_wizard&wizard=stream_publish

    However, if I copy the code snippet to a new file on my site, it doesn't work. I'm assuming there are a few lines of extra PHP/JS I need somewhere? So far I have:

        <body>
        <script src="http://static.ak.connect.facebook.com/js/api_lib/v0.4/FeatureLoader.js.php" type="text/javascript"></script>
        <script src="http://static.ak.connect.facebook.com/connect.php/en_GB" type="text/javascript"></script>
        <script type="text/javascript">FB.init("89bb37189bede9e30eb07a66b9a1c52a");</script>
        <script type="text/javascript">
        function callPublish(msg, attachment, action_link) {
            FB.ensureInit(function () {
                FB.Connect.streamPublish('', attachment, action_link);
            });
        }
        </script>
        <input type="button" onclick="callPublish('',{'name':'jkhkjhkjh','href':'http://www.headfirstbristol.co.uk/','description':'jhfg jkdfgkjdfgjkdfkgdfg df gdg dfg dfg dfg dfg dfg dfg dfg','media':[{'type':'image','src':'http://www.headfirstbristol.co.uk/site_files/images/hf-logo.jpg','href':'http://www.headfirstbristol.co.uk/'}]},null);return false;" value="Preview Dialog" />

  • python-social-auth AuthCanceled exception

    - by vero4ka
    I'm using python-social-auth in my Django application for authentication via Facebook. But when a user tries to log in, is redirected to the Facebook app page, and clicks the "Cancel" button, the following exception appears:

        ERROR 2014-01-03 15:32:15,308 base :: Internal Server Error: /complete/facebook/
        Traceback (most recent call last):
          File "/home/vera/virtualenv/myapp/local/lib/python2.7/site-packages/django/core/handlers/base.py", line 114, in get_response
            response = wrapped_callback(request, *callback_args, **callback_kwargs)
          File "/home/vera/virtualenv/myapp/local/lib/python2.7/site-packages/django/views/decorators/csrf.py", line 57, in wrapped_view
            return view_func(*args, **kwargs)
          File "/home/vera/virtualenv/myapp/local/lib/python2.7/site-packages/social/apps/django_app/utils.py", line 45, in wrapper
            return func(request, backend, *args, **kwargs)
          File "/home/vera/virtualenv/myapp/local/lib/python2.7/site-packages/social/apps/django_app/views.py", line 21, in complete
            redirect_name=REDIRECT_FIELD_NAME, *args, **kwargs)
          File "/home/vera/virtualenv/myapp/local/lib/python2.7/site-packages/social/actions.py", line 54, in do_complete
            *args, **kwargs)
          File "/home/vera/virtualenv/myapp/local/lib/python2.7/site-packages/social/strategies/base.py", line 62, in complete
            return self.backend.auth_complete(*args, **kwargs)
          File "/home/vera/virtualenv/myapp/local/lib/python2.7/site-packages/social/backends/facebook.py", line 63, in auth_complete
            self.process_error(self.data)
          File "/home/vera/virtualenv/myapp/local/lib/python2.7/site-packages/social/backends/facebook.py", line 56, in process_error
            super(FacebookOAuth2, self).process_error(data)
          File "/home/vera/virtualenv/myapp/local/lib/python2.7/site-packages/social/backends/oauth.py", line 312, in process_error
            raise AuthCanceled(self, data.get('error_description', ''))
        AuthCanceled: Authentication process canceled

    Is there any way to catch it in Django?
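
    A sketch of one way to handle it (my addition; python-social-auth also ships its own SocialAuthExceptionMiddleware, which is worth checking first): catch the exception in a small middleware and redirect, assuming the old-style MIDDLEWARE_CLASSES setup that matches this traceback's Django version:

        from django.shortcuts import redirect
        from social.exceptions import AuthCanceled

        class AuthCanceledMiddleware(object):
            """Send users who cancel the Facebook dialog back to a friendly page."""
            def process_exception(self, request, exception):
                if isinstance(exception, AuthCanceled):
                    return redirect('/login/')  # hypothetical URL
                return None  # let other exceptions propagate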

  • ManyToManyField "table exist" error on syncdb

    - by Derek Reynolds
    When I include a ManyToManyField in one of my models, the following error is thrown:

        Traceback (most recent call last):
          File "manage.py", line 11, in <module>
            execute_manager(settings)
          File "/Library/Python/2.6/site-packages/django/core/management/__init__.py", line 362, in execute_manager
            utility.execute()
          File "/Library/Python/2.6/site-packages/django/core/management/__init__.py", line 303, in execute
            self.fetch_command(subcommand).run_from_argv(self.argv)
          File "/Library/Python/2.6/site-packages/django/core/management/base.py", line 195, in run_from_argv
            self.execute(*args, **options.__dict__)
          File "/Library/Python/2.6/site-packages/django/core/management/base.py", line 222, in execute
            output = self.handle(*args, **options)
          File "/Library/Python/2.6/site-packages/django/core/management/base.py", line 351, in handle
            return self.handle_noargs(**options)
          File "/Library/Python/2.6/site-packages/django/core/management/commands/syncdb.py", line 93, in handle_noargs
            cursor.execute(statement)
          File "/Library/Python/2.6/site-packages/django/db/backends/util.py", line 19, in execute
            return self.cursor.execute(sql, params)
          File "/Library/Python/2.6/site-packages/django/db/backends/mysql/base.py", line 84, in execute
            return self.cursor.execute(query, args)
          File "build/bdist.macosx-10.6-universal/egg/MySQLdb/cursors.py", line 173, in execute
          File "build/bdist.macosx-10.6-universal/egg/MySQLdb/connections.py", line 36, in defaulterrorhandler
        _mysql_exceptions.OperationalError: (1050, "Table 'orders_proof_approved_associations' already exists")

    Field definition:

        approved_associations = models.ManyToManyField(Association)

    Everything works fine when I remove the field, and the table is nowhere in sight. Any thoughts as to why this would happen?
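
    One likely explanation (an assumption, not from the original post): an earlier, interrupted syncdb run created the join table and then failed, so MySQL refuses to create it again even though the ORM never shows it. After verifying that the table really is an orphan and holds nothing of value, it can be dropped by hand so syncdb can recreate it:

        -- Run in the MySQL client, only after confirming the table is a leftover
        DROP TABLE orders_proof_approved_associations;

    Another thing worth checking is that the database being inspected is the same one settings.py points at.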

  • No acceleration for OpenGL and ImportError for modules that exist

    - by Aku
    I'm writing a program using wxPython and OpenGL. The program works, but without any antialiasing, and I get these error messages (I'm using Arch Linux):

        INFO:OpenGL.acceleratesupport:No OpenGL_accelerate module loaded: No module named OpenGL_accelerate
        INFO:OpenGL.formathandler:Unable to load registered array format handler numpy:
        Traceback (most recent call last):
          File "/usr/lib/python2.6/site-packages/OpenGL/arrays/formathandler.py", line 44, in loadPlugin
            plugin_class = entrypoint.load()
          File "/usr/lib/python2.6/site-packages/OpenGL/plugins.py", line 14, in load
            return importByName( self.import_path )
          File "/usr/lib/python2.6/site-packages/OpenGL/plugins.py", line 28, in importByName
            module = __import__( ".".join(moduleName), {}, {}, moduleName)
          File "/usr/lib/python2.6/site-packages/OpenGL/arrays/numpymodule.py", line 11, in <module>
            raise ImportError( """No numpy module present: %s"""%(err))
        ImportError: No numpy module present: No module named numpy
        INFO:OpenGL.formathandler:Unable to load registered array format handler numeric:
        Traceback (most recent call last):
          File "/usr/lib/python2.6/site-packages/OpenGL/arrays/formathandler.py", line 44, in loadPlugin
            plugin_class = entrypoint.load()
          File "/usr/lib/python2.6/site-packages/OpenGL/plugins.py", line 14, in load
            return importByName( self.import_path )
          File "/usr/lib/python2.6/site-packages/OpenGL/plugins.py", line 28, in importByName
            module = __import__( ".".join(moduleName), {}, {}, moduleName)
          File "/usr/lib/python2.6/site-packages/OpenGL/arrays/numeric.py", line 15, in <module>
            raise ImportError( """No Numeric module present: %s"""%(err))
        ImportError: No Numeric module present: No module named Numeric

    However, when I look into my site-packages folder, I see those modules present there. I have a wxPython demo program that uses GLCanvas, and it works fine, without any errors. My program is quite similar to the GLCanvas demo, involving just translations, rotations, drawing quads and some basic lighting. What am I doing wrong here? (The code is over 200 lines; if necessary I'll edit this and put it here.)

  • Django upload failing on request data read error

    - by Jake
    Hi all,

    I've got a Django app that accepts uploads from jQuery Uploadify, a jQuery plugin that uses Flash to upload files and show a progress bar. Files under about 150 KB work, but bigger files always fail, almost always at around 192 KB (that's 3 chunks), sometimes at around 160 KB. The exception I get is below.

        exceptions.IOError
        request data read error

        File "/usr/lib/python2.4/site-packages/django/core/handlers/wsgi.py", line 171, in _get_post
          self._load_post_and_files()
        File "/usr/lib/python2.4/site-packages/django/core/handlers/wsgi.py", line 137, in _load_post_and_files
          self._post, self._files = self.parse_file_upload(self.META, self.environ['wsgi.input'])
        File "/usr/lib/python2.4/site-packages/django/http/__init__.py", line 124, in parse_file_upload
          return parser.parse()
        File "/usr/lib/python2.4/site-packages/django/http/multipartparser.py", line 192, in parse
          for chunk in field_stream:
        File "/usr/lib/python2.4/site-packages/django/http/multipartparser.py", line 314, in next
          output = self._producer.next()
        File "/usr/lib/python2.4/site-packages/django/http/multipartparser.py", line 468, in next
          for bytes in stream:
        File "/usr/lib/python2.4/site-packages/django/http/multipartparser.py", line 314, in next
          output = self._producer.next()
        File "/usr/lib/python2.4/site-packages/django/http/multipartparser.py", line 375, in next
          data = self.flo.read(self.chunk_size)
        File "/usr/lib/python2.4/site-packages/django/http/multipartparser.py", line 405, in read
          return self._file.read(num_bytes)

    When running locally on the Django development server, big files work. I've tried setting

        FILE_UPLOAD_HANDLERS = ("django.core.files.uploadhandler.TemporaryFileUploadHandler",)

    in case it was the memory upload handler, but it made no difference. Does anyone know how to fix this?
