Search Results

Search found 45469 results on 1819 pages for 'open source contributions'.


  • Proper line-ending for an open-source PHP project

    - by Mahdi
    What are the proper line-ending preferences for an open-source web project? Obviously it includes PHP, HTML, CSS and JavaScript source code. The source is managed via GitHub now, and the team includes Windows (8 & 7), Linux (Ubuntu) and OS X developers, which means all the major operating systems are covered. P.S. We are using Windows CRLF line endings, plus UTF-8 without BOM, right now without facing any problems; however, I think it might be better to use the *nix/OS X LF style. I have heard stories about problems caused by the extra CR on Linux or OS X.
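    A common way to settle this with Git, sketched here as a suggestion rather than anything stated in the question, is to commit LF to the repository and let Git normalize per platform via a .gitattributes file at the repository root:

        # .gitattributes -- store LF in the repo, normalize text files on checkout
        * text=auto
        # force LF for the web source files regardless of the committer's OS
        *.php  text eol=lf
        *.html text eol=lf
        *.css  text eol=lf
        *.js   text eol=lf

    Windows developers who prefer CRLF working copies can additionally run git config --global core.autocrlf true; the repository contents stay LF either way.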


  • Replacing sick NTP server source and re-synching (with internal time currently 2 minutes late)

    - by l0c0b0x
    One of the external NTP servers (currently the primary one) we're using as a source seems not to be responding to NTP requests. Unfortunately, our core router (a Cisco 6509) has not failed over to the secondary external NTP server as expected. As a result, the core router, which is pretty much our main internal NTP source, is 2 minutes behind. I'm planning to fix this by making the working external NTP server the primary source. I'm wondering: how much will a 2-minute change affect my users and services? Especially since, these days, we rely heavily on certificate-based authentication. We're a Windows/Cisco shop. Internal NTP setup:
    - Core Router 1 (Cisco 6509): syncs with two external NTP servers (of which the primary is not responding to NTP requests)
    - Core Router 2: syncs with Core Router 1 (primary) and the working external server (secondary)
    - Other Cisco network devices: sync with Core Router 1 (primary) and Core Router 2 (secondary)
    - Domain controller(s): sync with Core Router 1
    - All Windows clients/servers: sync with the domain controllers
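    A minimal IOS sketch of the swap (the server addresses are placeholders, not from the question):

        ! on Core Router 1 (Cisco 6509)
        configure terminal
         no ntp server 192.0.2.10         ! remove the dead primary (hypothetical address)
         ntp server 192.0.2.20 prefer     ! promote the working server (hypothetical address)
        end
        show ntp associations             ! verify a peer is selected (marked with *)
        show ntp status                   ! confirm "Clock is synchronized"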


  • apt-get for a package when "contrib/source/Sources" is not found

    - by Stuart Woodward
    I tried to install Webmin on Ubuntu by following the instructions on http://www.webmin.com/deb.html. Using the Update Manager GUI I added the repositories and the key:

        deb http://download.webmin.com/download/repository sarge contrib
        deb http://webmin.mirror.somersettechsolutions.co.uk/repository sarge contrib

    However, when I do:

        apt-get update
        apt-get install webmin

    I get the error:

        W: Failed to fetch http://download.webmin.com/download/repository/dists/sarge/Release  Unable to find expected entry 'contrib/source/Sources' in Release file (Wrong sources.list entry or malformed file)
        W: Failed to fetch http://webmin.mirror.somersettechsolutions.co.uk/repository/dists/sarge/Release  Unable to find expected entry 'contrib/source/Sources' in Release file (Wrong sources.list entry or malformed file)

    Looking at the page at that URL I can see:

        7029066c27ac6f5ef18d660d5741979a 20 contrib/source/Sources.gz

    Is the error caused by the fact that the Sources are compressed with gzip, or am I doing something wrong?
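    A likely cause, offered as an assumption rather than anything stated above: the Update Manager GUI typically adds a matching deb-src line for each repository, and the Webmin repository publishes no source package index that apt can use, hence the missing 'contrib/source/Sources'. A minimal fix sketch:

        # in /etc/apt/sources.list (or the GUI's "Source code" entry):
        deb http://download.webmin.com/download/repository sarge contrib
        # deb-src http://download.webmin.com/download/repository sarge contrib   <- comment out or remove

        sudo apt-get update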


  • Looking for an open source real-time network analysis program

    - by JrSysAdmin
    Can somebody recommend an open source real-time network analysis program? What I'm looking for is a program that displays a graph of bandwidth usage by IP within our internal network, one that can be viewed quickly whenever we need it (typically when we want to find out who is using large amounts of bandwidth and slowing down the network). Ideally we simply want to hook up a monitor on the wall of our server room to a system whose NIC runs in promiscuous mode, logging all network activity 24/7 in a visual form that is easy to read at a glance. Open source is preferred, since I have no budget for this project and I prefer open source projects in general. I'd also prefer something available for CentOS, but any Linux distro or Windows OS would be acceptable. Thanks!


  • Discount Codes Galore

    - by Cassandra Clark
    Saving money is at the top of everyone's list right now. With this in mind, the Oracle Technology Network team has compiled a list of discounts available at the Oracle Store. We are also introducing an Oracle Technology Network member discount from O'Reilly Media. If you subscribe to any of the Oracle Technology newsletters, you also saw special discounts from CRC Press, Packt Publishing and Apress. We are going to do our best to bring you more offers like this every month. Now on to the discounts...
    Oracle Store offers (all below expire May 31st, 2010; don't miss out!):
    - Expand Your Productivity with Oracle Open Office and Save 15%. Enter OTNOffice at checkout. Buy Now!
    - Drive Business Agility and Performance with Industry-leading Oracle Database Management Packs. Save 10% when you purchase them at the Oracle Store. Enter OTNDBMP at checkout. Buy Now!
    - 15% Savings on Oracle Virtualization and Unbreakable Linux Support at the Oracle Store. Enter code OTNLinuxVM at checkout. Buy Now!
    - 20% Savings on Oracle SQL Developer Data Modeler. Use OTNSQL at checkout. Buy Now!
    O'Reilly Oracle Technology Network Member Offer: O'Reilly is generously offering Oracle Technology Network members 35% off print books and 40% off eBooks. Browse Oracle titles at http://oreilly.com/pub/topic/oracle. Use discount code TECNT at checkout.


  • Using SQL Source Control with Fortress or Vault – Part 2

    - by AjarnMark
    In Part 1, I started talking about using Red-Gate’s newest version of SQL Source Control and how I really like it as a viable method to source control your database development. It looks like this is going to turn into a little series where I will explain how we have done things in the past, and how life is different with SQL Source Control. I will also explain some of my philosophy and methodology around deployment with these tools. But for now, let’s talk about some of the good and the bad of the tool itself.

    More Kudos and Features

    I mentioned previously how impressed I was with the responsiveness of Red-Gate’s team. I have been having an ongoing email conversation with Gyorgy Pocsi, and as I have run into problems or requested that things behave a little differently, it has not been more than a day or two before a new build is ready for me to download and test. Quite impressive! I’m sure many of the requests I put in were already in the plans, so I can’t really take credit for them, but throughout this conversation Red-Gate has implemented several features that were not in the first Early Access version. Those include:
    - Honoring the Fortress configuration option to require Work Item (Bug) IDs on check-ins.
    - Adding the check-in comment text as a comment to the Work Item.
    - Adding the list of checked-in files, along with the Fortress links for automatic History and DIFF views.
    - Updating the status of a Work Item on check-in (e.g. setting the item to Complete or, in our case, “Dev-Complete”).
    - Support for the Fortress 2.0 API, not just the Vault Pro 5.1 API (see the later notes regarding support for Fortress 2.0).
    These were all features that I felt we really needed in place before I could honestly consider converting my team to using SQL Source Control on a regular basis. Now that I have them, my only excuse is not wanting to switch boats on the team mid-stream. So when we wrap up our current release in a few weeks, we will make the jump. In the meantime, I will continue to bang on it to make sure it is stable. It passed one test for stability when I did a test load of one of our larger database schemas into Fortress with SQL Source Control. That database has about 150 tables, 200 user-defined functions and nearly 900 stored procedures. The initial load to source control went smoothly and took just a brief amount of time.

    Warnings

    Remember that this IS still in pre-release stage, and while I have not had any problems after that first hiccup I wrote about last time, you still need to treat it with a healthy respect. As I understand it, the RTM is targeted for February. There are a couple more features that I hope make it into the final release version; if they don’t, they’ll probably be coming soon thereafter. Those are:
    - A Browse feature to let me look up the Work Item ID instead of having to remember it or look back at my Item details. This is just a matter of convenience. I normally have my Work Item list open anyway, so I can easily look it up, but hey, why not make it even easier.
    - A multi-line comment area. The current space for writing check-in comments is a single-line text box. I would like a multi-line space, as I sometimes write lengthy commentary. But I recognize that it is a struggle to get most developers to put in more than the word “fixed” as their comment, so this meets the need of the majority as-is, and it’s not a show-stopper for us.
    - Merge. SQL Source Control currently does not have a Merge feature. If two or more people make changes to the same database object, you will get a warning of the conflict and have to choose which one wins (and then manually edit to include the others’ changes). I think it unlikely you will run into actual conflicts in stored procedures and functions, but you might with views or tables. This will be nice to have, but I’m not losing any sleep over it, and I have multiple tools at my disposal to do merges manually, so it is really not a show-stopper for us.
    Automation has its limits. As cool as this automation is, it has its limits, and there are some changes that you will be better off scripting yourself. For example, if you are refactoring table definitions and want to change a column name, you can write that as a quick sp_rename command and preserve the data within that column. But because this tool is looking just at a before and after picture, it cannot tell that you just renamed a column. To the tool, it looks like you dropped one column and added another. This is not a knock against Red-Gate; all automated scripting tools have this issue unless they are actively monitoring your every step to know exactly what you are doing. This means that when you go to deploy your changes, SQL Compare will script the change as a column drop and add, or will attempt to rebuild the entire table. Unfortunately, neither of these approaches will preserve the existing data in that column the way an sp_rename will, so you are better off scripting that change yourself (a quick sketch follows at the end of this post). Thankfully, SQL Compare will produce warnings about the potential loss of data before it does the actual synchronization, and give you a chance to intercept the script and do it yourself.
    Also, please note that the current official word is that SQL Source Control supports Vault Professional 5.1 and later. Vault Professional is the new name for what was previously known as Fortress. (You can read about the name change on SourceGear’s site.) The last version of Fortress was 2.x, and the API for Fortress 2.x is different from the API for Vault Pro. At my company, we are currently running Fortress 2.0, with plans to upgrade to Vault Pro early next year. Gyorgy was able to come up with a work-around for me to be able to use SQL Source Control with Fortress 2.0, even though it is not officially supported. If you are using Fortress 2.0 and want to use SQL Source Control, be aware that this is not officially supported, but it is working for us, and you can probably get the work-around instructions from Red-Gate if you’re really, really nice to them.

    Upcoming Topics

    Some of the other topics I will likely cover in this series over the next few weeks are:
    - How we used to do source control back in the old days (a few weeks ago) before SQL Source Control was available to Vault users
    - What happens when you restore a database that is linked to source control
    - Handling multiple development branches of source code
    - Concurrent development practices and handling conflicts
    - Deployment tips and best practices
    - A recap after using the tool for a while
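    For the column rename described under "Automation has its limits" above, a minimal T-SQL sketch (the table and column names are made up for illustration):

        -- renames the column in place and preserves its data,
        -- unlike a scripted drop-and-add
        EXEC sp_rename 'dbo.Customer.EmalAddress', 'EmailAddress', 'COLUMN';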


  • GitHub: Are there external tools for managing issues list vs. project backlog

    - by DXM
    Recently I posted one of my projects1 on GitHub, and as I was exploring the capabilities of the site, I noticed they have a rather decent issue tracking section. I want to use that section as a) other people can report bugs if they'd like and b) other people can see which bugs I'm aware of. However, as others have noted, the issues list cannot be prioritized in order to create a project backlog. For now my backlog has been a text file, but I'd like to have it integrated so the same information isn't maintained in different places. Having a fully ordered list, which is something we also practice at work, has been very useful, as I can open one file, start with line 1, and fire off 2 or 3 items in one sitting without having to go back to the full issues/stories bucket. GitHub doesn't offer this. What GitHub does offer is a very nice and clean API, so issues can easily be exported into anything else. I've searched to see if there are other websites (like Trello) that integrate with GitHub issues, but did not find anything. Does anyone know of such a product, service or offline tool? Those of you who use GitHub, what is your experience in managing a backlog? I kinda hate the idea of manually managing two disconnected lists like some people seem to be doing with wiki project pages.
    1 - Are shameless plugs allowed on this site? I searched but didn't find a definite answer. If it's bad practice, STOP and don't read further. As a developer I got sick and tired of navigating to the same set of folders 30 times a day, so I wrote a little auto-collapsible utility that gets stuck to the desktop and allows easy access to the folders you constantly use.
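    Since the post leans on the API for export, a hedged one-liner for pulling the open issues as JSON (OWNER and REPO are placeholders):

        curl "https://api.github.com/repos/OWNER/REPO/issues?state=open&per_page=100"

    From there, the backlog ordering can live in whatever tool consumes the JSON.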


  • Cannot open port 3306 on Ubuntu using iptables

    - by user94626
    I am trying to open port 3306 (for remote mysql connections) on my ubuntu 12.04 server machine, but for the life of me I can't get the damned thing to work! Here is what I did:

    1) List current firewall rules:

        $> sudo iptables -nL -v

    output:

        Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
        pkts bytes target prot opt in out source destination
        225 16984 fail2ban-ssh tcp -- * * 0.0.0.0/0 0.0.0.0/0 multiport dports 22
        220 69605 ACCEPT all -- lo * 0.0.0.0/0 0.0.0.0/0
        0 0 REJECT all -- lo * 0.0.0.0/0 127.0.0.0/8 reject-with icmp-port-unreachable
        486 54824 ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED
        1 60 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:80
        19 988 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:443
        1 52 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:22
        0 0 ACCEPT icmp -- * * 0.0.0.0/0 0.0.0.0/0 icmptype 8
        4 208 LOG all -- * * 0.0.0.0/0 0.0.0.0/0 limit: avg 5/min burst 5 LOG flags 0 level 7 prefix "iptables denied: "
        4 208 REJECT all -- * * 0.0.0.0/0 0.0.0.0/0 reject-with icmp-port-unreachable

        Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
        pkts bytes target prot opt in out source destination
        0 0 REJECT all -- * * 0.0.0.0/0 0.0.0.0/0 reject-with icmp-port-unreachable

        Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
        pkts bytes target prot opt in out source destination
        735 182K ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0

        Chain fail2ban-ssh (1 references)
        pkts bytes target prot opt in out source destination
        225 16984 RETURN all -- * * 0.0.0.0/0 0.0.0.0/0

    2) Try to connect from a remote machine:

        $> mysql -u root -p -h x.x.x.x

    output: timeout.... failed to connect

    3) Try to add a new rule to iptables:

        iptables -A INPUT -i eth0 -p tcp -m tcp --dport 3306 -j ACCEPT

    4) Make sure the new rule is added:

        $> sudo iptables -nL -v

    output:

        Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
        pkts bytes target prot opt in out source destination
        359 25972 fail2ban-ssh tcp -- * * 0.0.0.0/0 0.0.0.0/0 multiport dports 22
        251 78665 ACCEPT all -- lo * 0.0.0.0/0 0.0.0.0/0
        0 0 REJECT all -- lo * 0.0.0.0/0 127.0.0.0/8 reject-with icmp-port-unreachable
        628 64420 ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 state RELATED,ESTABLISHED
        1 60 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:80
        19 988 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 tcp dpt:443
        1 52 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 state NEW tcp dpt:22
        0 0 ACCEPT icmp -- * * 0.0.0.0/0 0.0.0.0/0 icmptype 8
        5 260 LOG all -- * * 0.0.0.0/0 0.0.0.0/0 limit: avg 5/min burst 5 LOG flags 0 level 7 prefix "iptables denied: "
        5 260 REJECT all -- * * 0.0.0.0/0 0.0.0.0/0 reject-with icmp-port-unreachable
        0 0 ACCEPT tcp -- eth0 * 0.0.0.0/0 0.0.0.0/0 tcp dpt:3306

        Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
        pkts bytes target prot opt in out source destination
        0 0 REJECT all -- * * 0.0.0.0/0 0.0.0.0/0 reject-with icmp-port-unreachable

        Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
        pkts bytes target prot opt in out source destination
        919 213K ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0

        Chain fail2ban-ssh (1 references)
        pkts bytes target prot opt in out source destination
        359 25972 RETURN all -- * * 0.0.0.0/0 0.0.0.0/0

    which appears to be the case (see the last line in the "Chain INPUT" section).

    5) Try to connect again from the remote machine:

        $> mysql -u root -p -h x.x.x.x

    output: timeout.... failed to connect — failing again.

    6) Try to flush all rules:

        $> sudo iptables -F

    7) This time I CAN CONNECT.

    8) Reboot the server and try to connect: FAILURE.

    I suspect that since the new rule is appended at the end, it has no effect, as there appears to be a "reject all" sort of rule before it. If this is the case, how do I make sure the new rule is added in the right order? Otherwise, what am I missing? Please help.
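    A hedged fix sketch: insert the rule above the catch-all LOG/REJECT pair instead of appending after it (position 9 in the INPUT listing above), then persist it so it survives a reboot:

        # insert at position 9, before the LOG and REJECT rules
        sudo iptables -I INPUT 9 -i eth0 -p tcp --dport 3306 -j ACCEPT
        # confirm the ordering
        sudo iptables -L INPUT -n --line-numbers
        # persist across reboots on Ubuntu 12.04
        sudo apt-get install iptables-persistent
        sudo sh -c 'iptables-save > /etc/iptables/rules.v4'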


  • MvcExtensions - ActionFilter

    - by kazimanzurrashid
    One of the things that people often complain about is dependency injection in action filters. Since the standard way of applying action filters is to decorate either the controller or the action methods, there is no way you can inject dependencies into the action filter constructors. There are quite a few posts on this subject which show property injection with a custom action invoker, but all of them suffer from the same small bug (you will find that BuildUp is called more than once if the filter implements multiple interfaces, e.g. both IActionFilter and IResultFilter). MvcExtensions supports both property injection and a fluent filter configuration API. There are a number of benefits to this fluent filter configuration API over the regular attribute-based filter decoration: you can pass your dependencies in the constructor rather than via a property. Let's say you want to create an action filter which will update the user's last activity date; you can create a filter like the following:

        public class UpdateUserLastActivityAttribute : FilterAttribute, IResultFilter
        {
            public UpdateUserLastActivityAttribute(IUserService userService)
            {
                Check.Argument.IsNotNull(userService, "userService");
                UserService = userService;
            }

            public IUserService UserService { get; private set; }

            public void OnResultExecuting(ResultExecutingContext filterContext)
            {
                // Do nothing, just sleep.
            }

            public void OnResultExecuted(ResultExecutedContext filterContext)
            {
                Check.Argument.IsNotNull(filterContext, "filterContext");
                string userName = filterContext.HttpContext.User.Identity.IsAuthenticated ?
                                  filterContext.HttpContext.User.Identity.Name : null;
                if (!string.IsNullOrEmpty(userName))
                {
                    UserService.UpdateLastActivity(userName);
                }
            }
        }

    As you can see, it is nothing different from a regular filter, except that we are passing the dependency in the constructor. Next, we have to configure for which controller/action methods this filter will execute:

        public class ConfigureFilters : ConfigureFiltersBase
        {
            protected override void Configure(IFilterRegistry registry)
            {
                registry.Register<HomeController, UpdateUserLastActivityAttribute>();
            }
        }

    You can register more than one filter for the same controller/action methods:

        registry.Register<HomeController, UpdateUserLastActivityAttribute, CompressAttribute>();

    You can register the filters for a specific action method instead of the whole controller:

        registry.Register<HomeController, UpdateUserLastActivityAttribute, CompressAttribute>(c => c.Index());

    You can even set various properties of the filter:

        registry.Register<ControlPanelController, CustomAuthorizeAttribute>(
            attribute => { attribute.AllowedRole = Role.Administrator; });

    The fluent filter registration also reduces the number of base controllers in your application. It is very common to create a base controller, decorate it with action filters, and then create concrete controller(s) so that the base controller's action filters are also executed in the concrete controllers. You can do the same with a single-line statement with the fluent filter registration. Registering the filters for all controllers:

        registry.Register<ElmahHandleErrorAttribute>(new TypeCatalogBuilder().Add(GetType().Assembly).Include(type => typeof(Controller).IsAssignableFrom(type)));

    Registering filters for selected controllers:

        registry.Register<ElmahHandleErrorAttribute>(new TypeCatalogBuilder().Add(GetType().Assembly).Include(type => typeof(Controller).IsAssignableFrom(type) && (type.Name.StartsWith("Home") || type.Name.StartsWith("Post"))));

    You can also use the built-in filters in the fluent registration, for example:

        registry.Register<HomeController, OutputCacheAttribute>(attribute => { attribute.Duration = 60; });

    With the fluent filter configuration you can even apply filters to controllers whose source code is not available to you (maybe the controller is part of a third-party component). That's it for today; in the next post we will discuss the model binding support in MvcExtensions. So stay tuned.


  • Who owns the IP rights to software without a written employment contract? Employer or employee? [closed]

    - by P T
    I am a software engineer who got an idea and single-handedly developed an integrated ERP software solution over the past 2 years. I got the idea and coded much of the software in my personal time, using my own resources, but also as an intern/employee at a small wholesale retailer (company A). I had a verbal agreement with the company that I could keep the IP rights to the code and that the company would have "shop rights" to use a copy of the software without restrictions. Part of this agreement was that I was heavily underpaid in exchange for keeping the rights. Recently things started to take a downturn at company A: the company grew fairly large, new head management was formed, and new partners were brought in. The original owners distanced themselves from the business, and the new "greedy" group indicated that they want to claim the IP rights to my software, offering me a contract that would split the IP into 50% co-ownership, completely disregarding the initial verbal agreement. As of now, there is not a single written job description or agreement/contract/policy that I signed with company A; I signed only I-9 and W-4 forms. I now have an opportunity to leave company A and form a new business with 2 partners (company B), obviously using the software as the primary tool. There would be no direct conflict of interest, as company A sells wholesale goods.
    My core question is: who owns the code without a contract, me or company A? (in FL, US)
    Detailed questions:
    - I am familiar with "shop rights", and I don't have any problem leaving a copy of the code with the company for them to use/enhance to run their wholesale business. What worries me: can company A make any legal claims to the software/code/IP and potential derived profits/interests after I leave and form company B?
    - Can applying for a copyright on the code at http://www.copyright.gov in my name prevent legal disputes in the future? Can I use it as evidence in my legal defense? Would adding a note specifying company A as an exclusive license holder clarify the arrangements?
    - If I leave and company A sues me, what evidence would they use against me? On what basis would they sue, since their business is in a completely different industry than software (wholesale goods)? Every single source file was created/stored on my personal computer, with proper documentation including a copyright notice with my credentials (name/email/address/phone). It is also worth noting that I developed a significant part of the software prior to my involvement with company A, as a student.
    - If I am forced to sign a contract and company A doesn't honor the verbal agreement, making claims towards ownership, what can I do to settle the matter legally? I'd like to avoid the legal process altogether, as my budget for court battles is extremely limited at the moment.
    - Would altering the code beyond recognition and using it for company B prevent company A from making any copyright claims?
    My common sense tells me that what I developed is by default mine in terms of IP, unless there is a signed legal agreement stating otherwise. But from what I read online it may be completely backwards, and this really worries me. I understand that this is not legal advice, and I know that to get the ultimate answer I need to hire a lawyer. I am only hoping to get some valuable input/experience/advice/opinions from those who have been in a similar situation or are familiar with the topic. Thank you, PT


  • Open Office crashes, recovers, crashes again

    - by Daniel R Hicks
    After completely reinstalling my laptop due to apparent registry corruption, I've encountered a problem with Open Office: I open a simple Calc spreadsheet and it comes up normally, but then after anywhere from 5 seconds to several minutes (without my even touching the Calc window) OO crashes, then comes back up through recovery. If I let it "recover", it will do so and bring the spreadsheet up again, only to repeat the crash scenario. If I kept clicking "OK" it would apparently do this all day. I reinstalled OO once and the problem went away for a while, but it came back. I then attempted to "reset" my profile (i.e. rename the OO user directory in App Data), but OO crashed during the first startup after that, then resumed the original behavior. If I open the same file using Excel, it complains of errors in the file and "recovers" them, but the "error report" it generates contains no details. If I save the "recovered" file, then OO Calc will open it, but the problem returns after saving again. Any ideas? (The system is Vista SP2, running OO 3.4.1.)
    How to reproduce:
    1. Start Open Office Calc.
    2. Save the workspace as "CrashTest.ods".
    3. From Task Manager, kill Open Office (soffice.exe/bin, one of each).
    4. Double click the saved "CrashTest.ods" in Explorer.
    5. OO puts up a message that recovery will occur; allow it.
    6. When the Calc window comes up, don't touch it; just wait about 10 seconds.
    7. The Calc window closes and OO puts up a message that recovery will occur; from now on the sequence will repeat.
    I suspect this behavior is limited to a few (recent) versions of OO, and very possibly only Calc. Reported as Open Office Bug 1211094. Sigh!! As much as it irritates me, I'm having to switch over to Excel for several things I used to do with Calc. Excel has a miserable UI, but at least it stays up for longer than 10 seconds.


  • The ugly evolution of running a background operation in the context of an ASP.NET app

    - by Jeff
    If you’re one of the two people who have followed my blog for many years, you know that I’ve been going at POP Forums for almost 15 years now. Publishing it as an open source app has been a big help, because it helps me understand how people want to use it, and having it translated into six languages is pretty sweet. Despite this warm and fuzzy group hug, there has been an ugly hack hiding in there for years.
    One of the things we find ourselves wanting to do is hide some kind of regular process inside of an ASP.NET application that runs periodically. The motivation for this has always been that a lot of people simply don’t have a choice, because they’re running the app on shared hosting, or don’t otherwise have access to a box that can run some kind of regular background service. In POP Forums, I “solved” this problem years ago by hiding some static timers in an HttpModule. Truthfully, this works well as long as you don’t run multiple instances of the app, which in the cloud world is always a possibility. With the arrival of WebJobs in Azure, I’m going to solve this problem. This post isn’t about that.
    The other little hacky problem that I “solved” was spawning a background thread to queue emails to subscribed users of the forum. This evolved quite a bit over the years, starting with a long-running page to mail users in real time, when I had only a few hundred. By the time it got into the thousands, or tens of thousands, I needed a better way. What I did was launch a new thread that read all of the user data in, then wrote a queued email to the database (as in, the entire body of the email, every time), with the properly formatted opt-out link. It was super inefficient, but it worked. Then I moved my biggest site using it, CoasterBuzz, to an Azure Website, and it stopped working.
    So let’s start with the first stupid thing I was doing. The new thread was simply created with delegate code inline. As best I can tell, Azure Websites are more aggressive about garbage collection, because that thread didn’t queue even one message. When the calling server response went out of scope, so went the magic background thread. Duh, all I had to do was move the thread to a private static variable in the class. That’s the way I was able to keep stuff running from the HttpModule. (And yes, I know this is still prone to failure, particularly if the app recycles. For as infrequently as it’s used, I have not, however, experienced this.)
    It was still failing, but this time I wasn’t sure why. It would queue a few dozen messages, then die. Running in Azure, I had to turn on application logging and FTP in to see what was going on. That led me to a helper method I was using as a delegate to build the unsubscribe links. The idea here is that I didn’t want yet another config entry to describe the base URL, appended with the right path that would match the routing table. No, I wanted the app to figure it out for you, so I came up with this little thing:

        public static string FullUrlHelper(this Controller controller, string actionName, string controllerName, object routeValues = null)
        {
            var helper = new UrlHelper(controller.Request.RequestContext);
            var requestUrl = controller.Request.Url;
            if (requestUrl == null)
                return String.Empty;
            var url = requestUrl.Scheme + "://";
            url += requestUrl.Host;
            url += (requestUrl.Port != 80 ? ":" + requestUrl.Port : "");
            url += helper.Action(actionName, controllerName, routeValues);
            return url;
        }

    And yes, that should have been done with a string builder. This is useful for sending out the email verification messages, too. As clever as I thought I was with this, I was using a delegate in the admin controller to format these unsubscribe links for tens of thousands of users. I passed that delegate into a service class that did the email work:

        Func<User, string> unsubscribeLinkGenerator =
            user => this.FullUrlHelper("Unsubscribe", AccountController.Name, new { id = user.UserID, key = _profileService.GetUnsubscribeHash(user) });
        _mailingListService.MailUsers(subject, body, htmlBody, unsubscribeLinkGenerator);

    Cool, right? Actually, not so much. If you look back at the helper, this delegate depends on the controller context to learn the routing and format for the URL. As you might have guessed, those things were turning null after a few dozen formatted links, when the original request to the admin controller went away. That this wasn’t already happening on my dedicated server is surprising, but again, I understand why the Azure environment might be eager to reclaim a thread after servicing the request. It’s already inefficient that I’m building the entire email for every user, but going back to check the routing table for the right link every time isn’t a win either. I put together a little hack to look up one generic URL, and use that as the basis for a string format. If you’re wondering why I didn’t just use the curly braces up front, it’s because they get URL encoded:

        var baseString = this.FullUrlHelper("Unsubscribe", AccountController.Name, new { id = "--id--", key = "--key--" });
        baseString = baseString.Replace("--id--", "{0}").Replace("--key--", "{1}");
        Func<User, string> unsubscribeLinkGenerator =
            user => String.Format(baseString, user.UserID, _profileService.GetUnsubscribeHash(user));
        _mailingListService.MailUsers(subject, body, htmlBody, unsubscribeLinkGenerator);

    And wouldn’t you know it, the new solution works just fine. It’s still kind of hacky and inefficient, but it will work until this somehow breaks too.


  • How do I install Open vSwitch?

    - by Lorin Hochstein
    How do I install Open vSwitch on raring? I can't find any official Ubuntu docs on this anywhere. DevStack seems to do this:

        kernel_version=`cat /proc/version | cut -d " " -f3`
        apt-get install make fakeroot dkms openvswitch-switch openvswitch-datapath-dkms linux-headers-$kernel_version

    On the other hand, this blog does this:

        apt-get install openvswitch-datapath-source openvswitch-common openvswitch-switch
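    Whichever package set is used, a hedged sanity check that the switch actually came up (br0 is a placeholder bridge name, and this assumes the ovs daemons are running):

        sudo ovs-vsctl show          # prints the daemon's view of bridges and ports
        sudo ovs-vsctl add-br br0    # create a test bridge
        sudo ovs-vsctl list-br       # should now list br0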


  • Real-world SignalR example, ditching ghetto long polling

    - by Jeff
    One of the highlights of BUILD last week was the announcement that SignalR, a framework for real-time client-to-server (or cloud, if you will) communication, would be a real, supported thing now, with the weight of Microsoft behind it. Love the open source flava! If you aren’t familiar with SignalR, watch this BUILD session with PM Damian Edwards and dev David Fowler. Go ahead, I’ll wait. You’ll be in a happy place within the first ten minutes. If you skip to the end, you’ll see that they plan to ship this as a real first version by the end of the year. Insert slow clap here.
    Writing a few lines of code to move a box around from one browser to the next is a way cool demo, but how about something real-world? When learning new things, I find it difficult to be abstract, and I like real stuff. So I thought about what was in my tool box and decided to port my crappy long-polling “there are new posts” feature of POP Forums to use SignalR.
    A few versions back, I added a feature where a button would light up while you were pecking out a reply if someone else made a post in the interim. It kind of saves you from that awkward moment where someone else posts some snark before you. While I was proud of the feature, I hated the implementation. When you clicked the reply button, it started polling an MVC URL, asking if the last post you had matched the last one on the server, and it did so every second and a half until you either replied or the server told you there was a new post, at which point it would display that button. The code was not glam:

        // in the reply setup
        PopForums.replyInterval = setInterval("PopForums.pollForNewPosts(" + topicID + ")", 1500);

        // called from the reply setup and the handler that fetches more posts
        PopForums.pollForNewPosts = function (topicID) {
            $.ajax({
                url: PopForums.areaPath + "/Forum/IsLastPostInTopic/" + topicID,
                type: "GET",
                dataType: "text",
                data: "lastPostID=" + PopForums.currentTopicState.lastVisiblePost,
                success: function (result) {
                    var lastPostLoaded = result.toLowerCase() == "true";
                    if (lastPostLoaded) {
                        $("#MorePostsBeforeReplyButton").css("visibility", "hidden");
                    } else {
                        $("#MorePostsBeforeReplyButton").css("visibility", "visible");
                        clearInterval(PopForums.replyInterval);
                    }
                },
                error: function () {
                }
            });
        };

    What’s going on here is the creation of an interval timer to keep calling the server and bugging it about new posts, and setting the visibility of a button appropriately. It looks like this if you’re monitoring requests in FireBug: Gross.
    The SignalR approach was to call a message broker when a reply was made, and have that broker call back to the listening clients, via a SignalR hub, to let them know about the new post. It seemed weird at first, but the server-side hub’s only method is to add the caller to a group, so new-post notifications only go to callers viewing the topic where a new post was made. Beyond that, it’s important to remember that the hub is also the means of calling methods at the client end.
    Starting at the server side, here’s the hub:

        using Microsoft.AspNet.SignalR.Hubs;

        namespace PopForums.Messaging
        {
            public class Topics : Hub
            {
                public void ListenTo(int topicID)
                {
                    Groups.Add(Context.ConnectionId, topicID.ToString());
                }
            }
        }

    Have I mentioned how awesomely not complicated this is? The hub acts as the channel between the server and the client, and you’ll see how JavaScript calls the above method in a moment. Next, the broker class and its associated interface:

        using Microsoft.AspNet.SignalR;
        using Topic = PopForums.Models.Topic;

        namespace PopForums.Messaging
        {
            public interface IBroker
            {
                void NotifyNewPosts(Topic topic, int lastPostID);
            }

            public class Broker : IBroker
            {
                public void NotifyNewPosts(Topic topic, int lastPostID)
                {
                    var context = GlobalHost.ConnectionManager.GetHubContext<Topics>();
                    context.Clients.Group(topic.TopicID.ToString()).notifyNewPosts(lastPostID);
                }
            }
        }

    The NotifyNewPosts method uses the static GlobalHost.ConnectionManager.GetHubContext<Topics>() method to get a reference to the hub, and then makes a call to clients in the group matched by the topic ID. It’s calling the notifyNewPosts method on the client. The TopicService class, which handles the reply data from the MVC controller, has an instance of the broker new’d up by dependency injection, so it took literally one line of code in the reply action method to get things moving:

        _broker.NotifyNewPosts(topic, post.PostID);

    The JavaScript side of things wasn’t much harder. When you click the reply button (or quote button), the reply window opens up and fires up a connection to the hub:

        var hub = $.connection.topics;
        hub.client.notifyNewPosts = function (lastPostID) {
            PopForums.setReplyMorePosts(lastPostID);
        };
        $.connection.hub.start().done(function () {
            hub.server.listenTo(topicID);
        });

    The important part to look at here is the creation of the notifyNewPosts function. That’s the method that is called from the server in the Broker class above. Conversely, once the connection is done, the script calls the listenTo method on the server, letting it know that this particular connection is listening for new posts on this specific topic ID. This whole experiment enables a lot of ideas that would make the forum more Facebook-like, letting you know when stuff is going on around you.


  • GNU/Linux interactive table content GUI editor?

    - by sdaau
    I often find myself needing to gather data (say, from the internet) into a table for comparison purposes. I usually need the final table output in HTML or MediaWiki mostly, but often also LaTeX. My biggest problem is that I often forget the correct table syntax for these markup languages, as well as what needs to be properly escaped in the inline data for the table to render correctly. So I often wish there were a GUI application which provides a tabular framework that I could stick "Always on Top" as a desktop window, paste content into specific cells, and finally export as table code in the correct language. One application that partially allows this is Open/LibreOffice Calc.
    The good things here:
    - I can drag and drop browser content into a specifically targeted table cell (here B2)
    - "Rich" text / HTML code gets pasted
    - For long content, the cell (column) width stays put as it originally was
    The bad things:
    - When the cell height (due to content size) becomes larger than the Calc window, it becomes nearly impossible to scroll the Calc contents up and down (at least with the mouse wheel), as the view gets reset to the top-right corner of the selected cell
    - Calc shows an "endless"/unlimited field of cells, so it is not exactly a "table", which I find visually very confusing (and cognitively taxing)
    - It can only export the table to HTML
    What I would need is an application that:
    - Allows for a limited-size table, but with quick adding of rows and columns (e.g. via corresponding + buttons)
    - Allows for quick setup of row and column height and width (as well as table size)
    - Stays put at those sizes regardless of the size of the content pasted in; if cell content overflows, cell scrollbars are shown (cell content could possibly be re-edited in a separate/new window); if the table overflows the window size, window scrollbars are shown
    - Exports the table in multiple formats (I'd need both HTML and MediaWiki), properly escaping cell content for each (the option to strip HTML tags from pasted cell content, to get plain text, is a plus)
    - Lets me target a specific cell in the table for the paste operation; it doesn't have to be drag-and-drop, a right click over a cell with "Paste content" is enough. I'd also want the ability to click in a specific cell and type plain-text content immediately.
    So, my question is: is there an application out there that already does something like this? The reason I'm asking is that, as the screenshots show, Libre/OpenOffice allows it, but only somewhat (using it for this purpose is tedious). I know there are some GUI editors for Linux (both for UI, like guile, and HTML, like amaya), but I don't know them well enough to pinpoint whether any of them would offer this kind of functionality (and, at least in my searches, that kind of functionality, if present in diverse software, seems not to be advertised). Note that I'm not interested in styling an HTML table, which is why I haven't used "table designer" in the title, but "table editor" (in lack of a better term); I'm interested in (quickly) adjusting the row/column sizes of the table, populating it with pasted data (which is possibly HTML) in a GUI, and finally exporting such a table as self-contained HTML (or other) code.
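    Not the interactive tool asked for, but for the export step described above, a hedged workaround: pandoc can translate a table authored once (e.g. in HTML) into the other target markups, so only one syntax has to be remembered:

        # HTML table -> MediaWiki markup
        pandoc -f html -t mediawiki table.html -o table.wiki
        # HTML table -> LaTeX
        pandoc -f html -t latex table.html -o table.tex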


  • Can't open Software Center or Update Manager

    - by Albert
    When I try to open either one (not from the terminal), the following error message comes up:
    An unresolvable problem occurred while initializing the package information. Please report this bug against the 'update-manager' package and include the following error message: 'E:Malformed line 60 in source list /etc/apt/sources.list (dist parse)'
    Using Ubuntu 12.04, thanks.
    Edit: I went to line 60; this is what I found:
    deb http://archive.canonical.com/precise partner
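    A hedged reading of that line: the suite name appears to have been fused into the URL, so apt sees no distribution field to parse. The stock Canonical partner entry for 12.04 is:

        # /etc/apt/sources.list, line 60
        deb http://archive.canonical.com/ubuntu precise partner

    After correcting the line, sudo apt-get update should run cleanly and the Software Center and Update Manager should open again.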


  • DDD9 - voting now open for the UK's premier community event

    - by Liam Westley
    If you are interested in software development, including a heap of great open source frameworks, then get over to the DDD9 web site and vote for some sessions for the next DDD conference. It will take place at Microsoft's UK headquarters in Reading on 29th January 2011.
    http://developerdeveloperdeveloper.com/ddd9/ProposedSessions.aspx
    I've proposed a session on the new Async CTP announced at PDC, but there's loads more interesting stuff such as Ruby, CQRS and jQuery Mobile, so get your votes in now so it's the content you want to see.


  • Rebuilding CoasterBuzz, Part IV: Dependency injection, it's what's for breakfast

    - by Jeff
    (Repost from my personal blog.) This is another post in a series about rebuilding one of my web sites, which has been around for 12 years. I hope to relaunch soon. More:
    - Part I: Evolution, and death to WCF
    - Part II: Hot data objects
    - Part III: The architecture using the "Web stack of love"
    If anything generally good for the craft has come out of the rise of ASP.NET MVC, it's that people are more likely to use dependency injection and loosely couple the pieces and parts of their applications. A lot of the emphasis on coding this way has been to facilitate unit testing, and that's awesome. Unit testing makes me feel a lot less like a hack, and a lot more confident in what I'm doing. Dependency injection is pretty straightforward. It says, "Given an instance of this class, I need instances of other classes, defined not by their concrete implementations, but by their interfaces." Probably the first place a developer exercises this is in having a class talk to some kind of data repository. For a very simple example, pretend the FooService has to get some Foo. It looks like this:

        public class FooService
        {
            public FooService(IFooRepository fooRepo)
            {
                _fooRepo = fooRepo;
            }

            private readonly IFooRepository _fooRepo;

            public Foo GetMeFoo()
            {
                return _fooRepo.FooFromDatabase();
            }
        }

    When we need the FooService, we ask the dependency container to get it for us. It says, "You'll need an IFooRepository in that, so let me see what that's mapped to, and put it in there for you." Why is this good for you? It's good because your FooService doesn't know or care about how you get some Foo. You can stub out what the methods and properties on a fake IFooRepository might return, and test just the FooService. I don't want to get too far into unit testing, but it's the most commonly cited reason to use DI containers in MVC.
    What I wanted to mention is how there's another benefit in a project like mine, where I have to glue together a bunch of stuff. For example, when someone signs up for a new account on CoasterBuzz, I'm actually using POP Forums' new account mailer, which composes a bunch of text that includes a link to verify your account. The thing is, I want to use custom text and some other logic that's specific to CoasterBuzz. To accomplish this, I make a new class that inherits from the forum's NewAccountMailer, and override some stuff. Easy enough. Then I use Ninject, the DI container I'm using, to unbind the forum's implementation and substitute my own. Ninject uses something called a NinjectModule to bind interfaces to concrete implementations. The forum has its own module, and then the CoasterBuzz module is loaded second. The CB module has two lines of code to swap out the mailer implementation:

        Unbind<PopForums.Email.INewAccountMailer>();
        Bind<PopForums.Email.INewAccountMailer>().To<CbNewAccountMailer>();

    Piece of cake! Now, when code asks the DI container for an INewAccountMailer, it gets my custom implementation instead (a kernel-wiring sketch follows at the end of this post). This is a lot easier to deal with than some of the alternatives. I could do some copy-paste, but then I'm not using well-tested code from the forum. I could write stuff from scratch, but then I'm throwing away a bunch of logic I've already written (in this case, stuff around e-mail, e-mail settings, and mail delivery failures).
    There are other places where the DI container comes in handy. For example, CoasterBuzz does a number of custom things with user profiles, and special content for paid members. It uses the forum as the core piece for managing users, so I can ask the container to get me instances of classes that do user lookups, for example, and have zero care about how the forum handles database calls, configuration, etc. What a great world to live in, compared to ten years ago. Sure, the primary interest in DI is around the "separation of concerns" and facilitating unit testing, but as your library grows and you use more open source, it starts to be the glue that pulls everything together.
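    To make the module-loading order concrete, a minimal Ninject sketch (the module class names here are hypothetical, not from the post):

        // the second module's Unbind/Bind wins for anything it rebinds
        var kernel = new StandardKernel(new PopForumsNinjectModule(), new CoasterBuzzNinjectModule());
        var mailer = kernel.Get<PopForums.Email.INewAccountMailer>(); // resolves to CbNewAccountMailer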


  • RDLC - Adding a Data Source in VS2010

    - by Kezzer
    Greetings. I have an RDLC file and am wanting to add a data source to it, though without any luck so far. The data source is a custom class written by myself (just to add: we do this all the time). We recently converted over to the VS2010 RDLC format, which caused some problems, but we've made some changes to our implementation that work around the more major issues. So, back to the issue at hand: when I attempt to add my data source to the DummyDataSource list in the RDLC view in VS2010, it just does nothing. It does add the data source to the list of data sources, but you can't select it from the drop-down list in the RDLC view, which means I can't add the data source at all. Has anyone come across this problem? Is there anything I need to check? I've searched with fervour and had no luck.
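    If the designer keeps refusing, a hedged fallback is to bind the data at runtime instead; the dataset name must match the one declared in the .rdlc ("DummyDataSource" here, per the question; the item class is a placeholder):

        // WinForms ReportViewer; myItems is a List<MyDto> (hypothetical custom class)
        var rds = new Microsoft.Reporting.WinForms.ReportDataSource("DummyDataSource", myItems);
        reportViewer.LocalReport.DataSources.Clear();
        reportViewer.LocalReport.DataSources.Add(rds);
        reportViewer.RefreshReport();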


  • Problem using classes from another project/source folder in a GWT project

    - by voipsecuritydigest.com
    My project contains 2 source folders: one is a generic J2EE application, the other is smartCleintGWT. I want to use some already-existing DTO classes from the first source folder (src). Note that the class is used on both the client side and the server side of the GWT project! When I do that, I get the error:
    [ERROR] Errors in 'file:/C:/..Projects/Admin/DMX/src_console/com/ho/nod/client/AdminRPC.java'
    [ERROR] Line 7: No source code is available for type com.dmx.synch.server.descriptors.DMXLicense; did you forget to inherit a required module?
    The source is obviously available; is there any way to import all of that into GWT?
    PS: In the future the 2 source folders will be separated into 2 projects... I hope it won't be that complicated then either.
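    A hedged sketch of the usual fix: the GWT compiler only translates classes under a package that some module declares as source, so the DTO package needs its own module descriptor, which the client module then inherits. The module name and file placement below are assumptions, not from the question:

        <!-- com/dmx/synch/DMXDescriptors.gwt.xml, in the J2EE source folder -->
        <module>
            <!-- marks com.dmx.synch.server.descriptors as translatable source -->
            <source path="server/descriptors"/>
        </module>

        <!-- in the GWT project's module (e.g. com/ho/nod/Admin.gwt.xml): -->
        <inherits name="com.dmx.synch.DMXDescriptors"/>

    The DTO .java files must also be on the GWT compiler's classpath, and the classes can only use JRE types that GWT emulates.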

