Search Results

Search found 700 results on 28 pages for 'subscription'.

Page 24 of 28

  • What good technology podcasts are out there?

    - by Michael Stum
    Yes, podcasts: those nice little audiobooks I can listen to on the way to work. With the current number of podcasts, it's like searching for a needle in a haystack, except that the haystack happens to be the Internet and is filled with too much of that "hot new gadgets" stuff :( Now, even though I am mainly a .NET developer nowadays, does anyone know some good podcasts covering the whole software lifecycle? Unit testing, continuous integration, documentation, deployment... So, what are you guys and gals listening to? Please note that the categorizations are somewhat subjective and may not be 100% accurate, as many podcasts cover several areas; each podcast is filed under what is considered its "main" area.

    General Software Engineering / Productivity: Stack Overflow, TekPub (requires paid subscription), SE Radio, 43 Folders, Perspectives, Dr. Dobb's (now a video feed), The Pragmatic Podcast (inactive), IT Matters, Agile Toolkit Podcast, The Stack Trace (inactive), Parleys, Techzing, The Startup Success Podcast, Berkeley CS class lectures, FOSS Weekly

    .NET / Visual Studio / Microsoft: Herding Code, Hanselminutes, .NET Rocks!, Deep Fried Bytes, Alt.Net Podcast, Polymorphic Podcast, Sparkling Client (The Silverlight Podcast), dnrTV!, Spaghetti Code, ASP.NET Podcast, Channel 9, Radio TFS, PowerScripting Podcast, The Thirsty Developer, Elegant Code, ConnectedShow, Crafty Coders, Coding QA

    jQuery: yayQuery, The official jQuery podcast

    Java / Groovy: The Java Posse, Grails Podcast, Java Technology Insider

    Ruby / Rails: Railscasts, Rails Envy, The Ruby on Rails Podcast, Rubiverse

    Web Design / JavaScript / Ajax: WebDevRadio, Boagworld, The Rissington podcast, Ajaxian, YUI Theater

    Unix / Linux / Mac / iPhone: Mac Developer Network, Hacker Public Radio, Linux Outlaws, Mac OS Ken, LugRadio Linux radio show (inactive), The Linux Action Show!, Linux Kernel Mailing List (LKML) Summary Podcast, Stanford's iPhone programming class

    SysAdmin, Security or Infrastructure: RunAs Radio, Security Now!, Crypto-Gram Security Podcast, Hak5, VMware VMTN, Windows Weekly, PaulDotCom Security, The Register - Semi-Coherent Computing, FeatherCast

    General Tech / Business: Tekzilla, This Week in Tech, The Guardian Tech Weekly, PCMag Radio Podcast, Entrepreneurship Corner, Manager Tools

    Other / Misc. / Podcast Networks: IT Conversations, Retrobits Podcast, No Agenda Netcast, Cranky Geeks, The Command Line, Freelance Radio, IBM developerWorks, The Register - Open Season, Drunk and Retired, Technometria, Sod This, Radio4Nerds, Hacker Medley

    Read the article

  • Copy protection and licensing tools.

    - by Skittles
    I'm new to stackoverflow.com after hearing about it from Jon Skeet on DotNetRocks. This seems like the perfect place to ask this question. I am in the middle of trying to find a third-party copy protection and licensing tool. The company that I work with has 4 products that need to be protected. We want to supply a trial license (with extensions), a single-user license, and a floating license (where the client purchases a number of seats to run over a network). We also want to be able to offer both the single-user and floating licenses as subscription licenses. I have trialled DeployLX, and although it seems to offer everything that we need, and they are quick to answer emails, their documentation is truly awful, with NO examples of how to achieve results. Has anyone any experience with DeployLX, and if so, would you recommend it? Could you point me in the direction of some real help on it? Finally, would anyone have any recommendations for a third-party licensing tool suited to very quick development? Thank you so much,

    Read the article

  • Replication: SQL Server 2008 Publisher with SQL Server Express 2005 Subscriber

    - by Jeremy
    Here is the setup: a SQL Server 2008 Enterprise server with a merge publication, and a SQL Server 2005 Express subscriber with a pull subscription. There is no web or FTP setup; this is direct merge replication. Using the RMO objects from C#, I get a "class cannot be found" COM error when accessing the MergePullSubscription.SynchronizationAgent property. I've tried with both the 2008 RMO DLLs (version 10) and the 2005 RMO DLLs (version 9). When trying to use replmerge.exe, I get the following (timestamps are UTC):

        2010-04-10 04:12:05.263 Microsoft SQL Server Merge Agent 9.00.1399.06
        2010-04-10 04:12:05.294 Copyright (c) 2000 Microsoft Corporation
        2010-04-10 04:12:05.294 User-specified agent parameter values:
            -Publisher SUN -PublisherDB PRIMROSE -PublisherSecurityMode 1
            -Publication PRIMROSE -Distributor SUN -DistributorSecurityMode 1
            -Subscriber PVILLE\SQLEXPRESS -SubscriberSecurityMode 1
            -SubscriberDB PRIMROSE -SubscriptionType 1
            -DistributorLogin sa -DistributorPassword ********** -DistributorSecurityMode 0
            -PublisherLogin sa -PublisherPassword ********** -PublisherSecurityMode 0
            -SubscriberLogin sa -SubscriberPassword ********** -SubscriberSecurityMode 0
        2010-04-10 04:12:05.325 Connecting to Subscriber 'PVILLE\SQLEXPRESS'
        2010-04-10 04:12:05.481 Connecting to Distributor 'SUN'
        2010-04-10 04:12:05.513 The version of SQL Server running at the Distributor (10.0.2531) is not compatible with the version of SQL Server running at the Subscriber (9.00.1399).
        2010-04-10 04:12:05.513 Category:NULL
        Source: Merge Process
        Number: -2147200979
        Message: The version of SQL Server running at the Distributor (10.0.2531) is not compatible with the version of SQL Server running at the Subscriber (9.00.1399).

    Any ideas?
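    For reference, a minimal sketch of the RMO call path in question, assuming the standard Microsoft.SqlServer.Replication and Microsoft.SqlServer.Management.Common types (server, database, and publication names taken from the agent output above):

        using Microsoft.SqlServer.Management.Common;
        using Microsoft.SqlServer.Replication;

        // Connect to the subscriber and load the pull subscription's metadata.
        ServerConnection conn = new ServerConnection(@"PVILLE\SQLEXPRESS");
        conn.Connect();

        MergePullSubscription sub = new MergePullSubscription();
        sub.ConnectionContext = conn;
        sub.DatabaseName      = "PRIMROSE";   // subscriber database
        sub.PublisherName     = "SUN";
        sub.PublicationDBName = "PRIMROSE";
        sub.PublicationName   = "PRIMROSE";

        if (sub.LoadProperties())
        {
            // The COM error described above is raised here: this property
            // instantiates the merge agent COM object, so the agent version
            // registered on the machine must match the RMO assemblies in use.
            MergeSynchronizationAgent agent = sub.SynchronizationAgent;
            agent.Synchronize();
        }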

    Read the article

  • Why don't RSpec's methods "get", "post", "put", "delete" work in a controller spec in a gem (or outside Rails)?

    - by ramon.tayag
    I'm not new to Rails or RSpec, but I'm new to making gems. When I test my controllers, the REST methods "get", "post", "put", and "delete" give me an undefined method error. Below you'll find the code. Thanks!

    Here's my spec_helper:

        $LOAD_PATH.unshift(File.dirname(__FILE__))
        $LOAD_PATH.unshift(File.join(File.dirname(__FILE__), '..', 'lib'))
        require 'rubygems'
        require 'active_support' unless defined? ActiveSupport # Need this so that mattr_accessor will work in Subscriber module
        require 'active_record/acts/subscribable'
        require 'active_record/acts/subscriber'
        require 'action_view'
        require 'action_controller' # Since we'll be testing subscriptions controller
        #require 'action_controller/test_process'
        require 'spec'
        require 'spec/autorun'

        # Need active_support to use mattr_accessor in Subscriber module, and to set the following inflection
        ActiveSupport::Inflector.inflections do |inflect|
          inflect.irregular 'dorkus', 'dorkuses'
        end

        require 'active_record' # Since we'll be testing a User model which will be available in the app

        # Tell ActiveRecord to load the subscribable files
        ActiveRecord::Base.send(:include, ActiveRecord::Acts::Subscribable)
        ActiveRecord::Base.send(:include, ActiveRecord::Acts::Subscriber)

        require 'app/models/user' # The user model we expect in the application
        require 'app/models/person'
        require 'app/models/subscription'
        require 'app/models/dorkus'
        require 'app/controllers/subscriptions_controller' # The controller we're testing
        # ... more, but I think irrelevant

    My subscriptions_controller_spec:

        require File.expand_path(File.dirname(__FILE__) + '/../spec_helper')

        describe SubscriptionsController, "on GET index" do
          load_schema

          describe ", when only subscribable params are passed" do
            it "should list all the subscriptions of the subscribable object"
          end

          describe ", when only subscriber params are passed" do
            it "should list all the subscriptions of the subscriber" do
              u = User.create
              d1 = Dorkus.create
              d2 = Dorkus.create
              d1.subscribe! u
              d2.subscribe! u
              get :index, {:subscriber_type => "User", :subscriber_id => u.id}
              assigns[:subscriptions].should == u.subscriptions
            end
          end
        end

    My subscriptions controller:

        class SubscriptionsController

    The error:

        NoMethodError in 'SubscriptionsController on GET index, when only subscriber params are passed should list all the subscriptions of the subscriber'
        undefined method `get' for #
        /home/ramon/rails/acts_as_subscribable/spec/controllers/subscriptions_controller_spec.rb:21:
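    For what it's worth, a hedged sketch of the usual fix under rspec-rails 1.x: the REST helpers come from Spec::Rails' controller example group, not from core RSpec, so a spec running outside a Rails app has to pull that in and declare its type explicitly:

        # spec_helper.rb additions -- a sketch assuming rspec-rails 1.x is
        # installed and enough of Rails is loaded; plain RSpec does not define
        # get/post/put/delete, they come from
        # Spec::Rails::Example::ControllerExampleGroup.
        require 'spec'
        require 'spec/rails'

        # In the spec file, declare the example group type explicitly, since
        # the file does not live under a Rails app's spec/controllers directory:
        describe SubscriptionsController, :type => :controller do
          it "responds to GET index" do
            get :index, {:subscriber_type => "User", :subscriber_id => 1}
          end
        end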

    Read the article

  • Parsed XML file: skip creation if blank?

    - by GoodGets
    This could be a HappyMapper-specific question, but I don't think so. In my app, users can upload their blog subscriptions (via an OPML file), which I parse and add to their profile. The only problem is during the parsing, or more specifically the creation of each subscription: I can't figure out how to skip over entries that are just "labels". Since OPML files allow you to label your blogs, or organize them into folders, this is my problem. The actual blog subscriptions and their labels both have "outline" tags:

        <outline text="Rails">
          <outline title="Katz Got Your Tongue?" text="Katz Got Your Tongue?"
                   htmlUrl="http://yehudakatz.com" type="rss"
                   xmlUrl="http://feeds.feedburner.com/KatzGotYourTongue" />

    After parsing, I create each feed via a method call inside the HappyMapper module:

        def create_feed
          Feed.new(
            :feed_htmlUrl => self.htmlUrl,
            :feed_title => self.title,
            ...

    But how do I prevent it from creating new "feeds" for those outline tags that are just labels? (i.e. those that don't have an htmlUrl?)
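    A guard clause along these lines would do it; this is a sketch assuming the folder/label outlines are exactly the ones with no htmlUrl attribute (the outlines collection name is illustrative):

        def create_feed
          # Label/folder outlines have no htmlUrl, so bail out and let the
          # caller compact the nils away.
          return nil if self.htmlUrl.nil? || self.htmlUrl.empty?
          Feed.new(
            :feed_htmlUrl => self.htmlUrl,
            :feed_title   => self.title
          )
        end

        feeds = outlines.map { |o| o.create_feed }.compact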

    Read the article

  • Mercurial .hgrc notify hook

    - by Eeyore
    Could someone tell me what is incorrect in my .hgrc configuration? I am trying to use Gmail to send an e-mail after each push and/or commit.

    .hgrc:

        [paths]
        default = ssh://www.domain.com/repo/hg

        [ui]
        username = intern <[email protected]>
        ssh = "C:\Program Files (x86)\Mercurial\plink.exe" -ssh -i "C:\Program Files (x86)\Mercurial\key.pub"

        [extensions]
        hgext.notify =

        [hooks]
        changegroup.notify = python:hgext.notify.hook
        incoming.notify = python:hgext.notify.hook

        [email]
        from = [email protected]

        [smtp]
        host = smtp.gmail.com
        username = [email protected]
        password = sure
        port = 587
        tls = true

        [web]
        baseurl = http://dev/...

        [notify]
        sources = serve push pull bundle
        test = False
        config = /path/to/subscription/file
        template = \ndetails: {baseurl}{webroot}/rev/{node|short}\nchangeset: {rev}:{node|short}\nuser: {author}\ndate: {date|date}\ndescription:\n{desc}\n
        maxdiff = 300

    Error (the same failure repeats on the retry):

        Incoming command failed for P/project.
        running ""C:\Program Files (x86)\Mercurial\plink.exe" -ssh -i "C:\Program Files (x86)\Mercurial\key.pub" [email protected] "hg -R repo/hg serve --stdio""
        sending hello command
        sending between command
        remote: FATAL ERROR: Server unexpectedly closed network connection
        abort: no suitable response from remote hg!, error code: -1
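    As an aside, if the external subscription file turns out to be the suspect part, hgext.notify also accepts subscriptions inline; a sketch (the pattern and recipient are placeholders):

        [reposubs]
        # pattern = comma-separated recipients; * matches every repository
        * = [email protected]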

    Read the article

  • Test Ruby-on-Rails controller with RSpec and different route name

    - by jhwist
    I have a Rails model named Xpmodule with a corresponding controller XpmoduleController:

        class XpmoduleController < ApplicationController
          def index
            @xpmodule = Xpmodule.find(params[:module_id])
          end

          def subscribe
            flash[:notice] = "You are now subscribed to #{params[:subscription][:title]}"
            redirect_to :action => :index
          end
        end

    The original intent was to name the model Module, which for obvious reasons doesn't work. However, I still want the URLs to look like /module/4711/, therefore I added this to my routes.rb:

        map.connect '/module/:module_id', :controller => 'xpmodule', :action => 'index'
        map.connect '/module/:module_id/subscribe', :controller => 'xpmodule', :action => 'subscribe'

    Now I want to test this controller with RSpec:

        describe XpmoduleController do
          fixtures :xpmodules

          context "index" do
            it "should assign the current xpmodule" do
              xpm = mock_model(Xpmodule)
              Xpmodule.should_receive(:find).and_return(xpm)
              get "index"
              assigns[:xpmodule].should be_an_instance_of(Xpmodule)
            end
          end
        end

    for which I get "No route matches {:action=>"index", :controller=>"xpmodule"}". Which of course is sort-of right, but I don't want to add this route just for testing purposes. Is there a way to tell RSpec to call a different URL in get?
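    One hedged guess at the fix: the route above only matches when :module_id is present, so supplying it in the call lets Rails generate the existing route instead of needing a new one:

        # Sketch: pass the required segment as a param; the value is
        # arbitrary here since Xpmodule.find is stubbed out above.
        get "index", :module_id => "4711"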

    Read the article

  • What off-the-shelf licensing system will meet my needs?

    - by Anders Pedersen
    I'm looking for an off-the-shelf license system for desktop software. After some research on the net -- and of course here on StackOverflow -- I haven't found one that suits our needs. I have a couple of must-have features and some would-be-nice features.

    Must have:

      • Encrypted unlock key
      • Possibility to automate the unlock key generation on my website
      • User info in the key, so that I can show name and company in an about box and perhaps in reports

    Nice to have:

      • License managing tools
      • Online activation
      • Nice upgrade possibilities to a version with a concurrent license model and a subscription model

    I have looked at Manco, but I find them difficult to work with and the documentation is minimal. Further, I couldn't get the name in the key. Also, the automatic generation of a key on my website has to be done with an application web service, but I would rather program against a DLL. Next I looked at Xheo. It is easier to use and the documentation is better, but the price is substantially higher, and here you can only get the user name in the license file, which you then have to provide together with the unlock key. Could anyone share their experiences on what you are using and how it is working for you?

    Read the article

  • E-commerce platform for custom application integration

    - by Zach Smith
    We are building an online subscription-based website, and I'm looking for recommendations on which e-commerce platform to use for the checkout process. Requirements include:

      • Only four products.
      • The sign-up process of the site is heavily customized, and after checkout the user should automatically get logged into the subscriber area.
      • Subscriptions will last one year and can be renewed manually.
      • Support for coupons/discount codes at a later point.

    Since the entire application is custom, we've weighed building a custom checkout, but are strongly leaning towards using existing software to avoid having to build lots of admin reporting as well as a coupon engine down the road. The two questions we're pondering are: Is it better to a) build our application custom and use whatever e-commerce software we select just for the payment piece, or b) use the e-commerce software as the basis and build our application around it/as a module/etc.? And which e-commerce platform should we use? I've looked into a variety of off-the-shelf e-commerce software, but it's not clear to me which would be easiest to integrate with. I've researched on the Web and looked at many of the threads on SO to compile a list of potential candidates:

      • www.magentocommerce.com/ (seems difficult to integrate with)
      • www.prestashop.com/
      • www.nopcommerce.com/
      • www.opencart.com/
      • www.cubecart.com/
      • www.spreecommerce.com/
      • www.interspire.com/
      • www.tradingeye.com/

    We're most concerned with the level of effort required to ramp up on the software and then do the integration with our custom functionality. We're most proficient with PHP, ASP.NET and some RoR, and are only considering those technologies. We prefer open source, but would be open to commercial if there's a significant upside. Any experiences with similar projects and advice are greatly appreciated.

    Read the article

  • Setting article properties for a publication using RMO in C# .NET

    - by Pavan Kumar
    I am using transactional replication with a push subscription. I am developing a UI for replication using RMO in C#.NET, between different instances of the same database within the same machine, holding similar schema and structure. I am using a single-subscriber, multiple-publisher topology. During creation of the publication I want to set a few article properties: "keep the existing object unchanged", "allow schema changes at subscriber" set to false, and "copy foreign key constraints" and "copy check constraints" set to true. How do I set these article properties using RMO in C#.NET? I am using Visual Studio 2008 SP1. I also want to know how to select all the objects (tables, views, stored procedures) for publishing at one stretch. I could do it for one table, but I want to select all the tables at once. This is the code snippet I used for selecting a single table for publishing:

        TransArticle ta = new TransArticle();
        ta.Name = "Article_1";
        ta.PublicationName = "TransReplication_DB2";
        ta.DatabaseName = "DB2";
        ta.SourceObjectName = "person";
        ta.SourceObjectOwner = "dbo";
        ta.ConnectionContext = conn;
        ta.Create();
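    A hedged sketch of one way to do both things, assuming the SMO assemblies are referenced alongside RMO (the CreationScriptOptions flags mirror sp_addarticle's @schema_option bits; verify the exact flag names against the RMO documentation):

        using Microsoft.SqlServer.Management.Smo;
        using Microsoft.SqlServer.Replication;

        // Enumerate every user table via SMO and publish each one as an article.
        Server server = new Server(conn);
        Database db = server.Databases["DB2"];
        foreach (Table t in db.Tables)
        {
            if (t.IsSystemObject) continue;

            TransArticle ta = new TransArticle();
            ta.ConnectionContext = conn;
            ta.Name = t.Name;
            ta.DatabaseName = "DB2";
            ta.PublicationName = "TransReplication_DB2";
            ta.SourceObjectName = t.Name;
            ta.SourceObjectOwner = t.Schema;

            // Script the object itself plus FK and check constraints to the subscriber.
            ta.SchemaOption = CreationScriptOptions.PrimaryObject
                            | CreationScriptOptions.DriForeignKeys
                            | CreationScriptOptions.DriChecks;

            // "Keep the existing object unchanged" at the subscriber:
            ta.PreCreationMethod = PreCreationOption.None;

            ta.Create();
        }

    Views and stored procedures would presumably get the same treatment via db.Views and db.StoredProcedures, with the article type set accordingly.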

    Read the article

  • Running Magento for multiple clients - single Installaton vs. multiple installations

    - by Chris Hopkins
    Hi there. I am looking to set up a Magento (Community Edition) installation for multiple clients, and have researched the matter for a few days now. I can see that the Enterprise Edition has what I need in it, but unsurprisingly I am not willing to shell out the $12,000-odd yearly subscription. It seems there are a few options available to me, but I am worried about the performance I will get out of each.

    Option 1) Single install using the AITOC advanced permissions module. This is really what I am after: one installation, so that I can update my core files all at the same time and also manage all my store users from one place. The problems here are that I don't know anything about the reliability of this extra product, and that I have to pay a bit extra. I am also worried that if I have 10 stores running off this one installation it might all slow down so much that it keels over, as I have heard a lot about Magento's slowness. Module link: http://www.aitoc.com/en/magentomods_advanced_permissions.html

    Option 2) Multiple installations of Magento on one server, one for each shop. Here I have 10 Magento installations on one server, all running happily away, not costing any extra money, but I now have 10 separate stores to update and maintain, which could be annoying. Also, I haven't been able to find a whole lot of other people using this method, and when I have, they are usually asking how to stop their servers from dying. So this route seems like it could be even worse on my server, as I will have more going on; but if my server could take it, each Magento installation would be simpler and less likely to slow down than one installation running 10 shops on its own?

    Option 3) Use lots of servers and lots of Magento installations. I just so do not want to do this.

    Option 4) Buy Magento Enterprise. I do not have the money to do this.

    So which route is less likely to blow up my server? And does anyone have experience with this holy grail of a module? Thanks for reading, and thanks in advance for any help. - Chris Hopkins

    Read the article

  • Optimized Publish/Subscribe JMS Broker Cluster, and Conflicting Posts on StackOverflow for the Answer

    - by Gene
    Hi, I am looking to build a publish/subscribe distributed messaging framework that can manage huge volumes of message traffic with some intelligence at the broker level. I don't know if there's a topology that describes this, but this is the model I'm going after.

    Example Model A:

    A) There are two running message brokers (ideally all on localhost if possible, for easier demo-ing): Broker-A and Broker-B.

    B) Each broker will have 2 listeners and 1 publisher:

        [subscriber A1, subscriber A2, publisher A1] <-- BrokerA <-- BrokerB <-- [publisher B1, subscriber B1, subscriber B2]

    IF a message X is published to Broker-A and there are no subscribers for it among the listeners on Broker-B (via criteria in message selectors or broker routing rules), then that message X will never be published to Broker-B. ELSE, Broker-A will publish the message to Broker-B, where one of the Broker-B listeners/subscribers/services is expecting that message based on the subscription criteria.

    Is clustering the correct approach? At first, I concluded that the "broker clustering" concept is what I needed to support this. However, as I have come to understand it, the typical use of clustering entails either message redundancy across all brokers, or the Competing Consumers pattern, and neither of these satisfies the requirement in Example Model A.

    What is the correct approach? My question is: does anyone know of a JMS implementation that supports the model I described? I scanned through all the Stack Overflow post titles for the search "JMS and Cluster". I found these two informative, but seemingly conflicting, posts:

    Says Example Model A is (or should be) implicitly supported: http://stackoverflow.com/questions/2255816/jms-consumer-with-activemq-network-of-brokers -- "this means you pick a broker, connect to it, and let the broker network sort it out amongst themselves. In theory."

    Says Example Model A is NOT supported: http://stackoverflow.com/questions/2017520/how-does-a-jms-topic-subscriber-in-a-clustered-application-server-recieve-message -- "All the instances of PropertiesSubscriber running on different app servers WILL get that message."

    Any suggestions would be greatly appreciated. Thanks very much for reading my post, Gene
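    For what it's worth, what Example Model A describes sounds closer to ActiveMQ's demand-forwarding "network of brokers" than to a cluster: a broker only forwards a topic message across the network link when the remote broker has an active matching subscription. A hedged configuration sketch for Broker-A (the port and attribute choices are illustrative):

        <!-- activemq.xml on Broker-A: forward messages to Broker-B only when
             Broker-B has a subscription that demands them -->
        <networkConnectors>
          <networkConnector uri="static:(tcp://localhost:61617)"
                            duplex="true"
                            dynamicOnly="true"
                            conduitSubscriptions="true"/>
        </networkConnectors>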

    Read the article

  • How would you describe the Observer pattern in beginner language?

    - by Sheldon
    Currently, my level of understanding is below all the coding examples on the web about the Observer pattern. I understand it simply as being almost a subscription that updates all other events when a change is made that the delegate registers. However, I'm very shaky in my true comprehension of its benefits and uses. I've done some googling, but most results are above my level of understanding. I'm trying to implement this pattern in my current homework assignment, and to make my project truly make sense I need a better understanding of the pattern itself, and perhaps an example to see what its use is. I don't want to force this pattern into something just to submit it; I need to understand its purpose and develop my methods accordingly so that it actually serves a good purpose. My text doesn't really go into it, just mentions it in one sentence. MSDN was hard for me to understand, as I'm a beginner on this, and it seems more of an advanced topic. How would you describe the Observer pattern and its uses in C# to a beginner? For an example, please keep the code very simple so I can understand the purpose more than complex code snippets. I'm trying to use it effectively with some simple textbox string manipulations and delegates for my assignment, so a pointer would help!
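    Since the question explicitly invites one, here is a minimal sketch in C# using events (the publisher/reader names are made up for illustration): the subject keeps a list of interested parties and calls them back when something happens, so the subject never needs to know who is listening.

        using System;

        class NewsPublisher                      // the "subject" being observed
        {
            // Observers attach themselves to this event to subscribe.
            public event Action<string> NewsPublished;

            public void Publish(string headline)
            {
                Action<string> handlers = NewsPublished;
                if (handlers != null)
                    handlers(headline);          // notify every subscriber in turn
            }
        }

        class Program
        {
            static void Main()
            {
                NewsPublisher publisher = new NewsPublisher();

                // Two observers subscribe; neither knows the other exists.
                publisher.NewsPublished += h => Console.WriteLine("Reader A saw: " + h);
                publisher.NewsPublished += h => Console.WriteLine("Reader B saw: " + h);

                publisher.Publish("Observer pattern explained!");
            }
        }

    In a WinForms assignment, a TextBox's TextChanged event is exactly this pattern: the textbox is the subject, and every handler you attach is an observer.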

    Read the article

  • Why does ActiveMQ hold messages that should be deleted from Topic?

    - by rauch
    I use ActiveMQ as a notification system (pub/sub model). On the server, if any changes to the data occur, the server sends the updated data (a file) to a topic using BlobMessages. A few clients subscribe to this topic and get the updated file if it exists in the topic. The problem is that all of the BlobMessages that were sent to the topic are held by ActiveMQ indefinitely.

        this.producer = new ProducerTool.Builder("tcp://localhost:61616?jms.blobTransferPolicy.uploadUrl=http://localhost:8161/fileserver/", "ServerProdTopic")
            .topic(true).transacted(false).durable(false).timeToLive(10000L).build();
        this.consumer = new ConsumerTool.Builder("tcp://localhost:61616", "ServerConsTopic")
            .topic(true).transacted(false).durable(false).build();
        consumer.setMessageListener(this);

    The file is sent like this:

        connection = createConnection();
        session = createSession(connection);
        producer = createProducer(session);
        BlobMessage blobMsg = ((ActiveMQSession) session).createBlobMessage(resource);
        blobMsg.setStringProperty("sourceName", resource.getName());
        producer.send(blobMsg);
        if (transacted) {
            System.out.println("Producer Committing...");
            session.commit();
        }

    where createConnection is:

        protected Connection createConnection() throws JMSException, Exception {
            ActiveMQConnectionFactory connectionFactory = new ActiveMQConnectionFactory(url);
            //connectionFactory.getBlobTransferPolicy().setUploadUrl("http://localhost:8161/fileserver/");
            Connection connection = connectionFactory.createConnection();
            connection.start();
            ((ActiveMQConnection) connection).setCopyMessageOnSend(false);
            return connection;
        }

    Everything that could be useful is set as needed: Session.AUTO_ACKNOWLEDGE, non-durable subscription, TimeToLive = 9000, JMSDeliveryMode = Non-Persistent. What I see at runtime: in the ActiveMQ directory ~/apache-activemq-5.3.0/webapps/fileserver/ there are all the files that were delivered and not delivered to subscribers. Why? Sometimes the server sends big files, about 1 GB, and even these files are held in that directory, even after stopping the subscribers (clients), the publisher (server), and the ActiveMQ broker.
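    One hedged observation: the broker only expires the message envelope; the uploaded blob on the fileserver is left for the producer or consumer to clean up. ActiveMQBlobMessage exposes deleteFile() for this, so the consumer can remove the upload once it has fetched it. A sketch:

        import java.io.InputStream;
        import javax.jms.Message;
        import org.apache.activemq.command.ActiveMQBlobMessage;

        public void onMessage(Message message) {
            try {
                ActiveMQBlobMessage blobMsg = (ActiveMQBlobMessage) message;
                InputStream in = blobMsg.getInputStream(); // fetch the payload
                // ... write the stream to local disk ...
                in.close();
                blobMsg.deleteFile(); // remove the blob from the fileserver upload URL
            } catch (Exception e) {
                e.printStackTrace();
            }
        }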

    Read the article

  • How to rename an existing Grails application

    - by Johan Pelgrim
    Hi there. Does anybody know how to (easily) "rename" an existing Grails application? I'm running into this because my PaaS provider does not allow me to delete a subscription, so I want to deploy my application under a different name. Of course, I can do this manually, but I think it might be a useful top-level script (i.e. "grails rename-app newappname").

    Manual hints: when I do a "grails create-app myappname" I can see that myappname exists in the following files (and filenames). Of course this is done by the create-app script, which replaces @...@ tokens in the template. I guess once they are replaced, it's not trivial to do a rename.

        ./.project: <name>myappname</name>
        ./application.properties: app.name=myappname
        ./build.xml: <project xmlns:ivy="antlib:org.apache.ivy.ant" name="myappname" default="test">
        ./ivy.xml: <info organisation="org.example" module="myappname"/>
        ./myappname-test.launch: <stringAttribute key="org.eclipse.jdt.launching.PROJECT_ATTR" value="myappname"/>
        ./myappname.launch: <listEntry value="/myappname"/>
        ./myappname.launch: <listEntry value="<?xml version="1.0" encoding="UTF-8"?> <runtimeClasspathEntry containerPath="org.eclipse.jdt.launching.JRE_CONTAINER" javaProject="myappname" path="1" type="4"/> "/>
        ./myappname.launch: <stringAttribute key="org.eclipse.jdt.launching.PROJECT_ATTR" value="myappname"/>
        ./myappname.launch: <stringAttribute key="org.eclipse.jdt.launching.VM_ARGUMENTS" value="-Dbase.dir="${project_loc:myappname}" -Dserver.port=8080 -Dgrails.env=development"/>
        ./myappname.tmproj: <string>myappname.launch</string>

    And of course, the top-level directory name is "myappname". Any hints, or information about ongoing initiatives in this area, are welcome. Greetz, Johan
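    Until such a script exists, a hedged brute-force sketch of the manual route (run from the parent directory; it blindly substitutes the name everywhere the file list above shows it, so review the result afterwards):

        # rename the directory and the files carrying the app name...
        mv myappname newappname
        cd newappname
        mv myappname.launch newappname.launch
        mv myappname-test.launch newappname-test.launch
        mv myappname.tmproj newappname.tmproj
        # ...then substitute the name inside every file that mentions it
        grep -rl 'myappname' . | xargs sed -i 's/myappname/newappname/g'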

    Read the article

  • Best Practice - Removing item from generic collection in C#

    - by Matt Davis
    I'm using C# in Visual Studio 2008 with .NET 3.5. I have a generic dictionary that maps types of events to a generic list of subscribers. A subscriber can be subscribed to more than one event.

        private static Dictionary<EventType, List<ISubscriber>> _subscriptions;

    To remove a subscriber from the subscription list, I can use either of these two options.

    Option 1:

        ISubscriber subscriber; // defined elsewhere
        foreach (EventType eventType in _subscriptions.Keys)
        {
            if (_subscriptions[eventType].Contains(subscriber))
            {
                _subscriptions[eventType].Remove(subscriber);
            }
        }

    Option 2:

        ISubscriber subscriber; // defined elsewhere
        foreach (EventType eventType in _subscriptions.Keys)
        {
            _subscriptions[eventType].Remove(subscriber);
        }

    I have two questions. First, notice that Option 1 checks for existence before removing the item, while Option 2 uses brute-force removal, since Remove() does not throw an exception. Of these two, which is the preferred, "best-practice" way to do this? Second, is there another, "cleaner", more elegant way to do this, perhaps with a lambda expression or a LINQ extension? I'm still getting acclimated to these two features. Thanks.

    EDIT: Just to clarify, I realize that the choice between Options 1 and 2 is a choice of speed (Option 2) versus maintainability (Option 1). In this particular case, I'm not necessarily trying to optimize the code, although that is certainly a worthy consideration. What I'm trying to understand is whether there is a generally well-established practice for doing this. If not, which option would you use in your own code?
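    On the second question, a sketch of the lambda/LINQ variants. Note that List<T>.Remove() simply returns false when the item is absent, which is why the Contains() pre-check in Option 1 only adds an extra scan:

        // Iterate the value lists directly; no need to go through the keys.
        foreach (List<ISubscriber> subscribers in _subscriptions.Values)
        {
            subscribers.Remove(subscriber);
        }

        // Or as a one-liner with LINQ plus List<T>.ForEach
        // (requires using System.Linq):
        _subscriptions.Values.ToList().ForEach(list => list.Remove(subscriber));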

    Read the article

  • jQuery ajax call returning error when it's not an error

    - by azz0r
    Hello, my JS is:

        $(InitFavorite);

        function InitFavorite() {
            var jList = $(".favourite_link");
            var ids_to_check = {}; // new Array();
            $.each(jList, function () {
                var id = this.id;
                var object = id.split("_");
                if (!ids_to_check[object[1]]) {
                    ids_to_check[object[1]] = [];
                }
                ids_to_check[object[1]].push(object[0]);
            });
            //console.log(ids_to_check);
            $.ajax({
                type: 'POST',
                url: '/user/subscription/favourite-listing',
                data: ids_to_check,
                dataType: 'json',
                beforeSend: function(x) {
                    if (x && x.overrideMimeType) {
                        x.overrideMimeType("application/j-son;charset=UTF-8");
                    }
                },
                error: function() {
                    alert(1);
                },
                success: function() {
                    alert(2);
                    /*$each(returned_values, function() {
                        alert('boom');
                    });*/
                }
            });
        }

    The ajax call returns the following data:

        {"env":"development","loggedIn":true,"translate":{}}{"Playlist":{"10":"Stop Recieving Updates For This Playlist"},"Clip":{"26":"Recieve Updates For This Clip","27":"Recieve Updates For This Clip","28":"Recieve Updates For This Clip","29":"Stop Recieving Updates For This Clip","30":"Recieve Updates For This Clip"}}

    However, success is never triggered, just error, despite there being no error status and JSON being output as the response (via Zend Framework). Ideas?
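    A hedged observation: the response shown is two JSON documents back to back, which is not valid JSON as a whole, so with dataType: 'json' the parse fails and jQuery routes the call to error() even though the HTTP status is fine. Expanding the error callback makes this visible:

        error: function(xhr, textStatus, errorThrown) {
            // textStatus is "parsererror" when dataType: 'json' fails to parse
            console.log(textStatus, errorThrown);
            console.log(xhr.responseText); // shows the concatenated documents
        },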

    Read the article

  • rake task via cron problem loading rubygems

    - by Matenia Rossides
    I have managed to get a cron job to run a rake task by doing the following:

        cd /home/myusername/approotlocation/ && /usr/bin/rake sendnewsletter RAILS_ENV=development

    I have checked with "which ruby" and "which rake" to make sure the paths are correct (from bash). The job looks like it wants to run, as I get the following email from the cron daemon when it completes:

        Missing these required gems:
          chronic
          whenever
          searchlogic
          adzap-ar_mailer
          twitter
          gdata
          bitly
          ruby-recaptcha

        You're running:
          ruby 1.8.7.22 at /usr/bin/ruby
          rubygems 1.3.5 at /home/myusername/gems, /usr/lib/ruby/gems/1.8

        Run `rake gems:install` to install the missing gems.
        (in /home/myusername/approotlocation)

    My custom rake file within lib/tasks is as follows:

        task :sendnewsletter => :environment do
          require 'rubygems'
          require 'chronic'
          require 'whenever'
          require 'searchlogic'
          require 'adzap-ar_mailer'
          require 'twitter'
          require 'gdata'
          require 'bitly'
          require 'ruby-recaptcha'
          @recipients = Subscription.all(:conditions => {:active => true})
          for user in @recipients
            Email.send_later(:deliver_send_newsletter, user)
          end
        end

    With or without the require lines, it still gives me the same error. Can anyone shed some light on this? Alternatively, can you advise me on how to make a custom file within the script directory that will run this function? (I already have a cron job working that runs and processes all my delayed_jobs.) Cheers!
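    A hedged guess at the cause: cron starts jobs with a minimal environment, so rubygems may not see the same gem path your login shell does. One sketch of a fix is to set it in the crontab itself (the paths come from the error output above; the schedule field is a placeholder):

        GEM_HOME=/home/myusername/gems
        GEM_PATH=/home/myusername/gems:/usr/lib/ruby/gems/1.8
        0 6 * * * cd /home/myusername/approotlocation/ && /usr/bin/rake sendnewsletter RAILS_ENV=development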

    Read the article

  • Copy SQL Server data from one server to another on a schedule

    - by rwmnau
    I have a pair of SQL Servers at different webhosts, and I'm looking for a way to periodically update one server using the other. Here's what I'm looking for:

      • As automated as possible: ideally, without any involvement on my part once it's set up.
      • Pushes a number of databases, in their entirety (including any schema changes), from one server to the other.
      • Freely allows changes on the source server without breaking my process. For this reason, I don't want to use replication, as I'd have to break it every time there's an update on the source, and then recreate the publication and subscription.
      • One database is about 4 GB in size and contains binary data. I'm not sure if there's a way to export this to a script, but it would be a mammoth file if I did.

    Originally, I was thinking of writing something that takes a scheduled full backup of each database, FTPs the backups from one server to the other once they're done, and then the new server picks them up and restores them. The only downside I can see to this is that there's no way to know that the backups are done before starting to transfer them. Can these backups be done synchronously? Also, the server being refreshed is our test server, so if there's some downtime involved in moving the data, that's fine. Does anybody out there have a better idea, or is what I'm currently considering the best non-replication way to go? Thanks for your help, everybody.

    UPDATE: I ended up designing a custom solution to get this done using BAT files, 7-Zip, command-line FTP, and OSQL, so it runs in a completely automatic way and aggregates the data from a dozen servers across the country. I've detailed the steps in a blog entry. Thanks for all your input!
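    On the synchronous question: a T-SQL BACKUP DATABASE statement blocks until the backup is complete, so a script that backs up and then kicks off the transfer never sees a half-written file. A hedged sketch (the database name and path are placeholders):

        -- Runs synchronously; control returns only after the .bak is fully written.
        BACKUP DATABASE MyDb
          TO DISK = 'D:\Backups\MyDb.bak'
          WITH INIT;

    Invoked from a BAT file via OSQL (e.g. osql -S myserver -E -Q "BACKUP DATABASE ..."), this slots straight into the pipeline described in the update.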

    Read the article

  • Limit iPhone in-app purchase by user's country

    - by Ryan
    Hello everyone. I'm a product manager who works for a small internet company that is developing an iPhone application for a social network. We monetize by offering limited and premium memberships to users (premium members get additional features not available to limited members). For billing on the web, we use a third-party payment gateway that is nearing retirement and will be replaced by an in-house solution. The business wants a global launch for our iPhone app using iTunes + in-app purchasing as the payment gateway. The problem with going global using this payment method is that for our web service, membership level, available features, and subscription costs are defined by country. For example, in the US premium/limited memberships are available at 5 pricing tiers; in France premium/limited memberships are available at 5 pricing tiers different from the US; and in Chile the service is available for free, with all features open to users. Is it possible, then, to have the server side, based on the user's country of registration, control the level of access, features, and payment options for users on the iPhone? I'd also note that since iTunes Connect does not allow variable pricing by currency and country, each "region" would need 5 in-app purchase options. I argued for a US-only launch for iPhone using iTunes in-app purchase until an in-house payment gateway is available. But you know...

    Read the article

  • Java Client .class File Protection

    - by Zac
    I am in the requirements phase of building a JEE application that will most likely run on a GlassFish/JBoss backend (it doesn't matter which for now). I know I shouldn't be thinking about architecture at requirements time, but one can't help but start to imagine how the components would all snap together :-) Here are some hard, non-flexible requirements on the client side:

      (1) The client application will be a Swing box
      (2) The client is free to download, but will use a subscription model (thus requiring a login mechanism with server-side authentication/authorization, etc.)
      (3) Yes, Java is the best platform solution for the problem at hand, for reasons outside the scope of this post
      (4) The client-side .class files need safeguarding against decompiling

    That last (4th) requirement is the basis of this post. I'm not really worried about someone actually decompiling and getting at my source code: in the end, it's just Swing controls driven by some lightweight business logic. I'm worried about a scenario where someone decompiles my code, modifies it to exploit/attack the server, re-compiles, and fires it up. I've envisioned all sorts of nasty solutions, but didn't know if this was a common problem with a common solution for JEE developers. Any thoughts? Not interested in "code obfuscation" techniques! Thanks for any input!

    Read the article

  • Using nohup mysqldump from php script is inserting a '!' and breaking to a new line.

    - by Aglystas
    I'm trying to run a mysqldump from PHP using the nohup command to prevent the script from hanging. Here's the command (the database is mc6_erik_test; everything else is just a table list until you get to the end):

        exec("mysqldump -u root -pPassword -h vfmy1-dev.mountainmedia.com mc6_erik_test access_log admin affiliate affiliate_2_product authorized_ip category category_2_product claim_code claim_code_log country_exclude customer customer_2_subscription customer_account_log customer_address customer_bill customer_discount customer_ip customer_key email_bulk_log email_draft email_queue email_queue_log email_template endicia_log gift_wrap image_bulk_upload log mailing_list manufacturer merchant merchant_checkout merchant_ip merchant_ship merchant_ship_conf new_account_temp order_dest order_item order_item_2_dest order_item_2_package order_item_log order_item_registrant order_note order_package order_package_label orders package package_2_product pref product product_2_supplier product_also product_event_date product_image product_option product_related product_review product_review_helpful product_ship_disable report search_log subscription supplier temp_product transaction_account transactions wish_list wish_list_fill wish_list_item --opt --where='merchant_id=\'6\'' > /tmp/sync_db_card_20100519105358.sql");

    As you can see it's really long, because I have to specifically include only the tables I want to dump. The command works great from the command line. However, when I run it through a web script, towards the end the following is being used as the command:

        supplier temp_product transaction_account transactio!
        ns wish_list wish_list_fill wish_list_item --opt --where='merchant_id="6"' > /tmp/sync_db_card_20100519105358.sql

    So the table 'transactions' is being split by an exclamation point and a newline. The rest of the command is exactly the same. If I run this through the php-cli interface it doesn't happen; only when I run it via the webserver using nohup. I'm wondering if there is some inherent string-length limit to using the exec command within a PHP script, or if anyone has any general idea what is going on here.
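    One hedged workaround while debugging, with no claim about the root cause: write the assembled command to a small shell script and exec that, so no single very long string has to survive the shell invocation intact. In this sketch, $tables is an assumed array of the table names listed above, and the output path is shortened:

        <?php
        $cmd = "mysqldump -u root -pPassword -h vfmy1-dev.mountainmedia.com "
             . "mc6_erik_test " . implode(' ', $tables)
             . " --opt --where='merchant_id=\"6\"'"
             . " > /tmp/sync_db_card.sql";
        // Run via a script file instead of passing the long string to exec() directly.
        file_put_contents('/tmp/run_dump.sh', "#!/bin/sh\n$cmd\n");
        chmod('/tmp/run_dump.sh', 0755);
        exec('/tmp/run_dump.sh');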

    Read the article

  • Submit button not focused even though tabindex is properly set

    - by Nicsoft
    Hello, I have defined tabindex values for the input fields in a form. When tabbing through the input fields, the submit button never gets the focus; instead, some other input fields in a different form on the page get it. Those all have tabindexes higher than 3. How come?

        <form action="subscription.php" name="subscribe" method="post" onsubmit="return isValidEmailAndEqual()">
          <p id="formlabel">E-mail</p>
          <input type="text" name="email1" tabindex=1>
          <br/>
          <p id="formlabel">Repeat e-mail</p>
          <input type="text" name="email2" tabindex=2>
          <br/>
          <input id="inputsubmit" type="submit" value="Subscribe" tabindex=3>
        </form>

    The CSS:

        input {
          background-color: #333;
          border: 1px solid #EEE;
          color: #EEE;
          margin-bottom: 6px;
          margin-top: 4px;
          padding: 1px;
          width: 200px;
        }

        #inputsubmit {
          background-color: #d7e6f1;
          border: 1px solid #EEE;
          color: #0000ff;
          margin-bottom: 6px;
          margin-top: 4px;
          padding: 1px;
          width: 200px;
        }

        #inputsubmit:hover {
          cursor: pointer;
          cursor: hand;
          background-color: #d7e6f1;
          border: 1px solid #0000ff;
          color: #0000ff;
          margin-bottom: 6px;
          margin-top: 4px;
          padding: 1px;
          width: 200px;
        }

        p#formlabel {
          width: 100;
        }

    Thanks in advance!

    Read the article

  • Subscribe through API .net C#

    - by Younes
    I have to submit subscription data to another website. I have documentation on how to use this API, but I'm not 100% sure how to set it up. I do have all the information needed, like username/password etc. This is the API documentation: https://www.apiemail.net/api/documentation/?SID=4. How would my request/post/whatever look in C# .NET (VS 2008) when I'm trying to access this API? This is what I have now; I think I'm not on the right track:

        public static string GArequestResponseHelper(string url, string token, string username, string password)
        {
            HttpWebRequest myRequest = (HttpWebRequest)WebRequest.Create(url);
            myRequest.Headers.Add("Username: " + username);
            myRequest.Headers.Add("Password: " + password);

            HttpWebResponse myResponse = (HttpWebResponse)myRequest.GetResponse();
            Stream responseBody = myResponse.GetResponseStream();
            Encoding encode = System.Text.Encoding.GetEncoding("utf-8");
            StreamReader readStream = new StreamReader(responseBody, encode);

            // return string itself (easier to work with)
            return readStream.ReadToEnd();
        }

    Hope someone knows how to set this up properly. Thx!
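    For comparison, a hedged sketch of a form-POST version, since many such APIs expect URL-encoded fields rather than custom headers. The field names here are guesses, so check them against the documentation linked above:

        // assumes: using System.Net; using System.IO; using System.Text;
        // using System.Web;  (reference System.Web for HttpUtility)
        public static string PostSubscription(string url, string username, string password, string email)
        {
            byte[] body = Encoding.UTF8.GetBytes(
                "Username=" + HttpUtility.UrlEncode(username) +
                "&Password=" + HttpUtility.UrlEncode(password) +
                "&Email=" + HttpUtility.UrlEncode(email));   // field names are guesses

            HttpWebRequest req = (HttpWebRequest)WebRequest.Create(url);
            req.Method = "POST";
            req.ContentType = "application/x-www-form-urlencoded";
            req.ContentLength = body.Length;
            using (Stream s = req.GetRequestStream())
            {
                s.Write(body, 0, body.Length);
            }

            using (HttpWebResponse resp = (HttpWebResponse)req.GetResponse())
            using (StreamReader reader = new StreamReader(resp.GetResponseStream(), Encoding.UTF8))
            {
                return reader.ReadToEnd();
            }
        }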

    Read the article

  • Speeding up inner-joins and subqueries while restricting row size and table membership

    - by hiffy
    I'm developing an RSS feed reader that uses a Bayesian filter to filter out boring blog posts. The Stream table is meant to act as a FIFO buffer from which the webapp will consume 'entries'. I use it to store the temporary relationship between entries, users, and Bayesian filter classifications. After a user marks an entry as read, it will be added to the metadata table (so that a user isn't presented with material they have already read), and deleted from the stream table. Every three minutes, a background process will repopulate the Stream table with new entries (i.e. whenever the daemon adds new entries after it checks the RSS feeds for updates).

    Problem: the query I came up with is hella slow. More importantly, the Stream table only needs to hold one hundred unread entries at a time; that would reduce duplication, make processing faster, and give me some flexibility in how I display the entries.

    The query (takes about 9 seconds on 3600 items with no indexes):

        insert into stream(entry_id, user_id)
        select entries.id, subscriptions_users.user_id
        from entries
        inner join subscriptions_users on subscriptions_users.subscription_id = entries.subscription_id
        where subscriptions_users.user_id = 1
          and entries.id not in (select entry_id from metadata where metadata.user_id = 1)
          and entries.id not in (select entry_id from stream where user_id = 1);

    The query explained: insert into stream all of the entries from a user's subscription list (subscriptions_users) that the user has not read (i.e. that do not exist in metadata) and that do not already exist in the stream.

    Attempted solution: adding "limit 100" to the end speeds up the query considerably, but upon repeated executions it keeps adding a different set of 100 entries that do not already exist in the table (with each successful query taking longer and longer). This is close, but not quite what I wanted to do. Does anyone have any advice (NoSQL?) or know a more efficient way of composing the query?
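    For discussion, a hedged rewrite under the usual assumptions: the NOT IN subqueries become LEFT JOIN ... IS NULL anti-joins, which MySQL of this era typically optimizes much better, and composite indexes on (user_id, entry_id) for stream and metadata carry the lookups:

        INSERT INTO stream (entry_id, user_id)
        SELECT e.id, su.user_id
        FROM entries e
        INNER JOIN subscriptions_users su ON su.subscription_id = e.subscription_id
        LEFT JOIN metadata m ON m.user_id = su.user_id AND m.entry_id = e.id
        LEFT JOIN stream s   ON s.user_id = su.user_id AND s.entry_id = e.id
        WHERE su.user_id = 1
          AND m.entry_id IS NULL   -- not already read
          AND s.entry_id IS NULL   -- not already queued
        LIMIT 100;

    Capping the table at one hundred rows total is a separate concern; the three-minute daemon could check the current count first and insert only the difference.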

    Read the article
