Search Results

Search found 6460 results on 259 pages for 'spam filter'.


  • What algorithms are suitable for this simple machine learning problem?

    - by user213060
    I have what I think is a simple machine learning question. Here is the basic problem: I am repeatedly given a new object and a list of descriptions about the object. For example: new_object: 'bob' new_object_descriptions: ['tall','old','funny']. I then have to use some kind of machine learning to find previously handled objects that had similar descriptions, for example, past_similar_objects: ['frank','steve','joe']. Next, I have an algorithm that can directly measure whether these objects are indeed similar to bob, for example, correct_objects: ['steve','joe']. The classifier is then given this feedback of successful matches as training. Then this loop repeats with a new object. Here's the pseudo-code:

        Classifier = new_classifier()
        while True:
            new_object, new_object_descriptions = get_new_object_and_descriptions()
            past_similar_objects = Classifier.classify(new_object, new_object_descriptions)
            correct_objects = calc_successful_matches(new_object, past_similar_objects)
            Classifier.train_successful_matches(new_object, correct_objects)

    But there are some stipulations that may limit what classifier can be used: There will be millions of objects put into this classifier, so classification and training need to scale well to millions of object types and still be fast. I believe this disqualifies something like a spam classifier that is optimal for just two types: spam or not spam. (Update: I could probably narrow this to thousands of objects instead of millions, if that is a problem.) Again, when millions of objects are being classified, I prefer speed over accuracy. What are decent, fast machine learning algorithms for this purpose?
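
    For illustration only, here is a minimal Python sketch of one way such a loop can stay fast at this scale: an inverted index from description to object names, with candidates scored by overlapping descriptions and reweighted by the feedback step. The class, method names and weight values are hypothetical, and the interface is simplified from the pseudo-code above.

        from collections import defaultdict

        class OverlapMatcher:
            """Toy sketch: inverted index from description -> object names.

            classify() scores previously seen objects by how many descriptions
            they share with the new one (plus feedback weights), so each query
            only touches objects that share at least one description.
            """

            def __init__(self, top_k=10):
                self.index = defaultdict(set)       # description -> object names
                self.weights = defaultdict(float)   # (description, object) -> feedback weight
                self.top_k = top_k

            def classify(self, name, descriptions):
                scores = defaultdict(float)
                for d in descriptions:
                    for obj in self.index[d]:
                        scores[obj] += 1.0 + self.weights[(d, obj)]
                for d in descriptions:              # remember the new object for later queries
                    self.index[d].add(name)
                return sorted(scores, key=scores.get, reverse=True)[:self.top_k]

            def train_successful_matches(self, descriptions, correct_objects):
                for d in descriptions:              # reward pairs confirmed by the feedback step
                    for obj in correct_objects:
                        self.weights[(d, obj)] += 0.1

        # Example: OverlapMatcher().classify('bob', ['tall', 'old', 'funny'])
        # returns previously indexed objects ranked by shared descriptions.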

    Read the article

  • Extracting email addresses in an html block in ruby/rails

    - by corroded
    I am creating a parser that wards off spamming and harvesting of emails from a block of text that comes from tinyMCE (so it may or may not have HTML tags in it). I've tried regexes and so far this has been successful: /\b[A-Z0-9._%+-]+@[A-Z0-9.-]+\.[A-Z]{2,4}\b/i The problem is, I need to ignore all email addresses inside mailto hrefs. For example: <a href="mailto:[email protected]">[email protected]</a> should only match the second email address. For background on what I'm doing: I'm reversing the email addresses in a block, so the above example would look like this: <a href="mailto:[email protected]">moc.liam@tset</a> The problem with my current regex is that it also replaces the one in the href. Is there a way for me to do this with a single regex, or do I have to check for one and then the other? Can I do this just by using gsub, or do I have to use some Nokogiri/Hpricot magic to parse the mailtos? Thanks in advance! Here were my references, btw: so.com/questions/504860/extract-email-addresses-from-a-block-of-text so.com/questions/1376149/regexp-for-extracting-a-mailto-address I'm also testing using this: http://rubular.com/ Edit: here's my current helper code:

        def email_obfuscator(text)
          text.gsub(/\b[A-Z0-9._%+-]+@[A-Z0-9.-]+\.[A-Z]{2,4}\b/i) { |m|
            "<span class='anti-spam'>#{m.reverse}</span>"
          }
        end

    which results in this: <a target="_self" href="mailto:<span class='anti-spam'>moc.liamg@tset</span>"><span class="anti-spam">moc.liamg@tset</span></a>
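
    For illustration, here is a small sketch of the single-regex idea using a negative lookbehind, so matches that immediately follow "mailto:" are skipped. It is written in Python purely to demonstrate the pattern (the address and markup are made up); whether the equivalent (?<!mailto:) construct is available in your Ruby version depends on its regex engine.

        import re

        EMAIL = r'\b[A-Z0-9._%+-]+@[A-Z0-9.-]+\.[A-Z]{2,4}\b'
        # (?<!mailto:) is a negative lookbehind: skip addresses directly after "mailto:".
        OBFUSCATE = re.compile(r'(?<!mailto:)' + EMAIL, re.IGNORECASE)

        def email_obfuscator(html):
            return OBFUSCATE.sub(
                lambda m: "<span class='anti-spam'>%s</span>" % m.group(0)[::-1], html)

        print(email_obfuscator('<a href="mailto:user@example.com">user@example.com</a>'))
        # only the visible address is reversed; the one inside the href is untouched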

    Read the article

  • PEAR mail not sending to .eu email addresses

    - by andy-score
    I have a PEAR mailing script that is used to send newsletters from a client's website. I've used the same code before to produce another newsletter system and it has worked well and been used to send emails to various addresses; however, our latest client has email addresses ending in .eu and this seems to cause a problem. When the newsletter is sent from the site to the various subscribers, including Gmail, Hotmail, Yahoo and our own company emails, the emails are received correctly by all but the client's email addresses, the ones ending in .eu. As there is nothing different between their mailing system and our own, which is run from the same hosting company, I have to conclude that it is something to do with the domain name. The emails are being sent to the addresses from the system, as I have a log file storing the email addresses when the mail-out function is called, but the newsletter never appears in the inbox. I have created a new email account for the domain and that too isn't receiving the emails. It's not going into a spam folder, as the webmail system marks spam by adding SPAM to the subject. I've tried to log whether there are any errors using the following:

        foreach($subscribers as $recipient) {
            $send_newsletter = $mail->send($recipient, $headers, $body);
            // LOG INFO
            $message = $recipient;
            if($send_newsletter) {
                $message .= ' SENT';
            } elseif(PEAR::isError($send_newsletter)) {
                $message .= ' ERROR: '.$send_newsletter->getMessage();
            }
            $message .= ' | ';
            fwrite($log_file,$message);
        }

    However, this simply returns SENT for all recipients, so in theory there isn't anything wrong with the mailing function. I don't know a great deal about PEAR or the mailing function, so I may be missing something important, but I'd have thought that since the last thing to happen is sending the email out, and that seems to work, it should reach the client's inbox. Is this something to do with the PEAR mailing function not liking .eu addresses, or is it more likely to be something wrong in my code or with their domain? Any help is greatly appreciated, as the client and I are both getting confused and frustrated by the whole thing. Cheers

    Read the article

  • When to choose which machine learning classifier?

    - by LM
    Suppose I'm working on some classification problem. (Fraud detection and comment spam are two problems I'm working on right now, but I'm curious about any classification task in general.) How do I know which classifier I should use? (Decision tree, SVM, Bayesian, logistic regression, etc.) In which cases is one of them the "natural" first choice, and what are the principles for choosing that one? Examples of the type of answers I'm looking for (from Manning et al.'s book "Introduction to Information Retrieval": http://nlp.stanford.edu/IR-book/html/htmledition/choosing-what-kind-of-classifier-to-use-1.html): a. If your data is labeled, but you only have a limited amount, you should use a classifier with high bias (for example, Naive Bayes). [I'm guessing this is because a higher-bias classifier will have lower variance, which is good because of the small amount of data.] b. If you have a ton of data, then the classifier doesn't really matter so much, so you should probably just choose a classifier with good scalability. What are other guidelines? Even answers like "if you'll have to explain your model to some upper management person, then maybe you should use a decision tree, since the decision rules are fairly transparent" are good. I care less about implementation/library issues, though. Also, for a somewhat separate question, besides standard Bayesian classifiers, are there 'standard state-of-the-art' methods for comment spam detection (as opposed to email spam)? [Not sure if stackoverflow is the best place to ask this question, since it's more machine learning than actual programming -- if not, any suggestions for where else?]
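
    As a purely illustrative complement to guidelines like these, one pragmatic habit is to try a couple of cheap baselines and let cross-validation decide. The sketch below uses scikit-learn on a tiny made-up comment-spam dataset; the data, features and parameters are all hypothetical.

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score
        from sklearn.naive_bayes import MultinomialNB
        from sklearn.pipeline import make_pipeline

        # Hypothetical labelled comments: 1 = spam, 0 = legitimate.
        texts = ["buy cheap meds now", "great article, thanks", "win money fast click here",
                 "I disagree with point two", "free ringtones visit my site", "nice write-up"]
        labels = [1, 0, 1, 0, 1, 0]

        for clf in (MultinomialNB(), LogisticRegression(max_iter=1000)):
            model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), clf)
            scores = cross_val_score(model, texts, labels, cv=3, scoring="accuracy")
            print(type(clf).__name__, scores.mean())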

    Read the article

  • Windows Azure: Import/Export Hard Drives, VM ACLs, Web Sockets, Remote Debugging, Continuous Delivery, New Relic, Billing Alerts and More

    - by ScottGu
    Two weeks ago we released a giant set of improvements to Windows Azure, as well as a significant update of the Windows Azure SDK. This morning we released another massive set of enhancements to Windows Azure.  Today’s new capabilities include: Storage: Import/Export Hard Disk Drives to your Storage Accounts HDInsight: General Availability of our Hadoop Service in the cloud Virtual Machines: New VM Gallery, ACL support for VIPs Web Sites: WebSocket and Remote Debugging Support Notification Hubs: Segmented customer push notification support with tag expressions TFS & GIT: Continuous Delivery Support for Web Sites + Cloud Services Developer Analytics: New Relic support for Web Sites + Mobile Services Service Bus: Support for partitioned queues and topics Billing: New Billing Alert Service that sends emails notifications when your bill hits a threshold you define All of these improvements are now available to use immediately (note that some features are still in preview).  Below are more details about them. Storage: Import/Export Hard Disk Drives to Windows Azure I am excited to announce the preview of our new Windows Azure Import/Export Service! The Windows Azure Import/Export Service enables you to move large amounts of on-premises data into and out of your Windows Azure Storage accounts. It does this by enabling you to securely ship hard disk drives directly to our Windows Azure data centers. Once we receive the drives we’ll automatically transfer the data to or from your Windows Azure Storage account.  This enables you to import or export massive amounts of data more quickly and cost effectively (and not be constrained by available network bandwidth). Encrypted Transport Our Import/Export service provides built-in support for BitLocker disk encryption – which enables you to securely encrypt data on the hard drives before you send it, and not have to worry about it being compromised even if the disk is lost/stolen in transit (since the content on the transported hard drives is completely encrypted and you are the only one who has the key to it).  The drive preparation tool we are shipping today makes setting up bitlocker encryption on these hard drives easy. How to Import/Export your first Hard Drive of Data You can read our Getting Started Guide to learn more about how to begin using the import/export service.  You can create import and export jobs via the Windows Azure Management Portal as well as programmatically using our Server Management APIs. It is really easy to create a new import or export job using the Windows Azure Management Portal.  Simply navigate to a Windows Azure storage account, and then click the new Import/Export tab now available within it (note: if you don’t have this tab make sure to sign-up for the Import/Export preview): Then click the “Create Import Job” or “Create Export Job” commands at the bottom of it.  This will launch a wizard that easily walks you through the steps required: For more comprehensive information about Import/Export, refer to Windows Azure Storage team blog.  You can also send questions and comments to the [email protected] email address. We think you’ll find this new service makes it much easier to move data into and out of Windows Azure, and it will dramatically cut down the network bandwidth required when working on large data migration projects.  We hope you like it. HDInsight: 100% Compatible Hadoop Service in the Cloud Last week we announced the general availability release of Windows Azure HDInsight. 
HDInsight is a 100% compatible Hadoop service that allows you to easily provision and manage Hadoop clusters for big data processing in Windows Azure.  This release is now live in production, backed by an enterprise SLA, supported 24x7 by Microsoft Support, and is ready to use for production scenarios. HDInsight allows you to use Apache Hadoop tools, such as Pig and Hive, to process large amounts of data in Windows Azure Blob Storage. Because data is stored in Windows Azure Blob Storage, you can choose to dynamically create Hadoop clusters only when you need them, and then shut them down when they are no longer required (since you pay only for the time the Hadoop cluster instances are running this provides a super cost effective way to use them).  You can create Hadoop clusters using either the Windows Azure Management Portal (see below) or using our PowerShell and Cross Platform Command line tools: The import/export hard drive support that came out today is a perfect companion service to use with HDInsight – the combination allows you to easily ingest, process and optionally export a limitless amount of data.  We’ve also integrated HDInsight with our Business Intelligence tools, so users can leverage familiar tools like Excel in order to analyze the output of jobs.  You can find out more about how to get started with HDInsight here. Virtual Machines: VM Gallery Enhancements Today’s update of Windows Azure brings with it a new Virtual Machine gallery that you can use to create new VMs in the cloud.  You can launch the gallery by doing New->Compute->Virtual Machine->From Gallery within the Windows Azure Management Portal: The new Virtual Machine Gallery includes some nice enhancements that make it even easier to use: Search: You can now easily search and filter images using the search box in the top-right of the dialog.  For example, simply type “SQL” and we’ll filter to show those images in the gallery that contain that substring. Category Tree-view: Each month we add more built-in VM images to the gallery.  You can continue to browse these using the “All” view within the VM Gallery – or now quickly filter them using the category tree-view on the left-hand side of the dialog.  For example, by selecting “Oracle” in the tree-view you can now quickly filter to see the official Oracle supplied images. MSDN and Supported checkboxes: With today’s update we are also introducing filters that makes it easy to filter out types of images that you may not be interested in. The first checkbox is MSDN: using this filter you can exclude any image that is not part of the Windows Azure benefits for MSDN subscribers (which have highly discounted pricing - you can learn more about the MSDN pricing here). The second checkbox is Supported: this filter will exclude any image that contains prerelease software, so you can feel confident that the software you choose to deploy is fully supported by Windows Azure and our partners. Sort options: We sort gallery images by what we think customers are most interested in, but sometimes you might want to sort using different views. So we’re providing some additional sort options, like “Newest,” to customize the image list for what suits you best. Pricing information: We now provide additional pricing information about images and options on how to cost effectively run them directly within the VM Gallery. The above improvements make it even easier to use the VM Gallery and quickly create launch and run Virtual Machines in the cloud. 
Virtual Machines: ACL Support for VIPs A few months ago we exposed the ability to configure Access Control Lists (ACLs) for Virtual Machines using Windows PowerShell cmdlets and our Service Management API. With today’s release, you can now configure VM ACLs using the Windows Azure Management Portal as well. You can now do this by clicking the new Manage ACL command in the Endpoints tab of a virtual machine instance: This will enable you to configure an ordered list of permit and deny rules to scope the traffic that can access your VM’s network endpoints. For example, if you were on a virtual network, you could limit RDP access to a Windows Azure virtual machine to only a few computers attached to your enterprise. Or if you weren’t on a virtual network you could alternatively limit traffic from public IPs that can access your workloads: Here is the default behaviors for ACLs in Windows Azure: By default (i.e. no rules specified), all traffic is permitted. When using only Permit rules, all other traffic is denied. When using only Deny rules, all other traffic is permitted. When there is a combination of Permit and Deny rules, all other traffic is denied. Lastly, remember that configuring endpoints does not automatically configure them within the VM if it also has firewall rules enabled at the OS level.  So if you create an endpoint using the Windows Azure Management Portal, Windows PowerShell, or REST API, be sure to also configure your guest VM firewall appropriately as well. Web Sites: Web Sockets Support With today’s release you can now use Web Sockets with Windows Azure Web Sites.  This feature enables you to easily integrate real-time communication scenarios within your web based applications, and is available at no extra charge (it even works with the free tier).  Higher level programming libraries like SignalR and socket.io are also now supported with it. You can enable Web Sockets support on a web site by navigating to the Configure tab of a Web Site, and by toggling Web Sockets support to “on”: Once Web Sockets is enabled you can start to integrate some really cool scenarios into your web applications.  Check out the new SignalR documentation hub on www.asp.net to learn more about some of the awesome scenarios you can do with it. Web Sites: Remote Debugging Support The Windows Azure SDK 2.2 we released two weeks ago introduced remote debugging support for Windows Azure Cloud Services. With today’s Windows Azure release we are extending this remote debugging support to also work with Windows Azure Web Sites. With live, remote debugging support inside of Visual Studio, you are able to have more visibility than ever before into how your code is operating live in Windows Azure. It is now super easy to attach the debugger and quickly see what is going on with your application in the cloud. Remote Debugging of a Windows Azure Web Site using VS 2013 Enabling the remote debugging of a Windows Azure Web Site using VS 2013 is really easy.  Start by opening up your web application’s project within Visual Studio. Then navigate to the “Server Explorer” tab within Visual Studio, and click on the deployed web-site you want to debug that is running within Windows Azure using the Windows Azure->Web Sites node in the Server Explorer.  Then right-click and choose the “Attach Debugger” option on it: When you do this Visual Studio will remotely attach the debugger to the Web Site running within Windows Azure.  
The debugger will then stop the web site’s execution when it hits any break points that you have set within your web application’s project inside Visual Studio.  For example, below I set a breakpoint on the “ViewBag.Message” assignment statement within the HomeController of the standard ASP.NET MVC project template.  When I hit refresh on the “About” page of the web site within the browser, the breakpoint was triggered and I am now able to debug the app remotely using Visual Studio: Note above how we can debug variables (including autos/watchlist/etc), as well as use the Immediate and Command Windows. In the debug session above I used the Immediate Window to explore some of the request object state, as well as to dynamically change the ViewBag.Message property.  When we click the the “Continue” button (or press F5) the app will continue execution and the Web Site will render the content back to the browser.  This makes it super easy to debug web apps remotely. Tips for Better Debugging To get the best experience while debugging, we recommend publishing your site using the Debug configuration within Visual Studio’s Web Publish dialog. This will ensure that debug symbol information is uploaded to the Web Site which will enable a richer debug experience within Visual Studio.  You can find this option on the Web Publish dialog on the Settings tab: When you ultimately deploy/run the application in production we recommend using the “Release” configuration setting – the release configuration is memory optimized and will provide the best production performance.  To learn more about diagnosing and debugging Windows Azure Web Sites read our new Troubleshooting Windows Azure Web Sites in Visual Studio guide. Notification Hubs: Segmented Push Notification support with tag expressions In August we announced the General Availability of Windows Azure Notification Hubs - a powerful Mobile Push Notifications service that makes it easy to send high volume push notifications with low latency from any mobile app back-end.  Notification hubs can be used with any mobile app back-end (including ones built using our Mobile Services capability) and can also be used with back-ends that run in the cloud as well as on-premises. Beginning with the initial release, Notification Hubs allowed developers to send personalized push notifications to both individual users as well as groups of users by interest, by associating their devices with tags representing the logical target of the notification. For example, by registering all devices of customers interested in a favorite MLB team with a corresponding tag, it is possible to broadcast one message to millions of Boston Red Sox fans and another message to millions of St. Louis Cardinals fans with a single API call respectively. New support for using tag expressions to enable advanced customer segmentation With today’s release we are adding support for even more advanced customer targeting.  You can now identify customers that you want to send push notifications to by defining rich tag expressions. With tag expressions, you can now not only broadcast notifications to Boston Red Sox fans, but take that segmenting a step farther and reach more granular segments. This opens up a variety of scenarios, for example: Offers based on multiple preferences—e.g. send a game day vegetarian special to users tagged as both a Boston Red Sox fan AND a vegetarian Push content to multiple segments in a single message—e.g. 
rain delay information only to users who are tagged as either a Boston Red Sox fan OR a St. Louis Cardinal fan Avoid presenting subsets of a segment with irrelevant content—e.g. season ticket availability reminder to users who are tagged as a Boston Red Sox fan but NOT also a season ticket holder To illustrate with code, consider a restaurant chain app that sends an offer related to a Red Sox vs Cardinals game for users in Boston. Devices can be tagged by your app with location tags (e.g. “Loc:Boston”) and interest tags (e.g. “Follows:RedSox”, “Follows:Cardinals”), and then a notification can be sent by your back-end to “(Follows:RedSox || Follows:Cardinals) && Loc:Boston” in order to deliver an offer to all devices in Boston that follow either the RedSox or the Cardinals. This can be done directly in your server backend send logic using the code below: var notification = new WindowsNotification(messagePayload); hub.SendNotificationAsync(notification, "(Follows:RedSox || Follows:Cardinals) && Loc:Boston"); In your expressions you can use all Boolean operators: AND (&&), OR (||), and NOT (!).  Some other cool use cases for tag expressions that are now supported include: Social: To “all my group except me” - group:id && !user:id Events: Touchdown event is sent to everybody following either team or any of the players involved in the action: Followteam:A || Followteam:B || followplayer:1 || followplayer:2 … Hours: Send notifications at specific times. E.g. Tag devices with time zone and when it is 12pm in Seattle send to: GMT8 && follows:thaifood Versions and platforms: Send a reminder to people still using your first version for Android - version:1.0 && platform:Android For help on getting started with Notification Hubs, visit the Notification Hub documentation center.  Then download the latest NuGet package (or use the Notification Hubs REST APIs directly) to start sending push notifications using tag expressions.  They are really powerful and enable a bunch of great new scenarios. TFS & GIT: Continuous Delivery Support for Web Sites + Cloud Services With today’s Windows Azure release we are making it really easy to enable continuous delivery support with Windows Azure and Team Foundation Services.  Team Foundation Services is a cloud based offering from Microsoft that provides integrated source control (with both TFS and Git support), build server, test execution, collaboration tools, and agile planning support.  It makes it really easy to setup a team project (complete with automated builds and test runners) in the cloud, and it has really rich integration with Visual Studio. With today’s Windows Azure release it is now really easy to enable continuous delivery support with both TFS and Git based repositories hosted using Team Foundation Services.  This enables a workflow where when code is checked in, built successfully on an automated build server, and all tests pass on it – I can automatically have the app deployed on Windows Azure with zero manual intervention or work required. The below screen-shots demonstrate how to quickly setup a continuous delivery workflow to Windows Azure with a Git-based ASP.NET MVC project hosted using Team Foundation Services. Enabling Continuous Delivery to Windows Azure with Team Foundation Services The project I’m going to enable continuous delivery with is a simple ASP.NET MVC project whose source code I’m hosting using Team Foundation Services.  
I did this by creating a “SimpleContinuousDeploymentTest” repository there using Git – and then used the new built-in Git tooling support within Visual Studio 2013 to push the source code to it.  Below is a screen-shot of the Git repository hosted within Team Foundation Services: I can access the repository within Visual Studio 2013 and easily make commits with it (as well as branch, merge and do other tasks).  Using VS 2013 I can also setup automated builds to take place in the cloud using Team Foundation Services every time someone checks in code to the repository: The cool thing about this is that I don’t have to buy or rent my own build server – Team Foundation Services automatically maintains its own build server farm and can automatically queue up a build for me (for free) every time someone checks in code using the above settings.  This build server (and automated testing) support now works with both TFS and Git based source control repositories. Connecting a Team Foundation Services project to Windows Azure Once I have a source repository hosted in Team Foundation Services with Automated Builds and Testing set up, I can then go even further and set it up so that it will be automatically deployed to Windows Azure when a source code commit is made to the repository (assuming the Build + Tests pass).  Enabling this is now really easy.  To set this up with a Windows Azure Web Site simply use the New->Compute->Web Site->Custom Create command inside the Windows Azure Management Portal.  This will create a dialog like below.  I gave the web site a name and then made sure the “Publish from source control” checkbox was selected: When we click next we’ll be prompted for the location of the source repository.  We’ll select “Team Foundation Services”: Once we do this we’ll be prompted for our Team Foundation Services account that our source repository is hosted under (in this case my TFS account is “scottguthrie”): When we click the “Authorize Now” button we’ll be prompted to give Windows Azure permissions to connect to the Team Foundation Services account.  Once we do this we’ll be prompted to pick the source repository we want to connect to.  Starting with today’s Windows Azure release you can now connect to both TFS and Git based source repositories.  This new support allows me to connect to the “SimpleContinuousDeploymentTest” respository we created earlier: Clicking the finish button will then create the Web Site with the continuous delivery hooks setup with Team Foundation Services.  Now every time someone pushes source control to the repository in Team Foundation Services, it will kick off an automated build, run all of the unit tests in the solution , and if they pass the app will be automatically deployed to our Web Site in Windows Azure.  You can monitor the history and status of these automated deployments using the Deployments tab within the Web Site: This enables a really slick continuous delivery workflow, and enables you to build and deploy apps in a really nice way. Developer Analytics: New Relic support for Web Sites + Mobile Services With today’s Windows Azure release we are making it really easy to enable Developer Analytics and Monitoring support with both Windows Azure Web Site and Windows Azure Mobile Services.  We are partnering with New Relic, who provide a great dev analytics and app performance monitoring offering, to enable this - and we have updated the Windows Azure Management Portal to make it really easy to configure. 
Enabling New Relic with a Windows Azure Web Site Enabling New Relic support with a Windows Azure Web Site is now really easy.  Simply navigate to the Configure tab of a Web Site and scroll down to the “developer analytics” section that is now within it: Clicking the “add-on” button will display some additional UI.  If you don’t already have a New Relic subscription, you can click the “view windows azure store” button to obtain a subscription (note: New Relic has a perpetually free tier so you can enable it even without paying anything): Clicking the “view windows azure store” button will launch the integrated Windows Azure Store experience we have within the Windows Azure Management Portal.  You can use this to browse from a variety of great add-on services – including New Relic: Select “New Relic” within the dialog above, then click the next button, and you’ll be able to choose which type of New Relic subscription you wish to purchase.  For this demo we’ll simply select the “Free Standard Version” – which does not cost anything and can be used forever:  Once we’ve signed-up for our New Relic subscription and added it to our Windows Azure account, we can go back to the Web Site’s configuration tab and choose to use the New Relic add-on with our Windows Azure Web Site.  We can do this by simply selecting it from the “add-on” dropdown (it is automatically populated within it once we have a New Relic subscription in our account): Clicking the “Save” button will then cause the Windows Azure Management Portal to automatically populate all of the needed New Relic configuration settings to our Web Site: Deploying the New Relic Agent as part of a Web Site The final step to enable developer analytics using New Relic is to add the New Relic runtime agent to our web app.  We can do this within Visual Studio by right-clicking on our web project and selecting the “Manage NuGet Packages” context menu: This will bring up the NuGet package manager.  You can search for “New Relic” within it to find the New Relic agent.  Note that there is both a 32-bit and 64-bit edition of it – make sure to install the version that matches how your Web Site is running within Windows Azure (note: you can configure your Web Site to run in either 32-bit or 64-bit mode using the Web Site’s “Configuration” tab within the Windows Azure Management Portal): Once we install the NuGet package we are all set to go.  We’ll simply re-publish the web site again to Windows Azure and New Relic will now automatically start monitoring the application Monitoring a Web Site using New Relic Now that the application has developer analytics support with New Relic enabled, we can launch the New Relic monitoring portal to start monitoring the health of it.  We can do this by clicking on the “Add Ons” tab in the left-hand side of the Windows Azure Management Portal.  Then select the New Relic add-on we signed-up for within it.  The Windows Azure Management Portal will provide some default information about the add-on when we do this.  Clicking the “Manage” button in the tray at the bottom will launch a new browser tab and single-sign us into the New Relic monitoring portal associated with our account: When we do this a new browser tab will launch with the New Relic admin tool loaded within it: We can now see insights into how our app is performing – without having to have written a single line of monitoring code.  
The New Relic service provides a ton of great built-in monitoring features allowing us to quickly see: Performance times (including browser rendering speed) for the overall site and individual pages.  You can optionally set alert thresholds to trigger if the speed does not meet a threshold you specify. Information about where in the world your customers are hitting the site from (and how performance varies by region) Details on the latency performance of external services your web apps are using (for example: SQL, Storage, Twitter, etc) Error information including call stack details for exceptions that have occurred at runtime SQL Server profiling information – including which queries executed against your database and what their performance was And a whole bunch more… The cool thing about New Relic is that you don’t need to write monitoring code within your application to get all of the above reports (plus a lot more).  The New Relic agent automatically enables the CLR profiler within applications and automatically captures the information necessary to identify these.  This makes it super easy to get started and immediately have a rich developer analytics view for your solutions with very little effort. If you haven’t tried New Relic out yet with Windows Azure I recommend you do so – I think you’ll find it helps you build even better cloud applications.  Following the above steps will help you get started and deliver you a really good application monitoring solution in only minutes. Service Bus: Support for partitioned queues and topics With today’s release, we are enabling support within Service Bus for partitioned queues and topics. Enabling partitioning enables you to achieve a higher message throughput and better availability from your queues and topics. Higher message throughput is achieved by implementing multiple message brokers for each partitioned queue and topic.  The  multiple messaging stores will also provide higher availability. You can create a partitioned queue or topic by simply checking the Enable Partitioning option in the custom create wizard for a Queue or Topic: Read this article to learn more about partitioned queues and topics and how to take advantage of them today. Billing: New Billing Alert Service Today’s Windows Azure update enables a new Billing Alert Service Preview that enables you to get proactive email notifications when your Windows Azure bill goes above a certain monetary threshold that you configure.  This makes it easier to manage your bill and avoid potential surprises at the end of the month. With the Billing Alert Service Preview, you can now create email alerts to monitor and manage your monetary credits or your current bill total.  To set up an alert first sign-up for the free Billing Alert Service Preview.  Then visit the account management page, click on a subscription you have setup, and then navigate to the new Alerts tab that is available: The alerts tab allows you to setup email alerts that will be sent automatically once a certain threshold is hit.  For example, by clicking the “add alert” button above I can setup a rule to send myself email anytime my Windows Azure bill goes above $100 for the month: The Billing Alert Service will evolve to support additional aspects of your bill as well as support multiple forms of alerts such as SMS.  Try out the new Billing Alert Service Preview today and give us feedback. 
Summary Today’s Windows Azure release enables a ton of great new scenarios, and makes building applications hosted in the cloud even easier. If you don’t already have a Windows Azure account, you can sign-up for a free trial and start using all of the above features today.  Then visit the Windows Azure Developer Center to learn more about how to build apps with it. Hope this helps, Scott P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • Win7 playback of dvr-ms files stutters

    - by Jim Lynn
    I've just had to install Windows 7 on my Media Center machine because my Vista installation had a faulty drive. I've got the latest drivers that I can find - Intel 945GM integrated Graphics, Realtek audio drivers. Things are working OK with one exception. Playback of old recordings, from dvr-ms format files, is choppy. The picture freezes for a fraction of a second, then quickly catches up. The sound is uninterrupted and doesn't pause. These freezes happen once every 5 seconds or so. It's very regular. Playback of Live TV from the digital tuner is perfectly smooth. DVD playback is perfectly smooth. As an experiment, I used the MPEG editing package VideoReDo to create a small test file in three different formats. This program takes the raw MPEG streams and repackages them into the desired container. I took the same clip and created three files in three formats: dvr-ms (Microsoft's old recorded TV format); mpg (standard MPEG); and ts (raw MPEG transport stream of the kind often produced by PVRs). When these three files are played back under Windows 7, the mpg and ts files play smoothly, but the dvr-ms file stutters. The last piece of data I have is that two other Windows 7 machines can play back dvr-ms files smoothly with no stuttering. One is a netbook, with less grunt than the media centre. So there must be something specific about my Media Center machine that's causing the problem. Does anyone have any idea where I can look now? I don't know much about AV software, codecs, filter graphs etc. but I suspect that's where the problem lies. Rendering the video isn't the problem, but extracting the streams is. How would I go about diagnosing the problem? Edited to add: I just used the GraphStudio tool to look at the filter graph on the offending PC. The filter graph it uses by default for dvr-ms looks identical to the other machines, and, interestingly, when I play the files using GraphStudio they run smoothly. Under Windows Media Player and Windows Media Center they stutter. I'd like to see the filter graph for WMP but GraphStudio won't show it. It looks like WMP and WMC are using a different decoding path to GraphStudio. Edited again to add: Today I purchased a new HDTV. The same Media Center driving the TV at 1080p is now playing back the old Recorded TV files smoothly, without stuttering. So whatever the cause of the original problem, using a different resolution seems to have removed the problem. It might also explain why nobody else has had this problem. I doubt many people use Media Centre with a 14in portable TV.

    Read the article

  • Organize mail with Evolution & Gmail IMAP

    - by Phuong Nguyen
    With Thunderbird & Gmail POP, things are very easy. I can create a filter that will automatically move mail to the appropriate sub-folder. However, with Evolution & Gmail IMAP, I find things very messy. I cannot figure out how to create a filter that will automatically move my email to a sub-folder. Please advise. Thanks

    Read the article

  • Implement DNS filtering

    - by Michele
    I was wondering what would be the best way to implement DNS filtering. The scenario is this: we need to set up two custom DNS servers for our company's computers so that we can filter DNS requests for certain domains and return custom IPs, and forward all other requests to our ISP's DNS servers. We were thinking about using either PowerDNS or MyDNS because they support MySQL out of the box and we need to change the list of domains to filter quite often. Thanks a lot for your help! :)
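
    Not PowerDNS or MyDNS themselves, but for illustration here is a toy Python sketch of the override-then-forward idea using the dnslib package; the domain, addresses, port and upstream resolver are hypothetical placeholders.

        import socket
        from dnslib import A, DNSRecord, QTYPE, RR

        OVERRIDES = {"blocked.example.com.": "10.0.0.53"}   # domain -> custom IP
        UPSTREAM = ("192.0.2.1", 53)                        # your ISP's resolver would go here

        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.bind(("0.0.0.0", 5353))                        # unprivileged port for testing

        while True:
            data, client = sock.recvfrom(512)
            request = DNSRecord.parse(data)
            qname = str(request.q.qname)
            if qname in OVERRIDES and request.q.qtype == QTYPE.A:
                reply = request.reply()
                reply.add_answer(RR(qname, QTYPE.A, rdata=A(OVERRIDES[qname]), ttl=60))
                sock.sendto(reply.pack(), client)
            else:
                # forward the raw query to the upstream resolver and relay its answer
                fwd = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
                fwd.settimeout(2)
                try:
                    fwd.sendto(data, UPSTREAM)
                    sock.sendto(fwd.recvfrom(512)[0], client)
                except socket.timeout:
                    pass
                finally:
                    fwd.close()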

    Read the article

  • BUILDROOT files during RPM generation

    - by khmarbaise
    Currently I have the following spec file to create an RPM. The spec file is generated by a Maven plugin to produce an RPM out of it. The question is: will I find the files which are mentioned in the spec file after the RPM generation inside the BUILDROOT/SPECS/SOURCES/SRPMS structure?

        %define _unpackaged_files_terminate_build 0
        Name: rpm-1
        Version: 1.0
        Release: 1
        Summary: rpm-1
        License: 2009 my org
        Distribution: My App
        Vendor: my org
        URL: www.my.org
        Group: Application/Collectors
        Packager: my org
        Provides: project
        Requires: /bin/sh
        Requires: jre >= 1.5
        Requires: BASE_PACKAGE
        PreReq: dependency
        Obsoletes: project
        autoprov: yes
        autoreq: yes
        BuildRoot: /home/build/.jenkins/jobs/rpm-maven-plugin/workspace/target/it/rpm-1/target/rpm/rpm-1/buildroot

        %description

        %install
        if [ -e $RPM_BUILD_ROOT ]; then
          mv /home/build/.jenkins/jobs/rpm-maven-plugin/workspace/target/it/rpm-1/target/rpm/rpm-1/tmp-buildroot/* $RPM_BUILD_ROOT
        else
          mv /home/build/.jenkins/jobs/rpm-maven-plugin/workspace/target/it/rpm-1/target/rpm/rpm-1/tmp-buildroot $RPM_BUILD_ROOT
        fi
        ln -s /usr/myusr/app $RPM_BUILD_ROOT/usr/myusr/app2
        ln -s /tmp/myapp/somefile $RPM_BUILD_ROOT/tmp/myapp/somefile2
        ln -s name.sh $RPM_BUILD_ROOT/usr/myusr/app/bin/oldname.sh

        %files
        %defattr(-,myuser,mygroup,-)
        %dir "/usr/myusr/app"
        "/usr/myusr/app2"
        "/tmp/myapp/somefile"
        "/tmp/myapp/somefile2"
        "/usr/myusr/app/lib"
        %attr(755,myuser,mygroup) "/usr/myusr/app/bin/start.sh"
        %attr(755,myuser,mygroup) "/usr/myusr/app/bin/filter-version.txt"
        %attr(755,myuser,mygroup) "/usr/myusr/app/bin/name.sh"
        %attr(755,myuser,mygroup) "/usr/myusr/app/bin/name-Linux.sh"
        %attr(755,myuser,mygroup) "/usr/myusr/app/bin/filter.txt"
        %attr(755,myuser,mygroup) "/usr/myusr/app/bin/oldname.sh"
        %dir "/usr/myusr/app/conf"
        %config "/usr/myusr/app/conf/log4j.xml"
        "/usr/myusr/app/conf/log4j.xml.deliver"

        %prep
        echo "hello from prepare"

        %pre -p /bin/sh
        #!/bin/sh
        if [ -s "/etc/init.d/myapp" ]
        then
          /etc/init.d/myapp stop
          rm /etc/init.d/myapp
        fi

        %post
        #!/bin/sh
        #create soft link script to services directory
        ln -s /usr/myusr/app/bin/start.sh /etc/init.d/myapp
        chmod 555 /etc/init.d/myapp

        %preun
        #!/bin/sh
        #the argument being passed in indicates how many versions will exist
        #during an upgrade, this value will be 1, in which case we do not want to stop
        #the service since the new version will be running once this script is called
        #during an uninstall, the value will be 0, in which case we do want to stop
        #the service and remove the /etc/init.d script.
        if [ "$1" = "0" ]
        then
          if [ -s "/etc/init.d/myapp" ]
          then
            /etc/init.d/myapp stop
            rm /etc/init.d/myapp
          fi
        fi;

        %triggerin -- dependency, dependency1
        echo "hello from install"

        %changelog
        * Tue May 23 2000 Vincent Danen <[email protected]> 0.27.2-2mdk
        -update BuildPreReq to include rep-gtk and rep-gtkgnome
        * Thu May 11 2000 Vincent Danen <[email protected]> 0.27.2-1mdk
        -0.27.2
        * Thu May 11 2000 Vincent Danen <[email protected]> 0.27.1-2mdk
        -added BuildPreReq
        -change name from Sawmill to Sawfish

    The problem I found is that the files (filter.txt in particular) are present after the generation process on an Ubuntu system but not on a SuSE system, which might be caused by different rpm versions? Currently we have an integration test which fails based on the non-existence of the file (filter.txt under a buildroot folder?).

    Read the article

  • Capturing same interface with tshark with same or different capture filters

    - by Pankaj Goyal
    I am stuck in a situation where an interface will be captured more than once. Like:

        $ tshark -i rpcap://1.1.1.1/lo -f "ip proto 1" -i rpcap://1.1.1.1/lo -f "ip proto 132"

    or (same filter)

        $ tshark -i rpcap://1.1.1.1/lo -f "ip proto 1" -i rpcap://1.1.1.1/lo -f "ip proto 1"

    What will happen in both cases? In the first case, will the capture filters get OR'ed or AND'ed? In the second case, will the same packets be captured twice?

    Read the article

  • LogParser query to grab only external IP addresses from IIS logs?

    - by Josh
    I'm working on a public website that is used by both external visitors and internal employees. I'm after the external visitor hits, but I can't think of a good way to filter out the internal IP ranges. Using LogParser, what is the best way to filter IISW3C logs by IP range? This is all I've come up with so far, which can't possibly be the best or most efficient way. WHERE [c-ip] NOT LIKE (10.10.%, 10.11.%) Any help is appreciated.
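
    Not LogParser syntax, but as a general illustration of the "drop internal ranges" step, here is a small Python sketch using the standard ipaddress module; the ranges and sample values are hypothetical.

        import ipaddress

        INTERNAL = [ipaddress.ip_network(n) for n in ("10.10.0.0/16", "10.11.0.0/16")]

        def is_external(ip_string):
            ip = ipaddress.ip_address(ip_string)
            return not any(ip in net for net in INTERNAL)

        hits = ["10.10.4.7", "203.0.113.9", "10.11.200.1"]   # c-ip values from a log
        print([ip for ip in hits if is_external(ip)])        # -> ['203.0.113.9']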

    Read the article

  • Is there a way to suppress keyboard sounds picked up by a desktop microphone in Windows 7?

    - by Dave Andersen
    The problem I'd like to solve is that my desktop microphone picks up all my keystrokes very loudly compared with my voice. Even just lightly tapping a key without depressing it causes a loud click to be picked up. I'd like a way to filter out this type of sound, while picking up voice normally. Is there any software input equalizer/filter that could do this? Or alternatively, some sort of hardware hack?

    Read the article

  • IIS EventLog Errors

    - by chris
    I keep getting this error in my event viewer on IIS 6. I'm trying to figure out if my error resets my connection (maybe recycles the worker processes?). The error is: An attempt was made to load filter 'C:\Program Files\Software Artisans\FileUp \FileUpIsapi.dll' but it requires the SF_NOTIFY_READ_RAW_DATA filter notification and this notification is not supported in Worker Process Isolation Mode. For more information, see Help and Support Center at http://go.microsoft.com/fwlink /events.asp.

    Read the article

  • removing specific warnings in Apache error.log

    - by Tereno
    Here's my situation. I have a directive that goes something like this: Options MultiViews FollowSymLinks I have a location directive where I have Options +Includes This causes warnings in my Apache error log: mod_include: Options +Includes (or IncludesNoExec) wasn't set, INCLUDES filter removed These warnings are gone when I add +Includes to my options, but I don't want to. I want to have the INCLUDES filter removed, but without the warnings in the error log. If there's something unclear, feel free to ask. Thanks

    Read the article

  • Linux service --status-all shows "Firewall is stopped." what service does firewall refer to?

    - by codewaggle
    I have a development server with the lamp stack running CentOS: [Prompt]# cat /etc/redhat-release CentOS release 5.8 (Final) [Prompt]# cat /proc/version Linux version 2.6.18-308.16.1.el5xen ([email protected]) (gcc version 4.1.2 20080704 (Red Hat 4.1.2-52)) #1 SMP Tue Oct 2 22:50:05 EDT 2012 [Prompt]# yum info iptables Loaded plugins: fastestmirror Loading mirror speeds from cached hostfile * base: mirror.anl.gov * extras: centos.mirrors.tds.net * rpmfusion-free-updates: mirror.us.leaseweb.net * rpmfusion-nonfree-updates: mirror.us.leaseweb.net * updates: mirror.steadfast.net Installed Packages Name : iptables Arch : x86_64 Version : 1.3.5 Release : 9.1.el5 Size : 661 k Repo : installed .... Snip.... When I run: service --status-all Part of the output looks like this: .... Snip.... httpd (pid xxxxx) is running... Firewall is stopped. Table: filter Chain INPUT (policy DROP) num target prot opt source destination 1 RH-Firewall-1-INPUT all -- 0.0.0.0/0 0.0.0.0/0 Chain FORWARD (policy DROP) num target prot opt source destination 1 RH-Firewall-1-INPUT all -- 0.0.0.0/0 0.0.0.0/0 Chain OUTPUT (policy ACCEPT) num target prot opt source destination Chain RH-Firewall-1-INPUT (2 references) ....Snip.... iptables has been loaded to the kernel and is active as represented by the rules being displayed. Checking just the iptables returns the rules just like status all does: [Prompt]# service iptables status Table: filter Chain INPUT (policy DROP) num target prot opt source destination 1 RH-Firewall-1-INPUT all -- 0.0.0.0/0 0.0.0.0/0 Chain FORWARD (policy DROP) num target prot opt source destination 1 RH-Firewall-1-INPUT all -- 0.0.0.0/0 0.0.0.0/0 Chain OUTPUT (policy ACCEPT) num target prot opt source destination Chain RH-Firewall-1-INPUT (2 references) .... Snip.... Starting or restarting iptables indicates that the iptables have been loaded to the kernel successfully: [Prompt]# service iptables restart Flushing firewall rules: [ OK ] Setting chains to policy ACCEPT: filter [ OK ] Unloading iptables modules: [ OK ] Applying iptables firewall rules: [ OK ] Loading additional iptables modules: ip_conntrack_netbios_n[ OK ] [Prompt]# service iptables start Flushing firewall rules: [ OK ] Setting chains to policy ACCEPT: filter [ OK ] Unloading iptables modules: [ OK ] Applying iptables firewall rules: [ OK ] Loading additional iptables modules: ip_conntrack_netbios_n[ OK ] I've googled "Firewall is stopped." and read a number of iptables guides as well as the RHEL documentation, but no luck. As far as I can tell, there isn't a "Firewall" service, so what is the line "Firewall is stopped." referring to?

    Read the article

  • Sendmail: Use SMART_HOST for local mail

    - by bradlis7
    We are running an outbound SPAM filter, and we'd like to be able to have the mail sent through that filter, even when it's to a local user. Is there a way to force sendmail to use the SMART_HOST, even when it is going to be delivered locally?

    Read the article

  • How can I configure OSX/Windows7 to send ALL traffic though VPN tunnel?

    - by lrrrgg
    While connected to a VPN (SwissVPN service), a content filter at a site I'm working at blocked a web page. This was perplexing, since the local site's filter should not be able to see my traffic, right? So I assume my web browsing activity was not going through the VPN tunnel. How can I configure the OS to send ALL traffic though the currently connected VPN tunnel? I'm using OSX Lion and Windows 7. Thanks!

    Read the article

  • Complex includes/excludes with rsync

    - by brianmathis
    I'm trying to work out the rsync filter syntax to perform complex include/excludes, and I'm trying to achieve the following:

        Include /
        Exclude /home
        Include /home/user1/*

    I've tried many variations on the filter syntax, and despite reading the man page many times, I cannot get this sort of effect. Rsync filters seem to be very powerful, and I find it hard to believe they couldn't handle a common scenario such as this.

    Read the article

  • Symfony/Doctrine/SfGuardPlugin: Redirect to requested page (route), and not referrer

    - by Prasad
    I want to be able to take the user to the requested page after login, but this does not happen with sfGuard. ** My Register action requires SignIn ;) ** On the listing page [http://cim/frontend_dev.php/] - user clicks the 'Register' link [@register = register/index] - user is taken to 'Signin' page provided by sfGuard - after sign-in, user is taken back to the Listing page (instead of Register) This is quite annoying! But logical, because the referrer is the listing page. How can I change logic to make @register the referrer? Pl help. thanks public function executeSignin($request) { $user = $this->getUser(); $this->logMessage('Signin>>> form - isAuth() '.$user->isAuthenticated(), 'info'); if ($user->isAuthenticated()) { $this->getUser()->setAttribute('tenant', $this->getUser()->getGuardUser()->sfuser->Tenant->getID()); return $this->redirect($user->getReferer($request->getReferer())); } $class = sfConfig::get('app_sf_guard_plugin_signin_form', 'sfGuardFormSignin'); $this->form = new $class(); $referer = $user->getReferer($request->getReferer()); $this->logMessage('Signin>>> referer: '.$referer, 'info'); $this->logMessage('Signin>>> referer: '.$request->getReferer(), 'info'); if ($request->isMethod('post')) { $this->form->bind($request->getParameter('signin')); if ($this->form->isValid()) { $values = $this->form->getValues(); $this->getUser()->signin($values['user'], array_key_exists('remember', $values) ? $values['remember'] : false); $this->getUser()->setAttribute('tenant', $this->getUser()->getGuardUser()->sfuser->Tenant->getID()); $this->logMessage('Signin>>> sfUrl | @homepage: '.sfConfig::get('app_sf_guard_plugin_success_signin_url','@homepage'), 'info'); return $this->redirect("" != $referer ? $referer : sfConfig::get('app_sf_guard_plugin_success_signin_url','@homepage')); } } else { if ($request->isXmlHttpRequest()) { $this->getResponse()->setHeaderOnly(true); $this->getResponse()->setStatusCode(401); return sfView::NONE; } // if we have been forwarded, then the referer is the current URL // if not, this is the referer of the current request $user->setReferer($this->getContext()->getActionStack()->getSize() > 1 ? 
$request->getUri() : $request->getReferer()); $this->logMessage('Signin>>> oldy: '.$request->getUri(), 'info'); $this->logMessage('Signin>>> oldy: '.$request->getReferer(), 'info'); $module = sfConfig::get('sf_login_module'); if ($this->getModuleName() != $module) { return $this->redirect($module.'/'.sfConfig::get('sf_login_action')); } $this->getResponse()->setStatusCode(401); } } Trace: May 27 10:10:14 symfony [info] {sfPatternRouting} Connect sfRoute "sf_guard_signin" (/login) May 27 10:10:14 symfony [info] {sfPatternRouting} Connect sfRoute "sf_guard_signout" (/logout) May 27 10:10:14 symfony [info] {sfPatternRouting} Connect sfRoute "sf_guard_password" (/request_password) May 27 10:10:14 symfony [info] {sfPatternRouting} Match route "register" (/register) for /register with parameters array ( 'module' => 'register', 'action' => 'index',) May 27 10:10:14 symfony [info] {sfFilterChain} Executing filter "sfGuardRememberMeFilter" May 27 10:10:14 symfony [info] {sfFilterChain} Executing filter "sfRenderingFilter" May 27 10:10:14 symfony [info] {sfFilterChain} Executing filter "sfExecutionFilter" May 27 10:10:14 symfony [info] {registerActions} Call "registerActions->executeIndex()" May 27 10:10:14 symfony [info] {sfFrontWebController} Redirect to "http://cim/frontend_dev.php/login" May 27 10:10:14 symfony [info] {sfWebResponse} Send status "HTTP/1.1 302 Found" May 27 10:10:14 symfony [info] {sfWebResponse} Send header "Location: http://cim/frontend_dev.php/login" May 27 10:10:14 symfony [info] {sfWebResponse} Send header "Content-Type: text/html; charset=utf-8" May 27 10:10:14 symfony [info] {sfWebDebugLogger} Configuration 13.39 ms (9) May 27 10:10:14 symfony [info] {sfWebDebugLogger} Factories 50.02 ms (1) May 27 10:10:14 symfony [info] {sfWebDebugLogger} Action "register/index" 1.94 ms (1) May 27 10:10:14 symfony [info] {sfWebResponse} Send content (104 o) May 27 10:10:16 symfony [info] {sfPatternRouting} Connect sfRoute "sf_guard_signin" (/login) May 27 10:10:16 symfony [info] {sfPatternRouting} Connect sfRoute "sf_guard_signout" (/logout) May 27 10:10:16 symfony [info] {sfPatternRouting} Connect sfRoute "sf_guard_password" (/request_password) May 27 10:10:16 symfony [info] {sfPatternRouting} Match route "sf_guard_signin" (/login) for /login with parameters array ( 'module' => 'sfGuardAuth', 'action' => 'signin',) May 27 10:10:16 symfony [info] {sfFilterChain} Executing filter "sfGuardRememberMeFilter" May 27 10:10:16 symfony [info] {sfFilterChain} Executing filter "sfRenderingFilter" May 27 10:10:16 symfony [info] {sfFilterChain} Executing filter "sfExecutionFilter" May 27 10:10:16 symfony [info] {sfGuardAuthActions} Call "sfGuardAuthActions->executeSignin()" May 27 10:10:16 symfony [info] {sfGuardAuthActions} Signin>>> form - isAuth() May 27 10:10:16 symfony [info] {sfGuardAuthActions} Signin>>> referer: http://cim/frontend_dev.php/ May 27 10:10:16 symfony [info] {sfGuardAuthActions} Signin>>> referer: http://cim/frontend_dev.php/ May 27 10:10:16 symfony [info] {sfGuardAuthActions} Signin>>> oldy: http://cim/frontend_dev.php/login May 27 10:10:16 symfony [info] {sfGuardAuthActions} Signin>>> oldy: http://cim/frontend_dev.php/ May 27 10:10:16 symfony [info] {sfPHPView} Render "D:/projects/cim/plugins/sfDoctrineGuardPlugin/modules/sfGuardAuth/templates/signinSuccess.php" May 27 10:10:16 symfony [info] {sfPHPView} Decorate content with "D:\projects\cim\apps\frontend\templates/layout.php" May 27 10:10:16 symfony [info] {sfPHPView} Render 
"D:\projects\cim\apps\frontend\templates/layout.php" May 27 10:10:16 symfony [info] {main} Get slot "title" May 27 10:10:16 symfony [info] {sfWebResponse} Send status "HTTP/1.1 401 Unauthorized" May 27 10:10:16 symfony [info] {sfWebResponse} Send header "Content-Type: text/html; charset=utf-8" May 27 10:10:16 symfony [info] {sfWebDebugLogger} Configuration 16.06 ms (10) May 27 10:10:16 symfony [info] {sfWebDebugLogger} Factories 50.00 ms (1) May 27 10:10:16 symfony [info] {sfWebDebugLogger} Action "sfGuardAuth/signin" 14.53 ms (1) May 27 10:10:16 symfony [info] {sfWebDebugLogger} View "Success" for "sfGuardAuth/signin" 34.44 ms (1) May 27 10:10:16 symfony [info] {sfWebResponse} Send content (38057 o)

    Read the article

  • Using RIA Services FilterDescriptor from code behind

    - by Fermin
    Hi, I was wondering if it's possible to use the FilterDescriptor control from code behind? On the page load of my form I set the datasource of a grid in the code behind, not using a DomainDataSource control, like: TestDomainContext context = new TestDomainContext(); dataGridEmployees.ItemsSource = context.EmployeePositions; context.Load(context.GetEmployeesWithPositionQuery()); I have a textbox on my page that the user can enter into to filter on employee position. Is it now possible to add FilterDescriptor to the source of the DataGrid in code behind? Or would I manually need to filter the results of the context.GetEmployeesWithPositionQuery, for example on KeyUp event of the filter TextBox?

    Read the article

  • Laplacian of Gaussian: how does it work? (opencv)

    - by maximus
    Does anybody know how it works and how to do it using OpenCV? The Laplacian can be calculated using OpenCV, but the result is not what I expected. I mean, I expect the image to have approximately constant contrast in background regions, but it is black, and the edges are white. There is also a lot of noise, even after the Gaussian filter. I filter the image using a Gaussian filter and then apply the Laplacian. I think what I want is done in a different way.
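
    For illustration, a minimal sketch of the usual Gaussian-then-Laplacian pipeline using OpenCV's Python bindings; the kernel sizes and file names are placeholders. Keeping the signed result in a float depth and rescaling it for display means flat background regions land near mid-grey instead of black.

        import cv2
        import numpy as np

        img = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)   # hypothetical input image

        blurred = cv2.GaussianBlur(img, (5, 5), 0)            # smooth first to suppress noise
        log = cv2.Laplacian(blurred, cv2.CV_64F, ksize=3)     # signed response, float depth

        # Stretch the signed response into 0..255 for display; with a roughly
        # symmetric response, zero (flat regions) ends up near mid-grey.
        shown = cv2.normalize(log, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
        cv2.imwrite("log_result.png", shown)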

    Read the article

  • Android: Filtering a SimpleCursorAdapter ListView

    - by Diego Tori
    Right now, I'm running into issues trying to implement a FilterQueryProvider in my custom SimpleCursorAdapter, since I'm unsure of what to do in the FilterQueryProvider's runQuery function. In other words, since the query that comprises my ListView basically gets the rowID, name, and a third column from my databases's table, I want to be able to filter the cursor based on the partial value of the name column. However, I am uncertain of whether I can do this directly from runQuery without expanding my DB class since I want to filter the existing cursor, or will I have to create a new query function in my DB class that partially searches my name column, and if so, how would I go about creating the query statement while using the CharSequence constraint argument in runQuery? I am also concerned about the performance issues associated with trying to run multiple queries based on partial text since the DB table in question has about 1300-1400 rows. In other words, would I run into a bottleneck trying to filter the cursor?

    Read the article

  • Django Managers

    - by owca
    I have the following models code:

        from django.db import models
        from categories.models import Category

        class MusicManager(models.Manager):
            def get_query_set(self):
                return super(MusicManager, self).get_query_set().filter(category='Music')

            def count_music(self):
                return self.all().count()

        class SportManager(models.Manager):
            def get_query_set(self):
                return super(SportManager, self).get_query_set().filter(category='Sport')

        class Event(models.Model):
            title = models.CharField(max_length=120)
            category = models.ForeignKey(Category)

            objects = models.Manager()
            music = MusicManager()
            sport = SportManager()

    Now, by registering MusicManager() and SportManager(), I am able to call Event.music.all() and Event.sport.all() queries. But how can I create Event.music.count()? Should I call self.all() in the count_music() function of MusicManager to query only elements with the 'Music' category, or do I still need to filter them by category first?
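
    For illustration (continuing the model code above, and assuming a Django version that still uses get_query_set()): because self.all() goes through the overridden get_query_set(), a count taken from the manager is already restricted to that category, so no extra filtering is needed.

        class MusicManager(models.Manager):
            def get_query_set(self):
                return super(MusicManager, self).get_query_set().filter(category='Music')

            def count_music(self):
                # self.all() already runs through get_query_set(), so this
                # counts only 'Music' events.
                return self.all().count()

        # Event.music.count() and Event.music.count_music() should therefore
        # return the same, already-filtered number.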

    Read the article
