Search Results

Search found 3841 results on 154 pages for 'daily deals india'.

Page 56/154

  • Search-friendly way to store checkbox values in MySQL?

    - by Alex
    What is a search-friendly way to store checkbox values in the database? Currently, the checkboxes are processed as an array and the values are separated by a ";", as such:
        <input type="checkbox" name="frequency[]" value="Daily"/> Daily
        <input type="checkbox" name="frequency[]" value="Weekly"/> Weekly
        <input type="checkbox" name="frequency[]" value="Monthly"/> Monthly
    The PHP backend runs implode(';', $frequency) and adds the string to the database. This works fine, but it's a nightmare when it comes to searching. Is there a better way to approach this?
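
    One search-friendly alternative is to normalise the frequencies into their own table instead of imploding them into a single column. A minimal sketch of the idea (Python with SQLite purely for illustration; the table and column names are invented):

        import sqlite3

        conn = sqlite3.connect(":memory:")  # stand-in for the real MySQL database
        conn.executescript("""
            CREATE TABLE entries (id INTEGER PRIMARY KEY, name TEXT);
            -- one row per checked box instead of 'Daily;Weekly;Monthly' in one column
            CREATE TABLE entry_frequencies (
                entry_id  INTEGER REFERENCES entries(id),
                frequency TEXT,
                PRIMARY KEY (entry_id, frequency)
            );
        """)

        # saving a submitted form: $frequency = ['Daily', 'Weekly'] in the PHP version
        entry_id = conn.execute("INSERT INTO entries (name) VALUES ('newsletter')").lastrowid
        conn.executemany(
            "INSERT INTO entry_frequencies (entry_id, frequency) VALUES (?, ?)",
            [(entry_id, f) for f in ("Daily", "Weekly")],
        )

        # searching is now a plain equality match instead of LIKE '%Daily%'
        rows = conn.execute(
            "SELECT e.name FROM entries e "
            "JOIN entry_frequencies ef ON ef.entry_id = e.id "
            "WHERE ef.frequency = ?", ("Daily",)
        ).fetchall()
        print(rows)   # [('newsletter',)]

    With this layout the frequency column can carry an ordinary index, and adding a new frequency value later needs no schema change.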

    Read the article

  • Architecture for analysing search result impressions/clicks to improve future searches

    - by Hais
    We have a large database of items (10m+) stored in MySQL and intend to implement search on metadata on these items, taking advantage of something like Sphinx. The dataset will be changing slightly on a daily basis so Sphinx will be re-indexing daily. However we want the algorithm to self-learn and improve search results by analysing impression and click data so that we provide better results for our customers on that search term, and possibly other similar search terms too. I've been reading up on Hadoop and it seems like it has the potential to crunch all this data, although I'm still unsure how to approach it. Amazon has tutorials for compiling impression vs click data using MapReduce but I can't see how to get this data in a useable format. My idea is that when a search term comes in I query Sphinx to get all the matching items from the dataset, then query the analytics (compiled on an hourly basis or similar) so that we know the most popular items for that search term, then cache the final results using something like Memcached, Membase or similar. Am I along the right lines here?
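
    That general flow (full-text matches from Sphinx, re-ordered by aggregated click counts, then cached) is a reasonable starting point. Below is a minimal sketch of the re-ranking step, in Python for illustration; sphinx_query, get_click_counts and the cache object are assumed interfaces, not real APIs:

        import json

        def search(term, sphinx_query, get_click_counts, cache, ttl=3600):
            """Return item ids for `term`, ordered by historical click-through.

            sphinx_query(term)      -> list of matching item ids (relevance order)
            get_click_counts(term)  -> dict {item_id: clicks} from the hourly analytics job
            cache                   -> anything with get(key) / set(key, value, ttl)
            All three are assumed interfaces for the sketch.
            """
            key = "search:" + term.lower()
            cached = cache.get(key)
            if cached is not None:
                return json.loads(cached)

            ids = sphinx_query(term)                 # candidate set from the index
            clicks = get_click_counts(term)          # popularity signal from impression/click logs
            # Stable sort: items with more clicks first, Sphinx relevance breaks ties.
            ranked = sorted(ids, key=lambda i: clicks.get(i, 0), reverse=True)

            cache.set(key, json.dumps(ranked), ttl)  # serve the next hour from cache
            return ranked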

    Read the article

  • What is the best way of doing this? (WCF 4)

    - by Jason Porter
    I have a multi-threaded, continuously running application that connects with multiple devices via TCP/IP sockets and exposes a set of WCF APIs for controlling, monitoring and reporting on these devices. I would like to host this on IIS for the usual reasons of not having to worry about re-starting the app in case of errors. So the issue I have is the main application running in parallel with the WCF services. To accomplish this I use the static AppInitialize class to start a thread which runs the main application loop. The WCF services mostly report on or control the objects shared with this thread. There are two problems that I see with this approach. One is that if the thread dies, IIS has no clue that it should re-start it, so I have to play some tricks with some WCF calls. The other is that the background thread deals with potentially thousands of devices that are connected permanently (typically a thread per socket connection), so I am not sure if IIS is buying me anything in this case. Another approach I am considering is to use WF for the main application that deals with the sockets and host both the WF and my WCF services in IIS using AppFabric. Since I have not used WF or AppFabric, I am reaching out to see if this would be a good approach or whether there are better alternatives.

    Read the article

  • how to delete a line from file using awk filtered by some string

    - by embedded
    I have a file delimited by spaces. I need to write an awk command that receives a host name argument and replaces that host name if it is already defined in the file. It must be a full match, not a partial one: if the file contains the host name localhost, searching for "ho" will fail and the entry will be added to the end of the file. The other option is a delete: again, awk receives a host name argument and should remove the matching entry from the file if it exists. This is what I have so far (it needs some enhancements):
        if [ "$DELETE_FLAG" == "" ]; then
            # In this case the entry should be added or updated
            # if clause deals with updating an existing entry
            # END clause deals with adding a new entry
            awk -F"[ ]" "BEGIN { found = 0;} \
            { \
                if (\$2 == $HOST_NAME) { \
                    print \"$IP_ADDRESS $HOST_NAME\"; \
                    found = 1; \
                } else { \
                    print \$0; \
                } \
            } \
            END { \
                if (found == 0) { \
                    print \"$IP_ADDRESS $HOST_NAME\"; } \
            }" \
            /etc/hosts > /etc/temp_hosts
        else
            # Delete an existing entry
            awk -F'[ ]' '{if($2 != $HOST_NAME) { print $0} }' /etc/hosts > /etc/temp_hosts
        fi
    Thanks
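
    For comparison, here is the same replace-or-append / delete logic written out in Python (purely illustrative, not a drop-in replacement for the awk version); it assumes space-delimited lines with the IP address first and the host name second:

        import sys

        def update_hosts(lines, ip, hostname, delete=False):
            """Return the hosts file lines with `hostname` replaced, appended or removed."""
            out, found = [], False
            for line in lines:
                fields = line.split()
                # exact match on the second field only, so 'ho' never matches 'localhost'
                if len(fields) >= 2 and fields[1] == hostname:
                    found = True
                    if not delete:
                        out.append(f"{ip} {hostname}\n")   # replace the existing entry
                    continue                               # drop the old line either way
                out.append(line)
            if not delete and not found:
                out.append(f"{ip} {hostname}\n")           # not defined yet: append it
            return out

        if __name__ == "__main__":
            ip, hostname = sys.argv[1], sys.argv[2]
            with open("/etc/hosts") as f:
                new_lines = update_hosts(f.readlines(), ip, hostname)
            with open("/etc/temp_hosts", "w") as f:
                f.writelines(new_lines)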

    Read the article

  • insert into table where if not in list

    - by jim smith
    Can anybody help me with the syntax?
        insert into history (company,partnumber,price) values ('blah','IFS0090','0.00')
        if company NOT IN ('blah','blah2','blah3','blah4','blah4') and partnumber='IFS0090';
    Background: I have a history table which stores daily companies, products and prices, but sometimes a company will remove itself for a few days. Complicating the issue, I'm only saving daily CHANGES to prices rather than snapshotting the entire day's list (the data would be huge), so when I display the data the company will still come up with the previous day's price. So I need to do something like this, where a 0.00 price means they're no longer there.
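
    MySQL's INSERT ... VALUES does not take an IF or WHERE clause, but the same effect can be had with INSERT ... SELECT, which inserts the row only when the condition holds. A sketch of the pattern, run through Python/SQLite here just to be self-contained (the table layout is guessed from the question, and on MySQL the SELECT may need a trailing FROM DUAL):

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE history (company TEXT, partnumber TEXT, price TEXT)")
        conn.execute("INSERT INTO history VALUES ('blah2', 'IFS0090', '1.25')")

        company, partnumber = 'blah', 'IFS0090'
        still_listed = ('blah2', 'blah3', 'blah4')   # companies that still carry the part today

        # Insert the 0.00 "no longer there" marker only if the company is not in today's list.
        conn.execute(
            "INSERT INTO history (company, partnumber, price) "
            "SELECT ?, ?, '0.00' "
            "WHERE ? NOT IN (?, ?, ?)",
            (company, partnumber, company) + still_listed,
        )
        # shows the original row plus the newly inserted 0.00 marker
        print(conn.execute("SELECT * FROM history").fetchall())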

    Read the article

  • getting webpage contents using curl_init not working for some links

    - by Manish
    I am using this code to get the contents of an entered URL:
        class MetaTagParser {
            public $metadata;
            private $html;
            private $url;

            public function __construct($url) {
                $this->url = $url;
                $this->html = $this->file_get_contents_curl();
                $this->set_title();
                $this->set_meta_properties();
            }

            public function file_get_contents_curl() {
                $ch = curl_init();
                curl_setopt($ch, CURLOPT_HEADER, 0);
                curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
                curl_setopt($ch, CURLOPT_URL, $this->url);
                curl_setopt($ch, CURLOPT_FOLLOWLOCATION, 1);
                $data = curl_exec($ch);
                curl_close($ch);
                return $data;
            }

            public function set_title() {
                $doc = new DOMDocument();
                @$doc->loadHTML($this->html);
                $nodes = $doc->getElementsByTagName('title');
                $this->metadata['title'] = $nodes->item(0)->nodeValue;
            }
    This class works for some pages, but for a URL like this one - http://www.dnaindia.com/india/report_in-a-first-upa-govt-tweets-the-press_1745346 - when I try to fetch the data I get this error: "Warning: get_meta_tags(http://www.dnaindia.com/india/report_in-a-first-upa-govt-tweets-the-press_1745346): failed to open stream: HTTP request failed! HTTP/1.1 403 Forbidden in C:\xampp\htdocs\prac\index.php on line 52". It is not working - any ideas why this is happening?
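
    A 403 on a page that opens fine in a browser very often just means the site rejects requests that don't look like a browser (no User-Agent header, for instance), which would also explain why get_meta_tags() fails while other pages work. One way to test that theory, sketched in Python with only the standard library (in PHP the equivalent would be setting CURLOPT_USERAGENT on the cURL handle):

        import urllib.request
        import urllib.error

        url = "http://www.dnaindia.com/india/report_in-a-first-upa-govt-tweets-the-press_1745346"

        # Request with a browser-like User-Agent; without the header many sites answer 403.
        req = urllib.request.Request(
            url,
            headers={"User-Agent": "Mozilla/5.0 (compatible; MetaTagParser test)"},
        )
        try:
            with urllib.request.urlopen(req, timeout=10) as resp:
                html = resp.read().decode("utf-8", errors="replace")
                print(resp.status, len(html), "bytes")
        except urllib.error.HTTPError as e:
            print("still refused:", e.code)   # if this also 403s, the block is not UA-based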

    Read the article

  • Pass database data to multiples views-Laravel

    - by user3696018
    I have a database with details of daily sales. To query the database, I have a form in a view with parameters such as date of entry, client and others. The result is shown in another view with the daily details of income, and below that a summary per article of everything entered. I want to pass that summary to yet another view. I tried View::composer, but it only transfers an empty query (I saw it with the debug bar), so an empty view appeared. How can I transfer the data from the database so that this last view is not empty? The second view's HTML is totally different; only the data is the same.

    Read the article

  • Testing Entity Framework applications, pt. 3: NDbUnit

    - by Thomas Weller
    This is the third of a three part series that deals with the issue of faking test data in the context of a legacy app that was built with Microsoft's Entity Framework (EF) on top of an MS SQL Server database – a scenario that can be found very often. Please read the first part for a description of the sample application, a discussion of some general aspects of unit testing in a database context, and of some more specific aspects of the here discussed EF/MSSQL combination. Lately, I wondered how you would ‘mock’ the data layer of a legacy application, when this data layer is made up of an MS Entity Framework (EF) model in combination with a MS SQL Server database. Originally, this question came up in the context of how you could enable higher-level integration tests (automated UI tests, to be exact) for a legacy application that uses this EF/MSSQL combo as its data store mechanism – a not so uncommon scenario. The question sparked my interest, and I decided to dive into it somewhat deeper. What I've found out is, in short, that it's not very easy and straightforward to do it – but it can be done. The two strategies that are best suited to fit the bill involve using either the (commercial) Typemock Isolator tool or the (free) NDbUnit framework. The use of Typemock was discussed in the previous post, this post now will present the NDbUnit approach... NDbUnit is an Apache 2.0-licensed open-source project, and like so many other Nxxx tools and frameworks, it is basically a C#/.NET port of the corresponding Java version (DbUnit namely). In short, it helps you in flexibly managing the state of a database in that it lets you easily perform basic operations (like e.g. Insert, Delete, Refresh, DeleteAll)  against your database and, most notably, lets you feed it with data from external xml files. Let's have a look at how things can be done with the help of this framework. Preparing the test data Compared to Typemock, using NDbUnit implies a totally different approach to meet our testing needs.  So the here described testing scenario requires an instance of an SQL Server database in operation, and it also means that the Entity Framework model that sits on top of this database is completely unaffected. First things first: For its interactions with the database, NDbUnit relies on a .NET Dataset xsd file. See Step 1 of their Quick Start Guide for a description of how to create one. 
With this prerequisite in place then, the test fixture's setup code could look something like this: [TestFixture, TestsOn(typeof(PersonRepository))] [Metadata("NDbUnit Quickstart URL",           "http://code.google.com/p/ndbunit/wiki/QuickStartGuide")] [Description("Uses the NDbUnit library to provide test data to a local database.")] public class PersonRepositoryFixture {     #region Constants     private const string XmlSchema = @"..\..\TestData\School.xsd";     #endregion // Constants     #region Fields     private SchoolEntities _schoolContext;     private PersonRepository _personRepository;     private INDbUnitTest _database;     #endregion // Fields     #region Setup/TearDown     [FixtureSetUp]     public void FixtureSetUp()     {         var connectionString = ConfigurationManager.ConnectionStrings["School_Test"].ConnectionString;         _database = new SqlDbUnitTest(connectionString);         _database.ReadXmlSchema(XmlSchema);         var entityConnectionStringBuilder = new EntityConnectionStringBuilder         {             Metadata = "res://*/School.csdl|res://*/School.ssdl|res://*/School.msl",             Provider = "System.Data.SqlClient",             ProviderConnectionString = connectionString         };         _schoolContext = new SchoolEntities(entityConnectionStringBuilder.ConnectionString);         _personRepository = new PersonRepository(this._schoolContext);     }     [FixtureTearDown]     public void FixtureTearDown()     {         _database.PerformDbOperation(DbOperationFlag.DeleteAll);         _schoolContext.Dispose();     }     ...  As you can see, there is slightly more fixture setup code involved if your tests are using NDbUnit to provide the test data: Because we're dealing with a physical database instance here, we first need to pick up the test-specific connection string from the test assemblies' App.config, then initialize an NDbUnit helper object with this connection along with the provided xsd file, and also set up the SchoolEntities and the PersonRepository instances accordingly. The _database field (an instance of the INdUnitTest interface) will be our single access point to the underlying database: We use it to perform all the required operations against the data store. To have a flexible mechanism to easily insert data into the database, we can write a helper method like this: private void InsertTestData(params string[] dataFileNames) {     _database.PerformDbOperation(DbOperationFlag.DeleteAll);     if (dataFileNames == null)     {         return;     }     try     {         foreach (string fileName in dataFileNames)         {             if (!File.Exists(fileName))             {                 throw new FileNotFoundException(Path.GetFullPath(fileName));             }             _database.ReadXml(fileName);             _database.PerformDbOperation(DbOperationFlag.InsertIdentity);         }     }     catch     {         _database.PerformDbOperation(DbOperationFlag.DeleteAll);         throw;     } } This lets us easily insert test data from xml files, in any number and in a  controlled order (which is important because we eventually must fulfill referential constraints, or we must account for some other stuff that imposes a specific ordering on data insertion). Again, as with Typemock, I won't go into API details here. - Unfortunately, there isn't too much documentation for NDbUnit anyway, other than the already mentioned Quick Start Guide (and the source code itself, of course) - a not so uncommon problem with smaller Open Source Projects. 
Last not least, we need to provide the required test data in xml form. A snippet for data from the People table might look like this, for example: <?xml version="1.0" encoding="utf-8" ?> <School xmlns="http://tempuri.org/School.xsd">   <Person>     <PersonID>1</PersonID>     <LastName>Abercrombie</LastName>     <FirstName>Kim</FirstName>     <HireDate>1995-03-11T00:00:00</HireDate>   </Person>   <Person>     <PersonID>2</PersonID>     <LastName>Barzdukas</LastName>     <FirstName>Gytis</FirstName>     <EnrollmentDate>2005-09-01T00:00:00</EnrollmentDate>   </Person>   <Person>     ... You can also have data from various tables in one single xml file, if that's appropriate for you (but beware of the already mentioned ordering issues). It's true that your test assembly may end up with dozens of such xml files, each containing quite a big amount of text data. But because the files are of very low complexity, and with the help of a little bit of Copy/Paste and Excel magic, this appears to be well manageable. Executing some basic tests Here are some of the possible tests that can be written with the above preparations in place: private const string People = @"..\..\TestData\School.People.xml"; ... [Test, MultipleAsserts, TestsOn("PersonRepository.GetNameList")] public void GetNameList_ListOrdering_ReturnsTheExpectedFullNames() {     InsertTestData(People);     List<string> names =         _personRepository.GetNameList(NameOrdering.List);     Assert.Count(34, names);     Assert.AreEqual("Abercrombie, Kim", names.First());     Assert.AreEqual("Zheng, Roger", names.Last()); } [Test, MultipleAsserts, TestsOn("PersonRepository.GetNameList")] [DependsOn("RemovePerson_CalledOnce_DecreasesCountByOne")] public void GetNameList_NormalOrdering_ReturnsTheExpectedFullNames() {     InsertTestData(People);     List<string> names =         _personRepository.GetNameList(NameOrdering.Normal);     Assert.Count(34, names);     Assert.AreEqual("Alexandra Walker", names.First());     Assert.AreEqual("Yan Li", names.Last()); } [Test, TestsOn("PersonRepository.AddPerson")] public void AddPerson_CalledOnce_IncreasesCountByOne() {     InsertTestData(People);     int count = _personRepository.Count;     _personRepository.AddPerson(new Person { FirstName = "Thomas", LastName = "Weller" });     Assert.AreEqual(count + 1, _personRepository.Count); } [Test, TestsOn("PersonRepository.RemovePerson")] public void RemovePerson_CalledOnce_DecreasesCountByOne() {     InsertTestData(People);     int count = _personRepository.Count;     _personRepository.RemovePerson(new Person { PersonID = 33 });     Assert.AreEqual(count - 1, _personRepository.Count); } Not much difference here compared to the corresponding Typemock versions, except that we had to do a bit more preparational work (and also it was harder to get the required knowledge). But this picture changes quite dramatically if we look at some more demanding test cases: Ok, and what if things are becoming somewhat more complex? Tests like the above ones represent the 'easy' scenarios. They may account for the biggest portion of real-world use cases of the application, and they are important to make sure that it is generally sound. But usually, all these nasty little bugs originate from the more complex parts of our code, or they occur when something goes wrong. So, for a testing strategy to be of real practical use, it is especially important to see how easy or difficult it is to mimick a scenario which represents a more complex or exceptional case. 
The following test, for example, deals with the case that there is some sort of invalid input from the caller: [Test, MultipleAsserts, TestsOn("PersonRepository.GetCourseMembers")] [Row(null, typeof(ArgumentNullException))] [Row("", typeof(ArgumentException))] [Row("NotExistingCourse", typeof(ArgumentException))] public void GetCourseMembers_WithGivenVariousInvalidValues_Throws(string courseTitle, Type expectedInnerExceptionType) {     var exception = Assert.Throws<RepositoryException>(() =>                                 _personRepository.GetCourseMembers(courseTitle));     Assert.IsInstanceOfType(expectedInnerExceptionType, exception.InnerException); } Apparently, this test doesn't need an 'Arrange' part at all (see here for the same test with the Typemock tool). It acts just like any other client code, and all the required business logic comes from the database itself. This doesn't always necessarily mean that there is less complexity, but only that the complexity happens in a different part of your test resources (in the xml files namely, where you sometimes have to spend a lot of effort for carefully preparing the required test data). Another example, which relies on an underlying 1-n relationship, might be this: [Test, MultipleAsserts, TestsOn("PersonRepository.GetCourseMembers")] public void GetCourseMembers_WhenGivenAnExistingCourse_ReturnsListOfStudents() {     InsertTestData(People, Course, Department, StudentGrade);     List<Person> persons = _personRepository.GetCourseMembers("Macroeconomics");     Assert.Count(4, persons);     Assert.ForAll(         persons,         @p => new[] { 10, 11, 12, 14 }.Contains(@p.PersonID),         "Person has none of the expected IDs."); } If you compare this test to its corresponding Typemock version, you immediately see that the test itself is much simpler, easier to read, and thus much more intention-revealing. The complexity here lies hidden behind the call to the InsertTestData() helper method and the content of the used xml files with the test data. And also note that you might have to provide additional data which are not even directly relevant to your test, but are required only to fulfill some integrity needs of the underlying database. Conclusion The first thing to notice when comparing the NDbUnit approach to its Typemock counterpart obviously deals with performance: Of course, NDbUnit is much slower than Typemock. Technically,  it doesn't even make sense to compare the two tools. But practically, it may well play a role and could or could not be an issue, depending on how much tests you have of this kind, how often you run them, and what role they play in your development cycle. Also, because the dataset from the required xsd file must fully match the database schema (even in parts that otherwise wouldn't be relevant to you), it can be quite cumbersome to be in a team where different people are working with the database in parallel. 
My personal experience is – as already said in the first part – that Typemock gives you a better development experience in a 'dynamic' scenario (when you're working in some kind of TDD-style, you're oftentimes executing the tests from your dev box, and your database schema changes frequently), whereas the NDbUnit approach is a good and solid solution in more 'static' development scenarios (when you need to execute the tests less frequently or only on a separate build server, and/or the underlying database schema can be kept relatively stable), for example some variations of higher-level integration or User-Acceptance tests. But in any case, opening Entity Framework based applications for testing requires a fair amount of resources, planning, and preparational work – it's definitely not the kind of stuff that you would call 'easy to test'. Hopefully, future versions of EF will take testing concerns into account. Otherwise, I don't see too much of a future for the framework in the long run, even though it's quite popular at the moment... The sample solution A sample solution (VS 2010) with the code from this article series is available via my Bitbucket account from here (Bitbucket is a hosting site for Mercurial repositories. The repositories may also be accessed with the Git and Subversion SCMs - consult the documentation for details. In addition, it is possible to download the solution simply as a zipped archive – via the 'get source' button on the very right.). The solution contains some more tests against the PersonRepository class, which are not shown here. Also, it contains database scripts to create and fill the School sample database. To compile and run, the solution expects the Gallio/MbUnit framework to be installed (which is free and can be downloaded from here), the NDbUnit framework (which is also free and can be downloaded from here), and the Typemock Isolator tool (a fully functional 30day-trial is available here). Moreover, you will need an instance of the Microsoft SQL Server DBMS, and you will have to adapt the connection strings in the test projects App.config files accordingly.

    Read the article

  • Migrate IMAP account between providers - client access only

    - by Pekka
    I have an IMAP e-mail account with my old provider and a new, empty IMAP account with the new provider. Is there a tool, or a way in Thunderbird, to migrate the e-mail data from one account to the other? I'm a bit wary about just doing a drag & drop in Thunderbird because it's quite a lot of data, and I have a deep distrust of how Thunderbird deals with IMAP data.
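
    There are dedicated tools for this (imapsync is the usual suggestion) that avoid dragging everything through a desktop client. If you would rather script it yourself, here is a bare-bones sketch using Python's standard imaplib; the host names and credentials are placeholders, and message flags and internal dates are not preserved:

        import imaplib
        import time

        # Placeholder servers and credentials -- substitute the real account details.
        SOURCE = ("imap.old-provider.example", "user@old.example", "secret1")
        TARGET = ("imap.new-provider.example", "user@new.example", "secret2")

        def copy_folder(src, dst, folder="INBOX"):
            """Copy every message in `folder` from src to dst as raw RFC822 data."""
            src.select(folder, readonly=True)            # never modify the old account
            typ, data = src.search(None, "ALL")
            for num in data[0].split():
                typ, parts = src.fetch(num, "(RFC822)")
                raw = parts[0][1]
                # Original flags and internal dates are dropped in this minimal version.
                dst.append(folder, "", imaplib.Time2Internaldate(time.time()), raw)

        src = imaplib.IMAP4_SSL(SOURCE[0])
        src.login(SOURCE[1], SOURCE[2])
        dst = imaplib.IMAP4_SSL(TARGET[0])
        dst.login(TARGET[1], TARGET[2])
        copy_folder(src, dst)
        src.logout()
        dst.logout()

    For anything beyond a one-off move, a purpose-built tool will handle flags, folder hierarchies and retries far better than this sketch.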

    Read the article

  • Scheduled Mail in asp.net

    - by Chendur Paandian
    Hi guys, my application deals with a scheduled-mail concept, i.e. every morning at 6.00 am my users get a reminder mail about their activities for the day. I don't know how to do this. Many have told me to use a Windows service, but I will be hosting my website on a shared server and may not get the rights to run a Windows service. Is there any DLL for sending mails at a scheduled time from an asp.net application? Please help me out, guys.

    Read the article

  • how do i enable ftp on a cisco NAC 3310

    - by kyoung
    I'm trying to FTP updates to it, but I can't seem to find where the .conf that deals with FTP is, so that I can enable/configure it. When I attempt to connect to the NAC from my desktop via WinSCP (using FTP) I get an error saying the connection is being actively refused.

    Read the article

  • List of all IBM motherboard models with a certain socket?

    - by Ricket
    I just got a really good deal on two Intel Quad Core Xeon L5420 processors and I have access to other deals on bare IBM servers (case+motherboard, no processor/ram/hdd). How can I easily find out what server or motherboard models will be compatible with this processor? I am ideally looking for a dual-processor motherboard and I see that it is socket LGA771. So I guess the underlying question is, how can I find what IBM motherboards (and servers) have dual socket LGA 771?

    Read the article

  • What laptop for dev with VS

    - by Gareth
    I'm looking to spend up to £500 on a laptop in the next few days, and am now after a bit of advice. This laptop will be used to develop personal projects using VS2010 and SQL Server Express. I know this sort of question has been asked plenty of times before; however, I'm looking for specifics: has anyone bought a laptop recently to run VS on, and if so, what did you get? Has anyone seen any good deals on laptops that fit my price range? Any suggestions appreciated. Thanks

    Read the article

  • How DNS server resolves when web servers are geographically distributed

    - by Supratik
    Hi. The domain abc.com has two web servers located in two different locations, one in India and another in Malaysia. If requests are handled by the servers depending on the location from which they originate, then how does DNS resolve the name for such geographically distributed servers when my client system is configured to use a local DNS server in India or a DNS server in Malaysia? Warm regards, Supratik

    Read the article

  • Arch linux as a wireless router with a USB modem

    - by orlox
    I'm trying to act as an access point to share the internet I get from a USB modem on Arch Linux. Most of what I've found so far deals with installing particular distributions like DD-WRT for this purpose, but I haven't been able to find any comprehensive solution. Has anyone done this before? I don't know how relevant it might be, but my wireless card is a Broadcom device.

    Read the article

  • linux macvlan - stop from broadcasting hostname

    - by staticfloat
    I am trying to simulate two different computers on one box using the macvlan module (which is awesome, by the way), but I have one small problem: when I create the macvlan, Ubuntu 11.10 very helpfully starts broadcasting its hostname on both interfaces, creating an amazing amount of confusion for everything that deals with hostnames. Does anyone know how to stop Ubuntu from advertising its hostname on a certain interface? Thanks!

    Read the article

  • How to have Vista OS send documents to printer without waiting in front?

    - by Greenleader
    Hi, I have a print server on our old printer, and I have a problem because Vista has its own queue. I want to bypass this queue and send everything straight to the printer, so that the print server deals with the queue and not Vista. The problem shows up when a second document is printed from the same computer after the first one: Vista is still waiting for information that the first job has finished even 5 minutes after it was REALLY finished.

    Read the article

  • Internet latency map

    - by David
    I would like to see a latency map showing the lowest latencies achieved between various destinations around the world - for example, the lowest latency achieved between Denmark and India. This could be used, for instance, when planning where to place a server farm for online games.

    Read the article

  • Logging process' CPU utilisation

    - by frinky
    Hello everyone, the following problem concerns MS Windows Server 2008 R2 with Hyper-V: does anybody have an idea how to log the processes that cause CPU utilisation of more than X percent? I want to uncover an unexpected CPU load peak problem which occurs once a day in a regular fashion. Since it's a terminal server, all network connections time out and bandwidth drops to zero.
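
    The built-in route on Windows Server is a Performance Monitor (perfmon) data collector set on the Process / % Processor Time counters, which can run all day and be inspected after the peak. As a quick-and-dirty alternative, here is a polling sketch using the third-party psutil package; the threshold, interval and log path are arbitrary choices:

        import time
        from datetime import datetime

        import psutil  # third-party: pip install psutil

        THRESHOLD = 25.0      # log any process above 25% CPU
        INTERVAL = 5          # seconds between samples
        LOGFILE = "cpu_peaks.log"

        # First pass establishes a baseline; later cpu_percent(None) calls are non-blocking.
        for p in psutil.process_iter():
            try:
                p.cpu_percent(None)
            except psutil.Error:
                pass

        while True:
            time.sleep(INTERVAL)
            for p in psutil.process_iter():
                try:
                    usage = p.cpu_percent(None)       # percent since the previous call
                    if usage >= THRESHOLD:
                        with open(LOGFILE, "a") as f:
                            f.write(f"{datetime.now().isoformat()} pid={p.pid} "
                                    f"name={p.name()} cpu={usage:.1f}%\n")
                except (psutil.NoSuchProcess, psutil.AccessDenied):
                    continue                          # process exited or is protected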

    Read the article
