Search Results

Search found 18096 results on 724 pages for 'let me be'.

Page 418/724

  • Calculating holidays

    - by Ralph Shillington
    A number of holidays move around from year to year. For example, in Canada, Victoria Day (a.k.a. the May two-four weekend) is the Monday before May 25th, and Thanksgiving is the second Monday of October. I've been using variations on this LINQ query to get the date of a holiday for a given year:

        var year = 2011;
        var month = 10;
        var dow = DayOfWeek.Monday;
        var instance = 2;
        var day = (from d in Enumerable.Range(1, DateTime.DaysInMonth(year, month))
                   let sample = new DateTime(year, month, d)
                   where sample.DayOfWeek == dow
                   select sample).Skip(instance - 1).Take(1);

    While this works and is easy enough to understand, I imagine there is a more elegant way of making this calculation than this brute-force approach. Of course, this doesn't touch on holidays such as Easter and the many other lunar-based dates.
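    For illustration, the nth occurrence of a weekday can also be computed arithmetically from the weekday of the first of the month, with no scanning at all. A minimal C# sketch (the helper name is mine):

        using System;

        static class HolidayMath
        {
            // Date of the nth occurrence of a given weekday in a month.
            // No bounds check: asking for a 5th Monday that doesn't exist
            // spills into the next month.
            public static DateTime NthWeekdayOfMonth(int year, int month, DayOfWeek dow, int instance)
            {
                var first = new DateTime(year, month, 1);
                // Days from the 1st to the first occurrence of 'dow' (0..6).
                int offset = ((int)dow - (int)first.DayOfWeek + 7) % 7;
                return first.AddDays(offset + 7 * (instance - 1));
            }
        }

        // Usage: Canadian Thanksgiving, the 2nd Monday of October
        // var day = HolidayMath.NthWeekdayOfMonth(2011, 10, DayOfWeek.Monday, 2); // 2011-10-10

    The "Monday before May 25th" case is the same arithmetic run from a fixed date: the first Monday on or after May 18th is the last Monday on or before May 24th.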


  • How to find the largest common subtree in two given binary search trees?

    - by Bhushan
    Two BSTs (binary search trees) are given. How do you find the largest common subtree of the two?

    EDIT 1: Here is what I have thought. Let:

        r1 = current node of the 1st tree
        r2 = current node of the 2nd tree

    These are the cases I think we need to consider:

    Case 1: r1.data < r2.data. Two subproblems to solve: first, check r1 and r2.left; second, check r1.right and r2.
    Case 2: r1.data > r2.data. Two subproblems to solve: first, check r1.left and r2; second, check r1 and r2.right.
    Case 3: r1.data == r2.data. Again, two cases to consider here: (a) the current node is part of the largest common BST: compute the common subtree size rooted at r1 and r2; (b) the current node is NOT part of the largest common BST: two subproblems to solve: first, solve r1.left and r2.left; second, solve r1.right and r2.right.

    I can think of the cases we need to check, but I am not able to code it as of now. And it is NOT a homework problem. Does it look like one?
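    For illustration, here is a minimal C# transcription of exactly those cases (type and method names are mine; there is no memoization, so overlapping subproblems may be revisited):

        using System;

        class Node { public int Data; public Node Left, Right; }

        static class CommonSubtree
        {
            // Size of the common subtree rooted at both r1 and r2.
            static int RootedSize(Node r1, Node r2)
            {
                if (r1 == null || r2 == null || r1.Data != r2.Data) return 0;
                return 1 + RootedSize(r1.Left, r2.Left) + RootedSize(r1.Right, r2.Right);
            }

            // Largest common subtree found anywhere, following cases 1-3 above.
            public static int Largest(Node r1, Node r2)
            {
                if (r1 == null || r2 == null) return 0;
                if (r1.Data < r2.Data)   // Case 1
                    return Math.Max(Largest(r1, r2.Left), Largest(r1.Right, r2));
                if (r1.Data > r2.Data)   // Case 2
                    return Math.Max(Largest(r1.Left, r2), Largest(r1, r2.Right));
                // Case 3: (a) the subtree is anchored here, or (b) it lies further down.
                return Math.Max(RootedSize(r1, r2),
                       Math.Max(Largest(r1.Left, r2.Left), Largest(r1.Right, r2.Right)));
            }
        }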


  • What, if any, is wrong with this definition of letrec in Scheme?

    - by Lajla
    R5RS gives proposed macro definitions for library forms of syntax: http://schemers.org/Documents/Standards/R5RS/HTML/r5rs-Z-H-10.html#%_sec_7.3 It also defines letrec, in a very complicated way, certainly not how I would define it. I would simply use:

        (define-syntax letrec2
          (syntax-rules ()
            ((letrec2 ((name val) ...) body bodies ...)
             ((lambda () (define name val) ... body bodies ...)))))

    As far as I understand the semantics of letrec (which I use very often as a named let), it works this way. However, as I've had my fair share of debates with philosophers who think they can just disprove special relativity or established phonological theories, I know that when you think you have a simple solution to a complex problem, it's probably WRONG. There has got to be some point where this macro does not satisfy the semantics of letrec, or else they'd probably have used it. In this definition, the definitions are local to the body of the letrec, and they can refer to each other for mutual recursion; I'm not quite sure what (if anything) is wrong.


  • Inserting records in Mysql with INSERT IGNORE and NULL values

    - by Homer1980ar
    I have a partitioned InnoDB table with several fields. I'm trying to avoid duplicates on insert. Let's say:

        Field1 int null
        Field2 int null
        Field3 int null
        Field4 int null
        Field5 int null

    I have created a UNIQUE index on those fields. I try to insert some records with NULL values and then try to reinsert them with MySQL's IGNORE feature. Unfortunately, it duplicates the records when NULL values are involved. If I try with zeros instead of NULLs, everything works, but I do need the nulls there. Any idea? Thanks, Leonardo


  • Batch file search and replace using wildcards?

    - by user329358
    I have an HTML (text) file I am using as a template or source file to create further HTML files. The filename is pg_0001.htm, and it contains a line of code referencing pg_0001.jpg. I want to parse the pg_0001.htm source file, increment and replace the JPEG string (giving "pg_0002.jpg"), and then output the edited .htm file to a new filename, pg_0002.htm. I then take each newly created file (pg_0002.htm, pg_0003.htm, etc.) as the source file and repeat the processing until I have reached my target goal (let's say 100 newly created .htm files, each containing the code to display the corresponding JPEG). It must be done this way (fileX.htm containing fileX.jpg) because other JavaScript uses these incremented filenames as function input. I used to know how to write incrementing batch files many years ago, but I'm old and very rusty now. Can anyone please help me do this? Many thanks in advance. Regards, Harry
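    Not a batch file, but for illustration, the whole generate-and-replace loop is only a few lines in any language; here is a small C# sketch of the same logic (filenames taken from the question). Since only the image reference changes between outputs, each copy can be stamped from the original template rather than chaining file to file:

        using System.IO;

        class MakePages
        {
            static void Main()
            {
                // Read the original template once, then stamp out pg_0002.htm .. pg_0100.htm,
                // rewriting the image reference in each copy.
                string template = File.ReadAllText("pg_0001.htm");
                for (int i = 2; i <= 100; i++)
                {
                    string name = string.Format("pg_{0:D4}", i);   // pg_0002, pg_0003, ...
                    File.WriteAllText(name + ".htm", template.Replace("pg_0001.jpg", name + ".jpg"));
                }
            }
        }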


  • Manually logging in a user without password

    - by Agos
    Hi everybody; I hope you can help me figure out the best way to implement a manual (server-side initiated) login without using the password. Let me explain the workflow:

        1. User registers
        2. "Thank you! An email with an activation link has been sent blablabla" (the account now exists but is marked not enabled)
        3. User opens the email and clicks the link (the account is enabled)
        4. "Thank you! You can now use the site"

    What I'm trying to do is log the user in after he has clicked the email link, so he can start using the website right away. I can't use his password since it's encrypted in the DB; is the only option writing a custom authentication backend?


  • Caffeine and Stimulant Usage

    - by Jon Purdy
    Let's see how many of us fit the old stereotype, shall we?

        - Do you typically use caffeine when programming? During the day or at night?
        - How frequently do you pull all-nighters? Do you use caffeine when you do stay up late?
        - Do you prefer to have a large amount of caffeine all at once, or small amounts over a longer period of time?
        - Do you use energy drinks, 5-hour energy shots, coffee, tea, or caffeine pills?
        - How about other stimulants such as amphetamines? For instance, I've known a programmer who dabbled in speed because they believed it increased their ability to focus on programming, though happily they're clean now and quite honest about the whole experience.

    Share, discuss, find the Ballmer Peak of caffeine, enjoy. Happy Easter.


  • How to extend a website?

    - by eltados
    This is quite a conceptual idea. I would like to create a website that can be extended by different programmers, a bit "à la Facebook". Let me explain: I want to develop a very simple core application that, for example, would store images, and I want to allow external developers to develop web apps that act on those images. Think of an OS that stores files, where you can "install" different programs, for example, to view or edit the files. How can I reproduce this model on a web/cloud platform using an API? I hope this question makes sense to somebody. Thank you in advance.


  • How to develop an app for Mac OS X that keeps reading everything the user types in?

    - by Elomar Nascimento dos Santos
    Hello, everybody. I'm here to ask if any of you know how to develop an app for Mac OS X that keeps reading everything the user types in. An example of an app that implements this behavior is Text Expander. Text Expander reads everything the user types, searching for abbreviations previously added to it. When one of these abbreviations is found, Text Expander replaces the abbreviated form with the entire content related to that abbreviation. So, I would like to know what resource of Objective-C or Cocoa lets you do this kind of stuff. P.S.: Just to mention, I'm not thinking about developing something like a key logger. I'm just curious and thinking about developing a snippet platform.


  • What is a good programming language for testers who are not great programmers?

    - by Brian T Hannan
    We would like to create some simple automated tests that will be created and maintained by testers. Right now we have a tester who can code in any language, but in the future we might want any tester with limited programming knowledge to be able to add or modify the tests. What is a good programming language for testers who are not great programmers, or not programmers at all? Someone suggested Lua, but I looked into Lua and it might be more complicated than other languages would be. Preferably, the language should be interpreted rather than compiled. Let me know what you think.


  • How can I read a JSON file from disk and store it in an array in Swift?

    - by Ezekiel Elin
    I want to read a file from disk in a Swift file. It can be a relative or an absolute path; that doesn't matter. How can I do that? I've been playing with something like this:

        let classesData = NSData.dataWithContentsOfMappedFile("path/to/classes.json")

    It finds the file (i.e. doesn't return nil), but I don't know how to manipulate the returned data and convert it to JSON. It isn't in a string format, and String() isn't working on it.


  • How to initialize F# list when size is unknown, using while..do loop

    - by James Black
    I have a function that will parse the results of a DataReader, and I don't know how many items are returned, so I want to use a while..do loop to iterate over the reader; the outcome should be a list of a certain type.

        (fun reader ->
            [ while reader.Read() do
                new CityType(Id = (reader.GetInt32 0),
                             Name = (reader.GetString 1),
                             StateName = (reader.GetString 2)) ])

    This is what I tried, but the warning I get is:

        This expression should have type 'unit', but has type 'CityType'. Use 'ignore' to discard the result of the expression, or 'let' to bind the result to a name.

    So what is the best way to iterate over a DataReader and create a list?


  • Changing the tint of a bitmap while preserving the overall brightness

    - by MusiGenesis
    I'm trying to write a function that will let me red-shift or blue-shift a bitmap while preserving the overall brightness of the image. Basically, a fully red-shifted bitmap would have the same brightness as the original but be thoroughly red-tinted (i.e. the G and B values would be equal for all pixels). Same for blue-tinting (but with R and G equal). The degree of spectrum shifting needs to vary from 0 to 1. Thanks in advance.
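    For illustration, one way to do this is to blend each pixel toward a pure-tinted pixel of the same luminance; since luminance is a linear combination of the channels, every intermediate blend preserves brightness too. A minimal C# sketch, assuming the standard Y = 0.299R + 0.587G + 0.114B luminance weights (the 1.5 red-boost factor is an arbitrary choice of mine):

        using System;
        using System.Drawing;

        static class Tint
        {
            // Shift a bitmap toward red by t in [0,1], preserving per-pixel luminance.
            // At t = 1 every pixel has G == B (a pure red tint of the same brightness).
            // Sketch only: GetPixel/SetPixel is slow; use LockBits for real images.
            public static void RedShift(Bitmap bmp, double t)
            {
                for (int y = 0; y < bmp.Height; y++)
                for (int x = 0; x < bmp.Width; x++)
                {
                    Color c = bmp.GetPixel(x, y);
                    double lum = 0.299 * c.R + 0.587 * c.G + 0.114 * c.B;

                    // Fully tinted target: boost red, choose G == B so luminance is unchanged.
                    double rT = Math.Min(255.0, lum * 1.5);
                    double gbT = (lum - 0.299 * rT) / 0.701;

                    int r = Clamp((1 - t) * c.R + t * rT);
                    int g = Clamp((1 - t) * c.G + t * gbT);
                    int b = Clamp((1 - t) * c.B + t * gbT);
                    bmp.SetPixel(x, y, Color.FromArgb(c.A, r, g, b));
                }
            }

            static int Clamp(double v) => (int)Math.Max(0.0, Math.Min(255.0, Math.Round(v)));
        }

    Swapping the roles of the channels (boost B, set R == G) gives the blue-shift variant.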


  • Is there a graphics/game engine that supports PC & Mac?

    - by Chris Masterton
    Is there a graphics/game engine that runs on both Mac & PC? I've seen Unity, and that's a possibility; I'm wondering if there are any other choices. Ideally I want to port the same C++ game code to both PC & Mac platforms, but let the underlying game/graphics engine take advantage of the appropriate hardware. Edit: I'm looking at the level of Torque, Gamebryo & Unreal. A commercial solution is perfectly acceptable. Thanks, Chris


  • How to preserve object identity across different VMs

    - by wheleph
    To be specific, let me illustrate the question with a Spring HTTP remoting example. Suppose we have the following implementation of a simple interface:

        public class SearchServiceImpl implements SearchService {
            public SearchJdo processSearch(SearchJdo search) {
                search.name = "a funky name";
                return search;
            }
        }

    SearchJdo is itself a simple POJO. Now when we call the method from a client through HTTP remoting we'll get:

        public class HTTPClient {
            public static void main(final String[] arguments) {
                final ApplicationContext context = new ClassPathXmlApplicationContext(
                        "spring-http-client-config.xml");
                final SearchService searchService =
                        (SearchService) context.getBean("searchService");

                SearchJdo search = new SearchJdo();
                search.name = "myName";
                // this method actually returns the same object it gets as an argument
                SearchJdo search2 = searchService.processSearch(search);
                System.out.println(search == search2); // prints "false"
            }
        }

    The problem is that the search objects are different because of serialization, although from a logical perspective they are the same. The question is whether there is some technique that allows object identity to be supported or emulated across VMs.


  • How can I assign a two-dimensional array to a temporary two-dimensional array in C?

    - by AGeek
    Hi, I am trying to store the contents of a two-dimensional array in a temporary array. How is it possible? I don't want looping here, as it would add extra overhead; any pointer notation would be good.

        struct bucket {
            int nStrings;
            char strings[MAXSTRINGS][MAXWORDLENGTH];
        };

        void func() {
            char **tArray;
            int tLenArray = 0;
            for (i = 0; i < TOTBUCKETS - 1; i++) {
                if (buck[i].nStrings != 0) {
                    tArray = buck[i].strings;
                    tLenArray = buck[i].nStrings;
                }
            }
        }

    The error I am getting is:

        [others@centos htdocs]$ gcc lexorder.c
        lexorder.c: In function 'lexSorting':
        lexorder.c:40: warning: assignment from incompatible pointer type

    Please let me know if this needs more explanation.


  • A question on vectors, pointers and iterators

    - by xbonez
    Guys, I have a midterm examination tomorrow, and I was looking over the sample paper; I'm not sure about this question. Any help would be appreciated. Let v be a vector<Thingie*>, so that each element v[i] contains a pointer to a Thingie. If p is a vector<Thingie*>::iterator, answer the following questions:

        1. What type is p?
        2. What type is *p?
        3. What code provides the address of the actual Thingie?
        4. What code provides the actual Thingie?


  • unique constraint (w/o Trigger) on "one-to-many" relation

    - by elgcom
    To illustrate the problem, let me make an example: a tag_bundle consists of one or more tags. A unique tag combination maps to a unique tag_bundle, and vice versa.

        tag_bundle                tag           tag_bundle_relation
        +---------------+         +--------+    +---------------+--------+
        | tag_bundle_id |         | tag_id |    | tag_bundle_id | tag_id |
        +---------------+         +--------+    +---------------+--------+
        | 1             |         | 100    |    | 1             | 100    |
        +---------------+         | 101    |    | 1             | 101    |
                                  +--------+    +---------------+--------+

    There can't be another tag_bundle having the combination of tag 100 and tag 101. How can I ensure such a unique constraint when executing SQL concurrently, that is, prevent two bundles with the same tag combination from being added at the same time? Adding a simple unique constraint on any one table does not work. Is there any solution other than a trigger or an explicit lock? The only simple way I can come up with is to serialize the tag combination into a string and make that unique:

        tag_bundle (unique on tags)         tag           tag_bundle_relation
        +---------------+---------+         +--------+    +---------------+--------+
        | tag_bundle_id | tags    |         | tag_id |    | tag_bundle_id | tag_id |
        +---------------+---------+         +--------+    +---------------+--------+
        | 1             | 100,101 |         | 100    |    | 1             | 100    |
        +---------------+---------+         | 101    |    | 1             | 101    |
                                            +--------+    +---------------+--------+

    but it doesn't seem like a good way :(


  • What are the uses of svn copy?

    - by nav.jdwdw
    Example:

        $ svn copy foo.txt bar.txt
        A         bar.txt

    When would you use this technique, and why? Will this command (taken from svn's "red book") create a copy of <foo.txt> while preserving its history, to be shared with <bar.txt>? If I'm changing <bar.txt>, what will happen to <foo.txt>? What are the equivalents to this in other modern systems (ClearCase, AccuRev, Perforce)? Clarification: let me emphasize the point I'm searching for: is this a kind of branching at the file level? What happens if you use it in the same branch, i.e. create a copy of a file and then start changing that new file, all in the same branch? I understand that it is also used for tagging, but what interests me is what to expect when performing <svn copy> at the file level.


  • Book for a Windows Application

    - by cateof
    Hello everybody. I want to create a small GUI Windows application that looks like all the other usual apps. I am searching for a book that describes the whole procedure. Let's say an address book application that has a small database, can be minimized to the task bar, does things in the background, and so on. I don't care about the language, but I would prefer to do it in .NET C++. I know it is a very general question, so thanks in advance.


  • Reverse Breadth-First Search in C#

    - by Ngu Soon Hui
    Does anyone have a ready implementation of the reverse breadth-first search algorithm in C#? By reverse breadth-first search, I mean that instead of searching a tree starting from a common node, I want to search the tree from the bottom and gradually converge to a common node. Let's look at the figure below, which is the output of a breadth-first search: [figure not shown] In my reverse breadth-first search, 9, 10, 11 and 12 will be the first few nodes found (their order among themselves is not important, as they are all at the same depth). 5, 6, 7 and 8 are the next few nodes found, and so on. 1 would be the last node found. Any ideas or pointers?
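    For illustration, one straightforward approach is to run an ordinary breadth-first search from the common node, record the visit order, and replay it backwards: every node then comes out before its parent, deepest level first. A minimal C# sketch (the node type is mine; within a level the left-to-right order is reversed as well, which by the description above doesn't matter):

        using System.Collections.Generic;

        class TreeNode
        {
            public int Value;
            public List<TreeNode> Children = new List<TreeNode>();
        }

        static class Search
        {
            // Reverse BFS: deepest discovered nodes first, root last.
            public static List<TreeNode> ReverseBfs(TreeNode root)
            {
                var order = new List<TreeNode>();
                var queue = new Queue<TreeNode>();
                queue.Enqueue(root);
                while (queue.Count > 0)
                {
                    TreeNode n = queue.Dequeue();
                    order.Add(n);
                    foreach (TreeNode child in n.Children)
                        queue.Enqueue(child);
                }
                order.Reverse();  // 12, 11, 10, 9, ..., 1 for the example tree
                return order;
            }
        }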


  • How to Load Oracle Tables From Hadoop Tutorial (Part 5 - Leveraging Parallelism in OSCH)

    - by Bob Hanckel
    Using OSCH: Beyond Hello World

    In the previous post we discussed a “Hello World” example for OSCH, focusing on the mechanics of getting a toy end-to-end example working. In this post we are going to talk about how to make it work for big data loads. We will explain how to optimize an OSCH external table for load, paying particular attention to Oracle’s DOP (degree of parallelism), the number of external table location files we use, and the number of HDFS files that make up the payload. We will provide some rules that serve as best practices when using OSCH. The assumption is that you have read the previous post, have some end-to-end OSCH external tables working, and now want to ramp up the size of the loads.

    Using OSCH External Tables for Access and Loading

    OSCH external tables are no different from any other Oracle external tables. They can be used to access HDFS content using Oracle SQL:

        SELECT * FROM my_hdfs_external_table;

    or to use the same SQL access to load a table in Oracle:

        INSERT INTO my_oracle_table SELECT * FROM my_hdfs_external_table;

    To speed up the load time, you will want to control the degree of parallelism (i.e. DOP) and add two SQL hints:

        ALTER SESSION FORCE PARALLEL DML PARALLEL 8;
        ALTER SESSION FORCE PARALLEL QUERY PARALLEL 8;
        INSERT /*+ append pq_distribute(my_oracle_table, none) */ INTO my_oracle_table
            SELECT * FROM my_hdfs_external_table;

    There are various ways of hinting at what level of DOP you want to use. The ALTER SESSION statements above force the issue, assuming you (the user of the session) are allowed to assert the DOP (more on that in the next section). Alternatively, you could embed additional parallel hints directly into the INSERT and SELECT clauses respectively:

        /*+ parallel(my_oracle_table,8) */
        /*+ parallel(my_hdfs_external_table,8) */

    Note that the "append" hint lets you load a target table by reserving space above a given "high watermark" in storage and uses Direct Path load. In other words, it doesn't try to fill blocks that are already allocated and partially filled; it uses unallocated blocks. It is an optimized way of loading a table without incurring the typical resource overhead associated with run-of-the-mill inserts. The "pq_distribute" hint in this context unifies the INSERT and SELECT operators to make data flow during a load more efficient. Finally, your target Oracle table should be defined with the "NOLOGGING" and "PARALLEL" attributes. The combination of "NOLOGGING" and the "append" hint disables REDO logging and its overhead. The "PARALLEL" clause tells Oracle to try to use parallel execution when operating on the target table.

    Determine Your DOP

    It might feel natural to build your datasets in Hadoop and then figure out how to tune the OSCH external table definition afterwards, but you should start backwards. You should focus on the Oracle database, specifically the DOP you want to use when loading (or accessing) HDFS content using external tables. The DOP in Oracle controls how many PQ slaves are launched in parallel when executing an external table. Typically the DOP is something you want Oracle to control transparently, but for loading content from Hadoop with OSCH, it's something that you will want to control. Oracle computes the maximum DOP that can be used by an Oracle user. The maximum value that can be assigned is an integer value typically equal to the number of CPUs on your Oracle instances, times the number of cores per CPU, times the number of Oracle instances. For example, suppose you have a RAC environment with 2 Oracle instances, and suppose that each system has 2 CPUs with 32 cores. The maximum DOP would be 128 (i.e. 2*2*32). In point of fact, if you are running on a production system, the maximum DOP you are allowed to use will be restricted by the Oracle DBA. This is because using the system maximum DOP can subsume all system resources on Oracle and starve anything else that is executing. Obviously, on a production system where resources need to be shared 24x7, this can’t be allowed to happen. The use cases for being able to run OSCH with a maximum DOP are when you have exclusive access to all the resources on an Oracle system. This can be when you are first seeding tables in a new Oracle database, or at a time when normal activity in the production database can safely be taken off-line for a few hours to free up resources for a big incremental load. Using OSCH on high-end machines (specifically Oracle Exadata and Oracle BDA cabled with Infiniband), this mode of operation can load up to 15TB per hour. The bottom line is that you should first figure out what DOP you will be allowed to run with by talking to the DBAs who manage the production system. You then use that number to derive the number of location files and (optionally) the number of HDFS data files that you want to generate, assuming that is flexible.

    Rule 1: Find out the maximum DOP you will be allowed to use with OSCH on the target Oracle system.

    Determining the Number of Location Files

    Let’s assume that the DBA told you that your maximum DOP is 8. You want the number of location files in your external table to be big enough to utilize all 8 PQ slaves, and you want them to represent equally balanced workloads. Remember that location files in OSCH are metadata lists of HDFS files and are created using OSCH’s External Table tool. They also represent the workload given to an individual Oracle PQ slave (i.e. a PQ slave is given one location file to process at a time, and only it will process the contents of the location file).

    Rule 2: The size of the workload of a single location file (and the PQ slave that processes it) is the sum of the content size of the HDFS files it lists.

    For example, if a location file lists 5 HDFS files which are each 100GB in size, the workload size for that location file is 500GB. The number of location files that you generate is something you control by providing a number as input to OSCH’s External Table tool.

    Rule 3: The number of location files chosen should be a small multiple of the DOP.

    Each location file represents one workload for one PQ slave, so the goal is to keep all slaves busy and try to give them equivalent workloads. Obviously, if you run with a DOP of 8 but have 5 location files, only five PQ slaves will have something to do, and the other three will have nothing to do and will quietly exit. If you run with 9 location files, then the PQ slaves will pick up the first 8 location files and, assuming they have equal workloads, will finish up at about the same time. But the first PQ slave to finish its job will then be rescheduled to process the ninth location file, potentially doubling the end-to-end processing time. So for this DOP, using 8, 16, or 32 location files would be a good idea.

    Determining the Number of HDFS Files

    Let’s start with the next rule and then explain it:

    Rule 4: The number of HDFS files should be a multiple of the number of location files, and the files should be relatively the same size.

    In our running example, the DOP is 8. This means that the number of location files should be a small multiple of 8. Remember that each location file represents a list of unique HDFS files to load, and that the sum of the files listed in each location file is a workload for one Oracle PQ slave. The OSCH External Table tool will look in an HDFS directory for a set of HDFS files to load. It will generate N location files (where N is the value you gave to the tool). It will then try to divvy up the HDFS files and do its best to make sure the workload across location files is as balanced as possible. (The tool uses a greedy algorithm that grabs the biggest HDFS file and delegates it to a particular location file. It then looks for the next biggest file and puts it in some other location file, and so on.) The tool’s ability to balance is reduced if HDFS file sizes are grossly out of balance or are too few. For example, suppose my DOP is 8 and the number of location files is 8. Suppose I have only 8 HDFS files, where one file is 900GB and the others are 100GB. When the tool tries to balance the load, it will be forced to put the singleton 900GB file into one location file and put each of the 100GB files in the 7 remaining location files. The load balance skew is 9 to 1. One PQ slave will be working overtime while the slacker PQ slaves are off enjoying happy hour. If, however, the total payload (1600GB) were broken up into smaller HDFS files, the OSCH External Table tool would have an easier time generating a list where the workload for each location file is relatively the same. Applying Rule 4 above to our DOP of 8, we could divide the workload into 160 files that were approximately 10GB in size. For this scenario, the OSCH External Table tool would populate each location file with 20 HDFS file references, and all location files would have similar workloads (approximately 200GB per location file). As a rule, when the OSCH External Table tool has to deal with more and smaller files, it will be able to create more balanced loads. How small should HDFS files get? Not so small that the HDFS open and close file overhead starts having a substantial impact. For our performance test system (Exadata/BDA with Infiniband), I compared three OSCH loads of 1 TiB. One load had 128 HDFS files living in 64 location files, where each HDFS file was about 8GB. I then did the same load with 12800 files, where each HDFS file was about 80MB in size. The end-to-end load time was virtually the same. However, when I got ridiculously small (i.e. 128000 files at about 8MB per file), it started to make an impact and slow down the load time. What happens if you break rules 3 or 4 above? Nothing draconian; everything will still function. You just won’t be taking full advantage of the generous DOP that was allocated to you by your friendly DBA. The key point of the rules articulated above is this: if you know that HDFS content is ultimately going to be loaded into Oracle using OSCH, it makes sense to chop it up into the right number of files of roughly the same size, derived from the DOP that you expect to use for loading.

    Next Steps

    So far we have talked about OLH and OSCH as alternative models for loading. That’s not quite the whole story. They can be used together in a way that provides for more efficient OSCH loads and allows one to be more flexible about scheduling on a Hadoop cluster and an Oracle Database to perform load operations. The next lesson will talk about Oracle Data Pump files generated by OLH and loaded using OSCH. It will also outline the pros and cons of using various load methods. This will be followed up with a final tutorial lesson focusing on how to optimize OLH and OSCH for use on Oracle's engineered systems: specifically Exadata and the BDA.


  • Parallel CURL function Help .. php

    - by Webby
    Hello. Firstly, let me explain: the code below is just a tiny snippet of the code I'm using on the working site. Basically, I'm hoping someone can help me rewrite just the function below to enable parallel cURL calls; that way it will fit nicely into the existing code without me having to rewrite the whole thing from the ground up like some of the samples I've been finding today. Any ideas?

        function get_data($url) {
            $ch = curl_init();
            $timeout = 5;
            curl_setopt($ch, CURLOPT_URL, $url);
            curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
            curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, $timeout);
            $data = curl_exec($ch);
            curl_close($ch);
            return $data;
        }

    P.S. $url goes through a huge bunch of URLs in a loop already, so I'd hope to keep that intact. Help is always appreciated and rewarded.


  • Subdomain mapping to another external subdomain

    - by Davorin
    I'm trying to map help.domain1.com to help.domain2.com. I've seen this on UserVoice: they let you map something.yourdomain.com to something.uservoice.com. On domain1.com I've set up a CNAME to help.domain2.com. It works, but when I open help.domain1.com I get the content of domain2.com instead of help.domain2.com. After some experimenting I've realized that this is expected behavior. So my question is: what do I have to do on domain2.com (or maybe on domain1.com?) to have it show the content of the subdomain help.domain2.com when I navigate to help.domain1.com? (I'm using Plesk and the OS is Windows Server 2003.)


  • I want a machine to learn to categorize short texts

    - by Jasie
    Hello, I have a ton of short stories, each about 500 words long, and I want to categorize them into one of, let's say, 20 categories:

        Entertainment
        Food
        Music
        etc.

    I can hand-classify a bunch of them, but I want to implement machine learning to guess the categories eventually. What's the best way to approach this? Is there a standard approach to machine learning I should be using? I don't think a decision tree would work well, since it's text data... I'm completely new to this field. Any help would be appreciated, thanks!
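    For what it's worth, the standard baseline for short-text categorization is bag-of-words features with a multinomial Naive Bayes classifier (decision trees do tend to fare poorly on sparse text). A minimal C# sketch, assuming a recent .NET (the class, tokenizer, and smoothing choices are mine):

        using System;
        using System.Collections.Generic;
        using System.Linq;

        // Multinomial Naive Bayes over bag-of-words features.
        class NaiveBayes
        {
            readonly Dictionary<string, Dictionary<string, int>> wordCounts = new();
            readonly Dictionary<string, int> docCounts = new(), totalWords = new();
            readonly HashSet<string> vocab = new();
            int totalDocs;

            static IEnumerable<string> Tokens(string text) =>
                text.ToLowerInvariant().Split(' ', '.', ',', '!', '?', ';', ':', '\n')
                    .Where(w => w.Length > 2);

            public void Train(string text, string category)
            {
                totalDocs++;
                docCounts[category] = docCounts.GetValueOrDefault(category) + 1;
                if (!wordCounts.ContainsKey(category)) wordCounts[category] = new();
                foreach (var w in Tokens(text))
                {
                    vocab.Add(w);
                    wordCounts[category][w] = wordCounts[category].GetValueOrDefault(w) + 1;
                    totalWords[category] = totalWords.GetValueOrDefault(category) + 1;
                }
            }

            public string Classify(string text) =>
                docCounts.Keys.OrderByDescending(c =>
                {
                    // log P(c) + sum of log P(w|c), with add-one (Laplace) smoothing
                    double score = Math.Log((double)docCounts[c] / totalDocs);
                    foreach (var w in Tokens(text))
                        score += Math.Log((wordCounts[c].GetValueOrDefault(w) + 1.0)
                                          / (totalWords[c] + vocab.Count));
                    return score;
                }).First();
        }

    Train it on the hand-classified stories, then call Classify on the rest; with 20 categories and 500-word documents, even a crude version like this is usually a respectable starting point before reaching for anything fancier.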

