Search Results

Search found 61631 results on 2466 pages for 'duplicate data'.

Page 29 of 2466

  • Does data mining qualify as an abuse?

    - by Hybryd
    Hi all, today I had a strange experience with my ISP. They disabled my password for the internet connection, and when I called them, they enabled it again, but they didn't say why it happened. Over the last couple of days I was running a data-mining script I made for one forum, to get some useful info about the business I'm in. So I thought: maybe my ISP figured that 10,000 page requests to the same site in a couple of hours might be some kind of attack. What do you think - does it qualify as an attack? Is it even OK to data mine in that way?
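
    Whatever the ISP's policy turns out to be, pacing the crawler makes the traffic look far less like an attack. A minimal sketch in Python, assuming a made-up forum URL and an arbitrary 1-second delay (neither comes from the question, and no actual ISP threshold is implied):

        import time
        import urllib.request

        # Hypothetical page list; the real forum URLs are not in the question.
        urls = [f"https://forum.example.com/page/{i}" for i in range(1, 101)]

        for url in urls:
            with urllib.request.urlopen(url) as resp:
                html = resp.read()
            # ... parse/store html here ...
            time.sleep(1.0)  # spread requests out instead of issuing thousands per hour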

    Read the article

  • Excel data range - to sum series within date range

    - by Mark
    I have a set of data that I would like to manipulate, but my problem is not straightforward. The data covers date ranges that include multiple entries on some days and single entries on others. What I need to accomplish is to manage a trading account so that no more than 1% of the account is put at risk on any given day (retrospectively). To do this, when a series of trades falls on the same day, I need to total the risk associated with each of those trades so that I can limit the total risk of the combined trades by limiting the position size I take in each. Here is a sample set of the data I am working with. As you can see, there are 5 trades on Jan 3, each with a risk value. I need to add the risk values of these 5 trades so that I can compare the total to the account value and then determine whether I should take more than 1 position in each trade. There are different numbers of trades on the 4th, 5th, 6th and 9th. I need the values returned in each row so that I can manipulate them further in the spreadsheet. I am not new to Excel, but cannot come up with a solution here; your input is much appreciated.

        Date     | Pair    | L/S | Initial Risk | Win | Loss | BE | Avg Gain | Avg Loss | pips/swing
        1/3/2012 | EUR/USD | S   | 15           | 1   |      |    | 10       | 15       |
        1/3/2012 | USD/CHF | L   | 15           |     | 1    |    | 0        |          |
        1/3/2012 | AUD/USD | S   | 15           | 1   |      |    | 16       | 18       |
        1/3/2012 | NZD/USD | S   | 15           | 1   |      |    | 7        | 8        |
        1/3/2012 | AUD/JPY | S   | 10           | 1   |      |    | 25       | 20       |
        1/4/2012 | EUR/USD | L   | 20           | 1   |      |    | 19       | 19       |
        1/4/2012 | USD/CHF | S   | 15           | 1   |      |    | 17       | 20       |
        1/4/2012 | EUR/JPY | L   | 20           |     | 1    |    | 0        |          |
        1/5/2012 | EUR/USD | L   | 15           | 1   |      |    | 10       | 20       |
        1/5/2012 | GBP/USD | L   | 20           | 1   |      |    | 15       | 20       |
        1/5/2012 | USD/CHF | S   | 15           |     | 1    |    | 0        |          |
        1/5/2012 | USD/JPY | S   | 10           | 1   |      |    | 7        | 10       |
        1/5/2012 | USD/CAD | S   | 15           | 1   |      |    | 28       | 36       |
        1/5/2012 | AUD/USD | L   | 15           | 1   |      |    | 20       | 20       |
        1/6/2012 | USD/CAD | S   | 15           | 1   |      |    | 5        | -10      |
        1/6/2012 | EUR/JPY | L   | 15           | 1   |      |    | 7        | 7        |
        1/9/2012 | AUD/USD | S   | 15           | 1   |      |    | 22       | 30       |
        1/9/2012 | NZD/USD | S   | 15           | 1   |      |    | 10       | 15       |
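
    In Excel itself this per-day total is a SUMIF over the date column, e.g. =SUMIF($A:$A, A2, $D:$D) with dates in column A and Initial Risk in column D; filling that down gives every row its day's combined risk. For comparison, a hedged sketch of the same aggregation in Python/pandas (column names are assumptions based on the sample table):

        import pandas as pd

        # Sample rows transcribed from the question (only the first two days shown).
        trades = pd.DataFrame({
            "Date": ["1/3/2012"] * 5 + ["1/4/2012"] * 3,
            "Pair": ["EUR/USD", "USD/CHF", "AUD/USD", "NZD/USD", "AUD/JPY",
                     "EUR/USD", "USD/CHF", "EUR/JPY"],
            "InitialRisk": [15, 15, 15, 15, 10, 20, 15, 20],
        })

        # Total risk committed per day, written back onto every trade row --
        # the same shape of answer that =SUMIF($A:$A, A2, $D:$D) gives in the sheet.
        trades["DayRisk"] = trades.groupby("Date")["InitialRisk"].transform("sum")
        print(trades)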

    Read the article

  • URGENT: help recovering lost data

    - by Niels Kristian
    I have made a directory: sudo mkdir /ssd. The directory was supposed to be mounted to a RAID array called md3; this was done by adding /dev/md3 /ssd auto defaults 0 0 to fstab. Then, after a while during which I had used the directory, I realized that I had forgotten to run sudo mount -a, and then I did, and now the data is gone. I tried to uncomment the line in fstab and run sudo mount -a again, but that didn't bring my data back. What can I do!?

    CONTENT OF FSTAB:

        proc /proc proc defaults 0 0
        none /dev/pts devpts gid=5,mode=620 0 0
        /dev/md/0 none swap sw 0 0
        /dev/md/1 /boot ext3 defaults 0 0
        /dev/md/2 / ext4 defaults 0 0
        /dev/md3 /ssd auto defaults 0 0
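
    A note on mount semantics that likely explains the symptom: files written to /ssd before /dev/md3 was mounted were stored on the root filesystem, and mounting the array on top of /ssd merely hides them, so running sudo umount /ssd should bring them back into view (mounting never deletes the files underneath). For next time, a guard along these lines (a minimal sketch; only the path comes from the question) refuses to write until the mount is really in place:

        import os
        import sys

        target = "/ssd"  # mountpoint from the question

        # Refuse to write anything when no filesystem is mounted at the target,
        # so backups cannot silently land on the root filesystem underneath it.
        if not os.path.ismount(target):
            sys.exit(f"{target} is not a mountpoint -- did you forget 'sudo mount -a'?")

        print("safe to write to", target)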

    Read the article

  • Recover data from SD card

    - by Paul Tarjan
    I have a 2GB Kingston microSD card which is about 3 years old. I put it in a reader today on my Windows Vista computer, wrote a 32MB file onto it, safely removed it, and then tried to read it elsewhere. Nothing. Putting it back in Vista, it now says "You need to format the disk in drive F: before you can use it." What should I do? I have access to many computers and OSes if your recommendations need that. I would be very sad if I lost all the contents of the card. Most of the data is backed up, but there are a few things that aren't. :( Doing a

        # dd if=/dev/sdg of=~/tmp/sd.bin

    gives me a 2 gig file, and grepping the file it seems like a lot of my data is still there. How can I put it back together?
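
    Since grepping the dd image shows the data is largely intact, signature-based carving can often pull whole files out of it; dedicated tools such as PhotoRec apply this idea across many file types. A minimal sketch of the technique for JPEGs only (the marker bytes are the standard JPEG signatures; everything else here is an assumption, and fragmented files will not survive this):

        # Naive JPEG carver for a raw dd image: scan for the standard SOI/EOI
        # markers and dump whatever lies between them. A 2 GB image is assumed
        # to fit in RAM.
        with open("sd.bin", "rb") as f:
            data = f.read()

        count = 0
        pos = 0
        while True:
            start = data.find(b"\xff\xd8\xff", pos)   # JPEG start-of-image
            if start == -1:
                break
            end = data.find(b"\xff\xd9", start)       # JPEG end-of-image
            if end == -1:
                break
            with open(f"carved_{count}.jpg", "wb") as out:
                out.write(data[start:end + 2])
            count += 1
            pos = end + 2
        print(f"carved {count} candidate JPEGs")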

    Read the article

  • How to have Excel data validation display different data in drop down than is actually validated

    - by Memitim
    How can I provide a user with a drop-down menu in a cell that displays the contents of one column but actually writes the value from a different column to the cell, and validates against the values from that second column? I have a bit of code that very nearly does this (credit: DV0005 from the Contextures site):

        Private Sub Worksheet_Change(ByVal Target As Range)
            On Error GoTo errHandler
            If Target.Cells.Count > 1 Then GoTo exitHandler
            If Target.Column = 10 Then
                If Target.Value = "" Then GoTo exitHandler
                Application.EnableEvents = False
                ' Look up the selected value in the Measures list and replace it
                ' with the value from the column to its right
                Target.Value = Worksheets("Measures").Range("B1") _
                    .Offset(Application.WorksheetFunction _
                    .Match(Target.Value, Worksheets("Measures").Range("Measures"), 0) - 1, 1)
            End If
        exitHandler:
            Application.EnableEvents = True
            Exit Sub
        errHandler:
            Resume exitHandler
        End Sub

    The drop-down displays the values from one column, for example Column B, but on selection it actually writes the value from the same row of Column C to the cell. However, data validation still validates against Column B, so if I manually enter something from Column C in the cell and try to move to another cell, data validation throws an error.

    Read the article

  • Recovering data from a corrupted disk

    - by r_honey
    I use an external hard disk to back up my data (it has 3 partitions). Last week when I plugged it in, the OS (Win 7) hung and I had to force-reboot the machine. When I turned it back on, the system simply did not detect the hard disk. That was last Sunday, and I had to give up after some time. Now, a week later (today), when I plug it into the machine, the OS detects the disk as well as all 3 partitions on it, but it says all 3 are unformatted and I can't access any of them. Is there any way to recover data from the 3 partitions? (I tried PC File Recovery and Recuva from Piriform, but neither detected these partitions.)

    Read the article

  • Data migration - dangerous or essential?

    - by MRalwasser
    The software development department of my company is facing the problem that data migrations are considered potentially dangerous, especially by my managers. The background is that our customers are using a large amount of data of poor quality. The reasons for this are only partially related to our software quality, but rather to the history of the data: most of it has been migrated from predecessor systems, some bugs caused (mostly business) inconsistencies in the data records, and there were accidental misentries on the customer's side (which our software allowed by error). The most important counter-arguments from my managers are that faulty data may turn into even worse data, that data troubles may wake up some managers at the customer, and that some processes on the customer's side may stop working because they have somewhat adapted to our system. Personally, I consider data migration an integral part of software development; data migration is to data what refactoring is to code. I think data migration is essential for creating software that evolves. Without it, we would have to create painful software that works around a bad data structure. I am asking you: What are your thoughts on data migration, especially for real-life cases and not only from a developer's perspective? Do you have any arguments against my managers' opinions? How does your company deal with data migrations and the difficulties they cause? Any other interesting thoughts that belong to this topic?

    Read the article

  • Data binding in web UI frameworks, what's the deal?

    - by c-smile
    I believe that most modern Web frameworks that claim to be MVC also have a notion of data binding in one form or another. Examples: AngularJS, EmberJS, KnockoutJS, etc. I am assuming that "data binding" is a declarative definition (oxymoron, no?) of a live link between data (a.k.a. model) and its representation (a.k.a. view), with some transformers in between (a.k.a. controllers). I understand why declarativeness is appealing, but I also understand that, as usual, it comes at a price. In particular:

    1. Live binding is quite heavy, either with dirty checking (high CPU consumption) or with Object.observe() (high memory consumption, plus high CPU load in some scenarios).
    2. There is a "frame" part in the word "framework", meaning there are boundaries/limits that can be hard to overcome if you need slightly more than it was designed for.

    The usual time split applies: 90% of features are built in 10% of the project time, but the remaining 10% take 90% of the project time. I suspect (a.k.a. an educated guess) that those MVC things do not help implement more functionality in less time. If so, the motivation for using them is not quite clear. As an example: last week I wanted to find a virtual-list idea/solution. I found one in vanilla JavaScript that is 120 LOC; an implementation of the same thing in AngularJS is about 420 LOC, and most of the code there seems like a fight with the framework itself. So my question is: what benefits does that MVC stuff, or data binding, give us? Is it just a buzzword popular among project managers, or does it give us something useful? If the latter, what exactly?
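
    To make point 1 concrete, here is a toy dirty-checking loop in Python, a sketch of the mechanism rather than any framework's actual code: every digest pass re-evaluates every watcher until a full sweep changes nothing, which is exactly where the CPU cost comes from.

        # Toy dirty-checking loop (in the spirit of Angular 1.x's digest, not its code).
        class Scope:
            def __init__(self):
                self.data = {}
                self.watchers = []  # [getter, last_value, callback] triples

            def watch(self, getter, callback):
                # A unique sentinel guarantees the watcher fires on the first digest.
                self.watchers.append([getter, object(), callback])

            def digest(self):
                dirty = True
                while dirty:  # keep sweeping until a full pass changes nothing
                    dirty = False
                    for w in self.watchers:
                        getter, last, callback = w
                        value = getter(self.data)
                        if value != last:
                            w[1] = value
                            callback(value, last)
                            dirty = True

        scope = Scope()
        scope.watch(lambda d: d.get("name"), lambda new, old: print("name ->", new))
        scope.data["name"] = "Ada"
        scope.digest()  # prints: name -> Ada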

    Read the article

  • How did we get saddled with the (hierarchical) filesystem as the basic data structure?

    - by user1936
    I'm self-taught and I don't have a CS degree. The more I learn about data structures, the more I wonder, in this day and age, how we are still saddled with the filesystem, with directories and files, as the basic data storage structure on the OS. I understand its simplicity, but it seems nowadays that more options could be available natively. As far as I'm aware, the only project to improve the basic functionality of the filesystem was ReiserFS, where you could tell what line of a file was changed by whom, and when. For instance, if I could have native tagging for files, where I could tag images, diagrams, word-processing documents and an entire code repository as all belonging to a single project, that would really be helpful to me. Since I'm stuck in the filesystem paradigm, I know I could put all of those into a single folder/directory, but what if they already exist in disparate directories and need to stay there? I know there are programs out there that can do this, but why aren't they on the filesystem? Something that would be nice to have is some kind of relational feature in the filesystem, like you get with RDBMSes. I understand that this was supposed to be part of Vista/7, but it fell off the feature list too. Sure, any program can store a binary file with any data structure it wants in it, but why couldn't the OS offer more complex ways of storing data, beyond the simple hierarchy of the filesystem?
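
    Lacking native support, the tagging/relational layer can at least be approximated in user space. A sketch using SQLite (the schema and paths are invented for illustration), where one tag spans files in disparate directories without moving them:

        import sqlite3

        # A user-space stand-in for native tagging: tags live in a tiny SQLite
        # index while the files stay wherever they already are on disk.
        db = sqlite3.connect("tags.db")
        db.execute("CREATE TABLE IF NOT EXISTS tags (tag TEXT, path TEXT, UNIQUE(tag, path))")

        for path in ["/home/me/images/diagram.png",
                     "/home/me/docs/spec.odt",
                     "/home/me/src/project-x"]:
            db.execute("INSERT OR IGNORE INTO tags VALUES (?, ?)", ("project-x", path))
        db.commit()

        # One tag gathers files from disparate directories; nothing is moved.
        for (path,) in db.execute("SELECT path FROM tags WHERE tag = ?", ("project-x",)):
            print(path)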

    Read the article

  • How do I cluster strings based on a relation between two strings?

    - by Tom Wijsman
    If you don't know WEKA, you can try a theoretical answer. I don't need literal code/examples... I have a huge data set of strings that I want to cluster in order to find the most related ones; these could just as well be seen as duplicates. I already have a set of string pairs that are known to be duplicates of each other, so now I want to do some data mining on those two sets. The result I'm looking for is a system that returns the most relevant candidate pairs of strings that are not yet known to be duplicates. I believe I need clustering for this; which type? Note that I want to base this on word-occurrence comparison, not on interpretation or meaning. Here is an example of two strings that we know to be duplicates (in our view of them):

        The weather is really cold and it is raining.
        It is raining and the weather is really cold.

    Now, the following strings also exist (most to least relevant, ignoring stop words):

        Is the weather really that cold today?
        Rainy days are awful.
        I see the sunshine outside.

    The software would return the following two strings as most relevant, which aren't yet known to be duplicates:

        The weather is really cold and it is raining.
        Is the weather really that cold today?

    Then I would mark that pair as duplicate or not duplicate, and it would present me with another pair. How do I implement this in the most efficient way that I can apply to a large data set?
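
    Word-occurrence comparison maps naturally onto a bag-of-words vector per string plus a pairwise cosine (or Jaccard) score; ranking the not-yet-labeled pairs by that score produces exactly the "most relevant pair first" queue described above. A hedged sketch (the tokenization and stop-word list are arbitrary choices):

        import math
        import re
        from collections import Counter
        from itertools import combinations

        STOP = {"the", "is", "it", "and", "that", "are", "i"}

        def bag(s):
            # Lowercase word counts, stop words ignored -- pure occurrence, no meaning.
            return Counter(w for w in re.findall(r"[a-z]+", s.lower()) if w not in STOP)

        def cosine(a, b):
            num = sum(a[w] * b[w] for w in set(a) & set(b))
            den = (math.sqrt(sum(v * v for v in a.values()))
                   * math.sqrt(sum(v * v for v in b.values())))
            return num / den if den else 0.0

        docs = [
            "The weather is really cold and it is raining.",
            "Is the weather really that cold today?",
            "Rainy days are awful.",
            "I see the sunshine outside.",
        ]
        bags = [bag(d) for d in docs]

        # Highest-scoring pairs first: these are the candidates to show the user.
        pairs = sorted(combinations(range(len(docs)), 2),
                       key=lambda p: cosine(bags[p[0]], bags[p[1]]), reverse=True)
        for i, j in pairs[:2]:
            print(round(cosine(bags[i], bags[j]), 2), "|", docs[i], "<->", docs[j])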

    Read the article

  • Good fix vs Quick fix [duplicate]

    - by Andrea Girardi
    This question already has an answer here:
    Does craftsmanship pay off? [duplicate] (16 answers)
    Good design: How much hackyness is acceptable? [duplicate] (9 answers)
    How do you balance between "do it right" and "do it ASAP" in your daily work? (14 answers)

    Let's start from this principle: quality is a feature that you can't add to a project in the middle of the development process. This is the scenario: two weeks before my project goes live, one of the developers added to our framework a method used by only one web application (our framework is a bunch of Java classes used to extract content from MongoDB, Alfresco and MySQL, and it's used by several web applications). I'm the team leader and I told him to generalize the method to keep the framework reusable, but he said, "No, I'd rather not do that, because there are a lot of bugs that need to be fixed." The manager agrees with him, and of course I don't. Is it better to make the extra effort to keep a framework free from any specific implementation (probably used by only one web application), or to just add the method because it works? So, my question is: is it correct to write code that merely works, or is it better to write code that works and doesn't suck (i.e. no embedded values, no app-specific methods, no extra classes, no extra database columns, etc.)? How do you justify the extra time to management (to be honest, this kind of fix requires about 10 extra minutes to write good generic code)? How do you argue to young developers and PMs that it's the right way to write code? In general: good fix or quick fix? Ah, 10 minutes after I got the email from the PM, he asked me why the name of application 1 appeared in a URL of application 2 during login. I like to quote Jeff Atwood: "Don't leave 'broken windows' (bad designs, wrong decisions, or poor code) unrepaired. Fix each one as soon as it is discovered." Excerpt from: Hyperink, "How-To-Stop-Sucking-And-Be-Awesome-Instead."

    Read the article

  • example.com/blog vs blog.example.com [duplicate]

    - by Mario Duarte
    Possible Duplicate: Subdomain versus subdirectory

    I'm about to start my own blog (adding it to a domain I already own) and I'm wondering about the best way to set it up. There are two common alternatives for blogs: domain.com/blog and blog.domain.com. My question is: what are the advantages and disadvantages of each alternative, and which one do you think is best (in terms of SEO, etc.)?

    Read the article

  • Myth Busting the Duplicate Content Myth

    Using previously published articles, such as those you find on article sites, does not mean you are going to suffer the wrath of the search engines and receive the "Duplicate Content Penalty" death sentence. If you respect your readers, your guest bloggers and your article authors, then you should not worry. If you are trying to deceive readers or search engines, then you will get in trouble.

    Read the article

  • Integrating Oracle Hyperion Smart View Data Queries with MS Word and Power Point

    - by Andreea Vaduva
    Most Smart View users probably appreciate that they can use just one add-in to access data from the different sources they work with, like Oracle Essbase, Oracle Hyperion Planning, Oracle Hyperion Financial Management and others. But not all of them are aware of the options to integrate data analyses not only in Excel, but also in MS Word or PowerPoint. While in the past copying and pasting single numbers or tables from a recent analysis in Excel made the pasted content a static snapshot, copying so-called Data Points now creates dynamic, updateable references to the data source. It also provides additional nice features, which can make life easier and less stressful for Smart View users.

    So, how does this option work? After building an ad-hoc analysis with Smart View as usual in an Excel worksheet, any area including data cells/numbers from the database can be highlighted in order to copy data points - even single data cells only.

    TIP: It is not necessary to highlight and copy the row or column descriptions.

    Next, from the Smart View ribbon select Copy Data Point. Then switch to the Word or PowerPoint document into which the selected content should be copied. Note that in these Office programs you will find a menu item Smart View; from it select the Paste Data Point icon. The copied details from the Excel report will be pasted, but showing #NEED_REFRESH in the data cells instead of the original numbers. After clicking the Refresh icon on the Smart View menu, the data will be retrieved and displayed. (A login window may pop up at that moment, asking for your credentials.) It works the same way if you copy just one single number without any row or column descriptions, for example in order to incorporate it into continuous text. From then on, for any subsequent updates of the data shown in your documents you only need to refresh by clicking the Refresh button on the Smart View menu, without copying and pasting the context or content again.

    As you might realize when trying out this feature on your own, there won't be any Point of View shown in the Office document. You have also seen, in the example where only a single data cell was copied, that no member names or row/column descriptions come along - the things usually required in an ad-hoc report in order to define exactly where data comes from and how it is queried from the source. Well, these definitions are not visible, but they are transferred to the Word or PowerPoint document as well. They are stored in the background for each individual data cell copied, and can be made visible by double-clicking the data cell (as shown in a screen shot taken from another context). So for each cell/number the complete connection information is stored along with the exact member/cell intersection from the database.

    And that's not all: you now have the chance to exchange the members originally selected in the Point of View (POV) in the Excel report. By selecting the Manage POV option from the Smart View menu in Word or PowerPoint, the POV Manager - Queries window opens. You can now change your selection for each dimension from the original POV, either by double-clicking the dimension member in the lower right box under POV, or by selecting the Member Selector icon at the top right-hand side of the window. After confirming your changes you need to refresh your document again. Be aware that this will update all (!) numbers taken from one and the same original Excel sheet, even if they appear in different locations in your Office document, reflecting your recent changes in the POV.

    TIP: Build your original report in such a way that dimensions you might want to change from within Word or PowerPoint are placed in the POV.

    And there is another really nice feature I wouldn't like to leave unmentioned: using Dynamic Data Points in the way described above, you will never again miss or need to search for the original Excel sheet from which values were taken and copied as data points into an Office document, because from even a single data cell Smart View is able to recreate the entire original report content with just a few clicks. Select one of the numbers within your Word or PowerPoint document by double-clicking, then select the Visualize in Excel option from the Smart View menu. Excel will open and Smart View will rebuild the entire original report, including POV settings, and retrieve all data from the most recent state of the database. (It might be necessary to provide your credentials before data is displayed.) However, for this to work, an active online connection to your databases on the server is necessary, along with at least read access to the retrieved data. Apart from that, your newly built Excel report is fully functional for ad-hoc analysis and can be used in the usual way for drilling, pivoting and all the other familiar functions and features.

    So far about embedding Dynamic Data Points into Office documents and linking them back into Excel worksheets. You can apply this in the described way with ad-hoc analyses directly on Essbase databases, or using Hyperion Planning and Hyperion Financial Management ad-hoc web forms. If you are also interested in other new features and smart enhancements in Essbase or Hyperion Planning, stay tuned for coming articles or check our training courses and web presentations. You can find general information about offerings for the Essbase and Planning curriculum or other Oracle-Hyperion products here (please make sure to select your country/region at the top of the page) or in the OU Learning Paths section, where Planning, Essbase and other Hyperion products can be found under the Fusion Middleware heading (again, please select the right country/region). Or drop me a note directly: [email protected].

    About the Author: Bernhard Kinkel started working for Hyperion Solutions as a Presales Consultant and Consultant in 1998 and moved to Hyperion Education Services in 1999. He joined Oracle University in 2007, where he is a Principal Education Consultant. Based on these many years of working with Hyperion products he has detailed product knowledge across several versions. He delivers both classroom and live virtual courses. His areas of expertise are Oracle/Hyperion Essbase, Oracle Hyperion Planning and Hyperion Web Analysis.

    Read the article

  • Find Duplicate Items in a Table

    - by Derek Dieter
    A very common scenario when querying tables is the need to find duplicate items within the same table. Doing this is simple: it requires utilizing the GROUP BY clause and counting the number of recurrences. For example, let's take a customers table. Within the customers table, we want to find all [...]
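
    A runnable sketch of that pattern, GROUP BY on the candidate columns plus a HAVING filter on the count, against a throwaway SQLite table (the customers columns are assumptions, since the article is truncated here):

        import sqlite3

        conn = sqlite3.connect(":memory:")
        conn.execute("CREATE TABLE customers (first_name TEXT, last_name TEXT)")
        conn.executemany("INSERT INTO customers VALUES (?, ?)",
                         [("Ann", "Lee"), ("Ann", "Lee"), ("Bob", "Ray")])

        # GROUP BY the candidate key, then keep only groups occurring twice or more.
        dupes = conn.execute("""
            SELECT first_name, last_name, COUNT(*) AS occurrences
            FROM customers
            GROUP BY first_name, last_name
            HAVING COUNT(*) > 1
        """).fetchall()
        print(dupes)  # [('Ann', 'Lee', 2)]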

    Read the article

  • Find Duplicate Fields in a Table

    - by Derek Dieter
    A common scenario when querying tables is the need to find duplicate fields within the same table. Doing this is simple: it requires utilizing the GROUP BY clause and counting the number of recurrences. For example, let's take a customers table. Within the customers table, we want to find all the [...]

    Read the article

  • Finding duplicate files?

    - by ub3rst4r
    I am going to be developing a program that detects duplicate files, and I was wondering what the best/fastest method would be to do this. I am mostly interested in which hash algorithm would be best: for example, I was thinking of having it hash each file's contents and then group the hashes that are the same. Also, should there be a limit on the maximum file size, or is there a hash that is suitable for large files?
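
    A common shape for this: group files by size first (which is free, from stat), then confirm with a streaming cryptographic hash such as SHA-256, so no file-size limit is needed because the file never has to fit in memory. A sketch under those assumptions (the directory and chunk size are arbitrary):

        import hashlib
        from collections import defaultdict
        from pathlib import Path

        def file_hash(path, chunk_size=1 << 20):
            # Stream the file through SHA-256 a chunk at a time.
            h = hashlib.sha256()
            with open(path, "rb") as f:
                for chunk in iter(lambda: f.read(chunk_size), b""):
                    h.update(chunk)
            return h.hexdigest()

        by_size = defaultdict(list)
        for p in Path(".").rglob("*"):
            if p.is_file():
                by_size[p.stat().st_size].append(p)

        dupes = defaultdict(list)
        for paths in by_size.values():
            if len(paths) > 1:  # only hash files that share a size
                for p in paths:
                    dupes[file_hash(p)].append(p)

        for h, paths in dupes.items():
            if len(paths) > 1:
                print(h[:12], [str(p) for p in paths])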

    Read the article

  • .htaccess / 301 redirection question

    - by John K
    All my WordPress post URLs generate subdirectories with duplicate content, and I do not know what regular expression to use to consistently 301 redirect domain.com/category/post/random-number/ to domain.com/category/post/, and domain.com/category/post/random-number/another-random-number/ likewise to domain.com/category/post/. Here is an example of my problem:

        http://www.example.com/features/harb-constitution-not-to-allow-kr-provinces-to-receive-foreign-officials/
        http://www.example.com/features/harb-constitution-not-to-allow-kr-provinces-to-receive-foreign-officials/1345257927000/
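
    The crux is the pattern itself: anchor on the /category/post/ prefix and strip one or more trailing all-digit segments. A sketch of just the matching logic in Python (the production fix belongs in a mod_rewrite RewriteRule, whose exact syntax is left aside here; the URLs are from the question or invented):

        import re

        # Strip one or more trailing all-digit segments from /category/post/ paths.
        pattern = re.compile(r"^(/[^/]+/[^/]+/)(?:\d+/)+$")

        for url in [
            "/features/harb-constitution-not-to-allow-kr-provinces-to-receive-foreign-officials/1345257927000/",
            "/features/some-post/123/456/",
            "/features/some-post/",  # already canonical: left alone
        ]:
            m = pattern.match(url)
            print(m.group(1) if m else url)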

    Read the article

  • How to find first cached date and time of a website?

    - by John Sanjay
    I found that some of my webpage content has been copied to another website. I need to check the cache date and time of both pages so that I can compare them and tell which page is the original and which is the duplicate according to Google, because I know that if Google finds two webpages with the same content, it considers the first page cached by Googlebot to be the unique one. Is there any online tool for checking that, or some other way?

    Read the article

  • Display resolution in duplicate monitor

    - by Taher
    I duplicate my display across two screens: the laptop LCD and an external monitor whose resolution is higher than the laptop LCD's. How can I set the laptop LCD's resolution for both? When I use the mirror button it sets 1024x768, but my laptop LCD's resolution is 1366x768. How can I set that resolution for both? When I try, I get an error. My laptop is an HP dv6-6080 and the graphics are Intel Sandy Bridge; if I switch to the AMD GPU, can I solve this problem?

    Read the article

  • Canonical redirection meta tag [duplicate]

    - by sankalp
    This question already has an answer here:
    How to use rel='canonical' properly (2 answers)

    There are two pages on my website with the same content; only the URLs are different: www.websitename.com and www.websitename.com/default.html. Someone suggested that I add canonical tags to avoid their being considered duplicate content. Where should I add the canonical tags, and why?

    Read the article
