Search Results

Search found 37122 results on 1485 pages for 'text analysis'.


  • Looking for Linux text editor

    - by Daniel
     I'm looking for a Vim replacement. My key points are: extensible in a sane language (such as Python, Ruby, or even Lua; after Vimscript anything will do), and the GUI part should be extensible too, so no Sublime Text 2. A GUI, preferably GTK+. Lightweight: I don't understand IDEs like Eclipse/NetBeans consuming up to 1 GB of RAM. A file browser panel. Splits, tabs and windows, with the ability to split views and tabs any number of times (or as long as they fit on screen). VCS support (optional, especially Git). Snippets and autocompletion (not mandatory, but I would really love to have those). Any ideas?

    Read the article

  • Adding Actions to a Cube in SQL Server Analysis Services 2008

     Actions are a powerful way of extending the value of SSAS cubes for the end user, who can click on a cube or a portion of a cube to start an application with the selected item as a parameter, or to retrieve information about the selected item. Actions haven't been well documented until now; Robert Sheldon once more makes everything clear.

    Read the article

  • Modify Sublime Text 2 whitespace representation?

    - by Mike Grace
     Is there a way to modify the whitespace representation characters so I can change them from dots and dashes to something else? I currently have whitespace characters set to always be drawn, so it looks like this. I don't need it turned off, I'm just interested in changing how it's represented. I like how TextMate shows invisible characters, but I would be OK with just being able to change the spaces to show a blank space instead of a dot.

    Read the article

  • Recovering text files in terminal using grep on Mac OS X Snow Leopard

    - by littlejim84
     I foolishly removed some source code from my Mac OS X Snow Leopard machine with rm -rf while doing something with buildout. I want to try to recover these files. I haven't touched the system since, other than to look for an answer. I found this article and it seems like the grep method is the way to go, but when running it on my machine I get 'Resource busy' on the disk. I'm using this command: sudo grep -a -B1000 -A1000 'video_output' /dev/disk0s2 > file.txt where /dev/disk0s2 is what came up when I ran df. I get this when running it: grep: /dev/disk0s2: Resource busy. I'm not an expert with this stuff and I'm trying my best. Can anyone help me further? I'm on the verge of losing two days of source code work! Thank you
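
     Below is a rough Python sketch of the same "scan the raw partition for a known string" idea, offered as one way to keep going if grep itself is blocked. The device node, marker string and output path are assumptions for illustration, not taken from the question; if /dev/disk0s2 is the mounted root volume it will likely still refuse to open, and booting to single-user mode, target disk mode, or another disk may be needed. Run it with sudo and write the output to a different volume so the deleted blocks are not overwritten.

         # Sketch: scan a partition's raw bytes for a marker string and dump context around each hit.
         import os

         MARKER = b"video_output"                    # a string known to appear in the lost file (assumption)
         DEVICE = "/dev/disk0s2"                     # partition to scan (assumption; check `df` / `diskutil list`)
         OUTPUT = "/Volumes/Backup/recovered.bin"    # hypothetical output path on a DIFFERENT volume
         CHUNK = 4 * 1024 * 1024                     # read 4 MiB at a time
         CONTEXT = 64 * 1024                         # bytes of context kept around each hit

         fd = os.open(DEVICE, os.O_RDONLY)
         with open(OUTPUT, "wb") as out:
             offset = 0
             carry = b""                             # overlap so a match split across chunks is not missed
             while True:
                 block = os.read(fd, CHUNK)
                 if not block:
                     break
                 data = carry + block
                 start = 0
                 while True:
                     hit = data.find(MARKER, start)
                     if hit == -1:
                         break
                     lo = max(0, hit - CONTEXT)
                     hi = hit + len(MARKER) + CONTEXT
                     out.write(b"\n--- hit near byte %d ---\n" % (offset - len(carry) + hit))
                     out.write(data[lo:hi])
                     start = hit + len(MARKER)
                 carry = data[-(len(MARKER) - 1):]
                 offset += len(block)
         os.close(fd)
         print("Done; inspect the output with less or strings.")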

    Read the article

  • Requirement Analysis Communication

    - by Rahul Mehta
     Hi. Some days ago we were discussing the current project, and suddenly my boss and my senior started talking about a new feature to add to the project, and I became lost :). I wasn't able to work out how I should provide my input on the new feature. So I want to know what things should be discussed when developing a new feature for a project, and how we can contribute to the requirements discussion for new features. Please suggest.

    Read the article

  • What follows after lexical analysis?

    - by madflame991
     I'm working on a toy compiler (for a simple language like PL/0) and I have my lexer up and running. At this point I should start working on building the parse tree, but before I start I was wondering: how much information can one gather from just the stream of tokens? Here's what I've gathered so far:
     - One can already do syntax highlighting with only the list of tokens: numbers, operators and keywords get coloured accordingly.
     - Autoformatting (indenting) should also be possible. How? Specify for each token type how many white spaces or newline characters should follow it, and keep an alignment variable while printing tokens (when the code printer reads "{" increment the alignment variable by 1, and decrement it by 1 for "}"; whenever it starts printing on a new line, it aligns according to this variable). A small sketch of this idea follows below.
     - In languages without nested subroutines one can get a complete list of subroutines and their signatures. How? Just read what follows the "procedure" or "function" keyword until you hit the first ")" (this should work fine in a Pascal-like language with no nested subroutines).
     - In languages like Pascal you can even determine local variables and their types, as they are declared in a special place (you can't handle initialization as well, but you can parse sequences like "var a, b, c: integer").
     - Detection of recursive functions may also be possible, or even a graph representation of which subroutine calls which: if one can identify the body of a function, one can also search it for mentions of other functions' names.
     - Gathering statistics about the code, like the number of lines, instructions and subroutines.
     EDIT: I clarified why I think some of these are possible. As I read the comments and responses I realise that the answer depends very much on the language I'm parsing.
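
     The following minimal sketch (my own illustration, not part of the question) shows the token-only auto-indenting idea from the list above: reprint a token stream while tracking an alignment level that "{" increments and "}" decrements. The tiny (kind, text) token format and the token names are assumptions.

         # Sketch: pretty-print a token stream using nothing but the tokens themselves.
         def format_tokens(tokens, indent_width=4):
             out, level, at_line_start = [], 0, True
             for kind, text in tokens:
                 if text == "}":
                     level = max(0, level - 1)          # closing brace dedents before printing
                 if at_line_start:
                     out.append(" " * (indent_width * level))
                     at_line_start = False
                 out.append(text)
                 if text == "{":
                     level += 1                          # opening brace indents what follows
                 if text in ("{", "}", ";"):             # these tokens end a line
                     out.append("\n")
                     at_line_start = True
                 else:
                     out.append(" ")                     # default rule: one space after a token
             return "".join(out)

         tokens = [("kw", "if"), ("punct", "("), ("ident", "x"), ("op", ">"), ("num", "0"),
                   ("punct", ")"), ("punct", "{"), ("ident", "y"), ("op", "="), ("num", "1"),
                   ("punct", ";"), ("punct", "}")]
         print(format_tokens(tokens))    # -> "if ( x > 0 ) {\n    y = 1 ;\n}\n"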

    Read the article

  • Indexing text file content with command line query

    - by Drew Carlton
     I take daily notes in a plain-text file labeled with the date in YYYYMMDD format. These files are no more than 100 lines long and are written in a blog-style format. I'd like to be able to search these files as if they were blog posts indexed by Google, with a phrase query returning the most relevant/recent dated filenames along with a snippet containing the relevant part. Ideally it would be something like this: #searchindex "laptop no sound" returns: 20100909.txt: ... laptop sound isn't working... 20100101.txt: ... sound is too loud... debating what laptop to buy... and so on and so forth. I'm working on a Linux platform (Debian with GNOME). I've looked at Beagle and Tracker, but they just seem like complete overkill for what I want.
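
     A small sketch of the kind of tool being described, written as a stand-alone Python script; the notes directory and the scoring rule (count of query-word occurrences, ties broken by newest filename) are my own assumptions, not something specified in the question.

         # Sketch: rank YYYYMMDD.txt notes by query-word hits, newest first, and show a snippet.
         import sys
         from pathlib import Path

         NOTES_DIR = Path.home() / "notes"      # hypothetical location of the daily notes

         def search(query, limit=5):
             words = [w.lower() for w in query.split()]
             results = []
             for path in NOTES_DIR.glob("[0-9]" * 8 + ".txt"):
                 text = path.read_text(errors="ignore")
                 lower = text.lower()
                 score = sum(lower.count(w) for w in words)
                 if score == 0:
                     continue
                 # snippet: the first line that mentions any query word
                 snippet = next((line.strip() for line in text.splitlines()
                                 if any(w in line.lower() for w in words)), "")
                 results.append((score, path.name, snippet))
             results.sort(key=lambda r: (r[0], r[1]), reverse=True)   # score, then newest date
             for score, name, snippet in results[:limit]:
                 print(f"{name}: ... {snippet} ...")

         if __name__ == "__main__":
             search(" ".join(sys.argv[1:]) or "laptop no sound")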

    Read the article

  • Requesting quality analysis test cases up front, ahead of implementation/change

    - by arin
     Recently I have been assigned to work on a major requirement that falls somewhere between a change request and an improvement. The previous implementation was done (badly) by a senior developer who left the company, and did so without leaving a trace of documentation. Here were my initial steps in approaching this problem: Considering that the release date was fast approaching and there was no time for slip-ups, I initially asked if the requirement was a "must have". Since the requirement helped the product significantly in terms of usability, the answer was "If possible, yes". Knowing the wide-spread use and effects of this requirement, I asked whether, if it came to a point where the requirement could not be finished prior to release, it would be a viable option to scrap the current state and revert to the state prior to the ex-senior's implementation. The answer was "Most likely: no". Understanding that the requirement was coming from higher management, and due to its complexity, I asked for all usability test cases to be written prior to the implementation (by QA) and given to me, to aid me in comprehending this task. This was a big no-no for the folks in management, as they failed to understand this approach. Knowing that I had to insist on my request and on the responsibility of this requirement, I insisted, and have fallen out of favor with some of the folks, leaving me in a state of "baffledness". Basically, I was trying a test-driven approach to a high-risk, high-complexity, must-have requirement and trying to be safe rather than sorry. Is this approach wrong, or have I approached it incorrectly? P.S.: The change request/improvement was cancelled and the implementation was reverted to the prior state due to the complexity of the problem and lack of time. This only happened after a 2-hour-long meeting with other seniors in order to convince the aforementioned folks.

    Read the article

  • Google page events monitoring and analysis

    - by Homunculus Reticulli
     I have read the Google page event documentation, but I am not sure I understand it correctly. I am new to Google Analytics, and I have two questions: Once I have Google Analytics enabled for my site (i.e. I have inserted the tracking code in my pages etc.), do I need to set anything else up (at the Google end, i.e. in my Google Analytics account)? It is also not clear to me how the event data works, particularly how the data can be aggregated and analyzed. For instance, if I want to track an event under category 'category' for click action 'action', I will use the following code snippet: <a href="some-uri.htm" onclick="_gaq.push(['_trackEvent', 'category', 'action', 'label']);">Do Something</a> For the sake of simplicity, let's say I am interested in monitoring click events in my header and footer, and I want to find out which pages the header and/or footer are clicked on most often. How would I set things up so that I can analyze the header/footer clicks aggregated at the page level?

    Read the article

  • SSISDB Analysis Script on Gist

    - by Davide Mauri
     I've created two simple, yet very useful, scripts to extract some useful data to quickly monitor SSIS package execution in SQL Server 2012 and later: get-ssis-execution-status and get-ssis-data-pumped-rows. I've started to use Gist since it comes in very handy for these quick'n'dirty scripts and snippets, and you can find the above scripts and others there (hopefully the number will increase over time; I plan to use Gist to store all the code snippets I used to keep in a dedicated folder on my machine). Now, back to the aforementioned scripts. The first one ("get-ssis-execution-status") returns a list of all executed and executing packages along with the latest successful and running executions (so that one can have an idea of the expected run time), error messages, and warning messages related to duplicate rows found in lookups. The second one ("get-ssis-data-pumped-rows") returns information on Data Flow status. Here there's something interesting, IMHO. Nothing exceptional, let it be clear, but nonetheless useful: the script extracts information on destinations and rows sent to destinations right from the messages produced by the Data Flow component. This helps to quickly understand how many rows have been sent and where, without having to increase the logging level. Enjoy! PS: I haven't tested them with SQL Server 2014, but AFAIK they should work without problems. Of course any feedback on this is welcome.

    Read the article

  • Software Usability analysis

    - by Afnan
     I am unable to find the answers to the following questions. Please help me resolve them. (a) Name quantitative and qualitative techniques for analysing the usability of a software product. (b) Compare the costs and benefits of the quantitative techniques. (c) Compare the costs and benefits of the qualitative techniques. (d) If restricted to a single one of these techniques when designing a new online banking system, which would you choose and why?

    Read the article

  • Master Data Management – A Foundation for Big Data Analysis

    - by Manouj Tahiliani
     While Master Data Management has crossed the proverbial chasm and is on its way to becoming mainstream, businesses are being hammered by a new megatrend called Big Data. Big Data is characterized by massive volumes, high frequency, a variety of less structured data sources such as email, sensors, smart meters, social networks, and weblogs, and the need to analyze vast amounts of data to determine value and improve management decisions. Businesses that have embraced MDM to get a single, enriched and unified view of master data, by resolving semantic discrepancies and augmenting the explicit master data information from within the enterprise with implicit data from outside the enterprise (such as social profiles), will have a leg up in embracing Big Data solutions. This is especially true for large and medium-sized businesses in industries like Retail, Communications, Financial Services, etc., that would find it very challenging to get comprehensive analytical coverage and derive long-term success without resolving the limitations of a heterogeneous topology that leads to disparate, fragmented and incomplete master data. For analytical success from Big Data, or in other words ROI from Big Data investments, businesses need to acquire, organize and analyze the deluge of data to make better decisions. There will need to be a coexistence of structured and unstructured data, and a tight link must be maintained between the two to extract maximum insights. MDM is the catalyst that helps maintain that tight linkage by providing an understanding of the identity and characteristics of the persons, companies, products, suppliers, etc. associated with the Big Data, and thereby helps accelerate ROI. In my next post I will discuss patterns for co-existing Big Data solutions and MDM. Feel free to provide comments and thoughts on the above, as well as on integration or architectural patterns.

    Read the article

  • PASS Summit Location follow up - result analysis

    - by simonsabin
     I've had a chance to look at the results directly, and it is clear that there is a tough choice. On the one hand, people are saying that they prefer to have PASS put their money into chapters and things like 24 Hours of PASS rather than an event on the east coast. At the same time, almost 50% more people said they would be more likely to attend an East Coast event than a Seattle event, and 60% more said they would be more likely to attend a US Central region event. What's more, 60% said that the summit should be outside of Seattle every other year, with only 19% saying it should always stay in Seattle. So clearly there is a huge desire for a non-Seattle event. Looking at the other reasons for keeping it in Seattle, the big one is that people want Microsoft speakers. More people think it's somewhat important or very important that the conference is within walking distance of the hotels and restaurants. Essentially the Q6 questions show an even balance for a normal conference, highlighting that people are prepared to travel, not with the family, and that they want a well laid out conference. What's very annoying is that the questions, as people have commented, were biased towards certain answers. For instance, there was no option about whether people feel it's important to have industry-leading speakers, MVPs etc. at the conference, only questions about Microsoft speakers. I know survey writing is very difficult without biasing the answers one way or another. There was also no choice to show people's preference: would people prefer Microsoft speakers, or the summit to be held on the East Coast/Central US? I also find it amazing that people prefer hundreds of developers rather than the SQLCAT and CSS teams; surely that indicates another issue, a lack of understanding of what these teams do. All in all, it is clear that people showed they want an event outside of Seattle, yet don't want PASS to put money into that instead of into other community activities. I find it surprising that there appears to have been a huge weighting towards certain questions, which have been prioritised over the huge desire for a PASS summit outside of Seattle. Let's see where we will be in 2013, or maybe they will rethink 2012, who knows.

    Read the article

  • Free tools for SQL Server - Automating Execution Plan Analysis

    - by jchang
     Since this topic is being discussed, I will plug my own tools: SQL Exec Stats and some (a little dated) documentation. The main capability is cross-referencing index usage with specific execution plans. Another feature is generating execution plans for all stored procedures in a database, along with the index usage cross-reference. There are several sources of execution plans or plan handles: this could be a live trace, a previously saved trace, previously saved sqlplan files, dm_exec_cached_plans,...(read more)

    Read the article

  • Evidence for automatic browsing - Log file analysis

    - by Nilani Algiriyage
     I'm analyzing web server logs in both Apache and IIS log formats. I want to find evidence of automatic browsing, such as web robots, spiders, bots, etc. I used the Python robot-detection 0.2.8 package for detecting robots in my log files, but I know there may be other robots (automatic programs) which have traversed the web site that robot-detection cannot identify. So I want to ask: Are there any specific clues that can be found in log files that human users do not leave but automated software would? Do they follow a specific navigation pattern? I saw some requests for favicon.ico - does this indicate automatic browsing? I found this article and this question with some valuable points.
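
     A rough heuristic sketch of the kind of clues mentioned above, applied to an Apache "combined" format log: self-identifying user agents, requests for robots.txt, and clients that never fetch page assets (CSS/JS/images) the way a real browser usually does. The thresholds and patterns are my own assumptions, not a definitive detector.

         # Sketch: flag likely non-human clients in an Apache combined-format access log.
         import re
         from collections import defaultdict

         LINE = re.compile(r'(?P<ip>\S+) \S+ \S+ \[[^\]]+\] "(?P<method>\S+) (?P<path>\S+)[^"]*" '
                           r'(?P<status>\d{3}) \S+ "(?P<referrer>[^"]*)" "(?P<agent>[^"]*)"')
         BOT_AGENT = re.compile(r"bot|crawl|spider|slurp|curl|wget|python-requests", re.I)
         ASSET = re.compile(r"\.(css|js|png|jpe?g|gif|ico)(\?|$)", re.I)

         def classify(log_path):
             clients = defaultdict(lambda: {"hits": 0, "assets": 0, "robots_txt": False, "bot_ua": False})
             with open(log_path) as f:
                 for line in f:
                     m = LINE.match(line)
                     if not m:
                         continue
                     c = clients[(m["ip"], m["agent"])]
                     c["hits"] += 1
                     c["assets"] += bool(ASSET.search(m["path"]))
                     c["robots_txt"] |= m["path"].startswith("/robots.txt")
                     c["bot_ua"] |= bool(BOT_AGENT.search(m["agent"]))
             for (ip, agent), c in clients.items():
                 # heuristics: bot-like UA, touched robots.txt, or many hits with zero assets fetched
                 if c["bot_ua"] or c["robots_txt"] or (c["hits"] >= 20 and c["assets"] == 0):
                     print(f"{ip}  hits={c['hits']}  robots.txt={c['robots_txt']}  agent={agent[:60]}")

         # classify("access.log")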

    Read the article

  • Create SQL Server Analysis Services Partitions using AMO

     When you have SSAS cubes with millions of rows of data, it is very helpful to create partitions. If you have a few cubes you could probably do this manually, but if there are many, or if you want to automate this process, you should look for smarter solutions such as programming the creation of partitions dynamically.

    Read the article

  • Download NDepend Analysis Tool

    - by Editor
    NDepend is a tool that simplifies managing a complex .NET code base. Architects and developers can analyze code structure, specify design rules, plan massive refactoring, do effective code reviews and master evolution by comparing different versions of the code. The result is better communication, improved quality, easier maintenance and faster development. NDepend supports the Code Query Language [...]

    Read the article

  • Web Design in 2010 - 2011: Analysis

     As we're coming to the middle of this year, everyone is trying to analyze the recent trends in web designing and web development. However, in this article, we'll see what web designers and developers... [Author: Maryam Naqvi - Web Design and Development - June 09, 2010]

    Read the article

  • Compiling/executing Java on Sublime Text 2 works fine except that it cannot read user input

    - by meiryo
     I am a student learning Java and I want to compile and run some simple Java programs in ST2 (Eclipse is very slow on my laptop). Here is my JavaC.sublime-build file so far: { "cmd": ["sublimejavaexec.bat", "$file"], "file_regex": "^(...*?):([0-9]*):?([0-9]*)", "selector": "source.java" } So far it can run code that does not require user input. However, when I run something that uses Java's input Scanner, it either skips through or generates an error. Can anyone suggest a solution, such as a plug-in, or tell me whether ST2 actually supports this kind of input in its console? Thanks.

    Read the article
