Search Results

Search found 37122 results on 1485 pages for 'text analysis'.


  • Read JSON (text file) into .NET application

    - by Bi
    I have a configuration file in the following JSON format: { "key1": "value1", "key2": "value2", "key3": false, "key4": 10, } The user can set/unset the configuration values using a text editor. However, I need to read it in my C# application. What's the best way to do so for JSON? The above keys are not associated with a class.
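    A minimal sketch of one way to do this, assuming the Json.NET (Newtonsoft.Json) package is available: since the keys are not tied to a class, deserialize into a dictionary. (Json.NET also tolerates the trailing comma shown above.)

        using System;
        using System.Collections.Generic;
        using System.IO;
        using Newtonsoft.Json;

        class ConfigReader
        {
            static void Main()
            {
                // The keys are not known to a class, so read them into a dictionary.
                string json = File.ReadAllText("config.json"); // path is illustrative
                var settings = JsonConvert.DeserializeObject<Dictionary<string, object>>(json);

                Console.WriteLine(settings["key1"]); // value1
                Console.WriteLine(settings["key3"]); // False
                Console.WriteLine(settings["key4"]); // 10
            }
        }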

    Read the article

  • cut text after (x) number of characters

    - by Blackbird
    This is in WordPress (not sure that makes a difference). This bit of PHP outputs the post title: <?php echo $data['nameofpost']; ?> It's simple text which can be anywhere up to 100 characters long. What I'd like is: if the output is over 20 characters long, cut it there and display '...' (or simply nothing at all). Thanks
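    A minimal sketch of one way to do it, assuming $data['nameofpost'] holds the title as in the question (for multibyte titles, substitute mb_strlen/mb_substr):

        <?php
        $title = $data['nameofpost'];
        if (strlen($title) > 20) {
            // Cut at 20 characters and mark the truncation;
            // drop the '...' (or assign '') to show nothing instead.
            $title = substr($title, 0, 20) . '...';
        }
        echo $title;
        ?>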

    Read the article

  • Accessing Text Input Field Data

    - by 01010011
    Hi, Using CodeIgniter, how do I access and display text entered into an input field in a view file (see code below) from my controller file? // input_view.php $input_data = array('name' => 'search_field', 'size' => '70'); echo form_input($input_data);
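    A hedged controller sketch (CodeIgniter 2.x style; the class, method and view names are illustrative): once the form posts back, the value arrives through the input class.

        <?php
        class Search extends CI_Controller {
            public function results()
            {
                // Returns the submitted value, or FALSE if the field was not posted.
                $term = $this->input->post('search_field');
                $this->load->view('results_view', array('term' => $term));
            }
        }

    In results_view.php, echo $term; then displays it.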

    Read the article

  • Search a full text index for 'c#'

    - by KKirk
    Hi, I have a table (let's say it has one column called 'colLanguage') that contains a list of skills and has a full text index defined on it. One of the entries in the table is 'c#' but when I search for 'c#' (using the following SQL) I get no results back. select * from FREETEXTTABLE(tblList, colLanguage, 'c#') Can anyone help? Thanks K
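    A hedged sketch of the usual workaround: FREETEXT runs the search string through a word breaker that strips most punctuation, so 'c#' tends to be reduced to 'c' (whether '#' survives depends on the word breaker and language in use). CONTAINSTABLE with the term quoted as a phrase is usually the better fit for exact terms:

        SELECT *
        FROM CONTAINSTABLE(tblList, colLanguage, '"c#"');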

    Read the article

  • How to find and fix performance problems in ORM powered applications

    - by FransBouma
    Once in a while we get requests about how to fix performance problems with our framework. As it comes down to following the same steps and looking into the same things every single time, I decided to write a blog post about it instead, so more people can learn from this and solve performance problems in their O/R mapper powered applications. In some parts it's focused on LLBLGen Pro, but it's also usable for other O/R mapping frameworks, as the vast majority of performance problems in O/R mapper powered applications are not specific to a certain O/R mapper framework. Too often, the developer looks at the wrong part of the application, trying to fix what isn't a problem in that part, and getting frustrated that 'things are so slow with <insert your favorite framework X here>'.

I've been in the O/R mapper business for a long time now (almost 10 years, full time) and as it's a small world, we O/R mapper developers know almost all the tricks by now: we all know what to do to make task ABC faster and what compromises (because there are almost always compromises) to deal with if we decide to make ABC faster that way. Some O/R mapper frameworks are faster in X, others in Y, but you can be sure the difference is mainly the result of a compromise some developers are willing to deal with and others aren't. That's why the O/R mapper frameworks on the market today are different in many ways, even though they all fetch and save entities from and to a database.

I'm not suggesting there's no room for improvement in today's O/R mapper frameworks, there always is, but it's no longer a matter of 'the slowness of the application is caused by the O/R mapper'. Perhaps query generation can be optimized a bit here, row materialization can be optimized a bit there, but it mainly comes down to milliseconds. Still worth it if you're a framework developer, but it's not much compared to the time spent inside databases and in user code: if a complete fetch takes 40ms or 50ms (from call to entity object collection), a 10ms difference won't be noticed in your application. That's why it's very important to find the real locations of the problems, so developers can fix them properly and don't get frustrated because their quest for a fast, performing application failed.

Performance tuning basics and rules

Finding and fixing performance problems in any application is a strict procedure with four prescribed steps: isolate, analyze, interpret and fix, in that order. It's key that you don't skip a step nor make assumptions: these steps help you find the reason for a problem which seems to be there, and how to fix it or leave it as-is. Skipping a step, or assuming things will be bad/slow without doing analysis, leads down the path of premature optimization and won't actually solve your problems, only create new ones.

The most important rule of finding and fixing performance problems in software is that you have to understand what 'performance problem' actually means. Most developers will say "when a piece of software / code is slow, you have a performance problem". But is that actually the case? If I write a Linq query which will aggregate, group and sort 5 million rows from several tables to produce a resultset of 10 rows, it might take more than a couple of milliseconds before that resultset is ready to be consumed by other logic.
If I solely look at the Linq query, the code consuming the resultset of the 10 rows and the time it takes to complete the whole procedure, it will appear to me to be slow: all that time taken to produce and consume 10 rows? But if you look closer, if you analyze and interpret the situation, you'll see it does a tremendous amount of work, and in that light it might even be extremely fast. With every performance problem you encounter, always realize that what you're trying to solve is perhaps not a technical problem at all, but a perception problem.

The second most important rule you have to understand is based on the old saying "Penny wise, Pound foolish": the part which takes e.g. 5% of the total time T for a given task isn't worth optimizing if you have another part which takes a much larger share of the total time T for that same task. Optimizing parts which are relatively insignificant for the total time taken is not going to bring you better results overall, even if you optimize that part away completely. This is the core reason why analysis of the complete set of application parts which participate in a given task is key to being successful in solving performance problems: no analysis -> no problem -> no solution.

One warning up front: hunting for performance will always include making compromises. Fast software can be made maintainable, but if you want to squeeze as much performance as possible out of your software, you will inevitably be faced with the dilemma of compromising one or more of {readability, maintainability, features} for the extra performance you think you'll gain. It's then up to you to decide whether it's worth it. In almost all cases it's not. The reason for this is simple: the vast majority of performance problems can be solved by implementing the proper algorithms, the ones with proven Big O characteristics, so you know the performance you'll get and you know the algorithm will work. The time taken by the code implementing the algorithm is inevitable: you already implemented the best algorithm. You might find some optimizations at the technical level, but in general these are minor.

Let's look at the four steps to see how they guide us through the quest to find and fix performance problems.

Isolate

The first thing you need to do is to isolate the areas in your application which are assumed to be slow. For example, if your application is a web application and a given page takes several seconds or even minutes to load, it's a good candidate to check out. It's important to start with the isolate step because it allows you to focus on a single code path per area, with a clear begin and end, and ignore the rest. The rest of the steps are taken per identified problematic area. Keep in mind that isolation focuses on tasks in an application, not code snippets. A task is something that's started in your application by either another task, the user, or another program, and has a beginning and an end. You can see a task as a piece of functionality offered by your application.

Analyze

Once you've determined the problem areas, you have to perform analysis on the code paths of each area, to see where the performance problems occur and which areas are not the problem.
This is a multi-layered effort: an application which uses an O/R mapper typically consists of multiple parts: there's likely some kind of interface (web, webservice, windows etc.), a part which controls the interface and business logic, the O/R mapper part and the RDBMS, all connected with either a network or inter-process connections provided by the OS or other means. Each of these parts, including the connectivity plumbing, eats up a part of the total time it takes to complete a task, e.g. load a webpage with all orders of a given customer X. To understand which parts participate in the task / area we're investigating and how much they contribute to the total time taken to complete the task, analysis of each participating part is essential.

Start with the code you wrote which starts the task, analyze the code and track the path it follows through your application. Check what the code does along the way and verify whether it's correct or not. Analyze whether you have implemented the right algorithms in your code for this particular area. Remember we're looking at one area at a time, which means we're ignoring all other code paths: just the code path of the current problematic area, from begin to end and back. Don't dig in and start optimizing at the code level just yet. We're just analyzing. If your analysis reveals big architectural stupidity, it's perhaps a good idea to rethink the architecture at this point. For the rest, we're analyzing, which means we collect data about what could be wrong, for each participating part of the complete application.

Reviewing the code you wrote is a good tool to get a deeper understanding of what is going on for a given task, but ultimately it lacks precision and an overview of what really happens: humans aren't good code interpreters, computers are. We therefore need to utilize tools to get a deeper understanding of which parts contribute how much time to the total task, triggered by which other parts, and, for example, how many times they are called. There are two different kinds of tools which are necessary: .NET profilers and O/R mapper / RDBMS profilers.

.NET profiling

.NET profilers (e.g. dotTrace by JetBrains or ANTS by Red Gate Software) show exactly which pieces of code are called, how many times they're called, and the time it took to run that piece of code, at the method level and sometimes even at the line level. The .NET profilers are essential tools for understanding whether the time taken to complete a given task / area in your application is consumed by .NET code, where exactly in your code, the path to that code, and how many times that code was called by other code, and thus reveal where the hotspots are located: the areas where a solution can be found. Importantly, they also reveal which areas can be left alone: remember our penny wise, pound foolish saying: if a profiler reveals that a group of methods is fast, or doesn't contribute much to the total time taken for a given task, ignore them. Even if the code in them is perhaps complex and looks like a candidate for optimization: you can work all day on that, it won't matter.

As we're focusing on a single area of the application, it's best to start profiling right before you actually activate the task/area. Most .NET profilers support this by starting the application without starting the profiling procedure just yet.
You navigate to the particular part which is slow, start profiling in the profiler, perform the actions in your application which are considered slow, and afterwards take a snapshot in the profiler. The snapshot contains the data collected by the profiler during the slow action, so most data is produced by code in the area to investigate. This is important, because it allows you to stay focused on a single area.

O/R mapper and RDBMS profiling

.NET profilers give you good insight into the .NET side of things, but not into the RDBMS side of the application. As this article is about O/R mapper powered applications, we're also looking at databases, and the software making it possible to consume the database in your application: the O/R mapper. To understand how much the O/R mapper and database contribute to the total time taken for task T, we need different tools. There are two kinds of tools focusing on O/R mapper and database performance profiling: O/R mapper profilers and RDBMS profilers. For O/R mapper profilers, you can look at LLBLGen Prof by Hibernating Rhinos or the Linq to Sql/LLBLGen Pro profiler by Huagati. Hibernating Rhinos also has profilers for other O/R mappers, like NHibernate (NHProf) and Entity Framework (EFProf), which work the same as LLBLGen Prof. For RDBMS profilers, you have to check whether the RDBMS vendor ships one. For SQL Server, for example, the profiler is shipped with SQL Server; for Oracle it's built into the RDBMS; and there are also 3rd party tools.

Which tool you're using isn't really important; what's important is that you get insight into which queries are executed during the task / area we're currently focused on and how long they took. Here, the O/R mapper profilers have an advantage, as they collect the time it took to execute the query from the application's perspective, so they also include the time it took to transport data across the network. This is important because a query which returns a massive resultset, or a resultset with large blob/clob/ntext/image fields, takes more time to transport across the network than a small resultset, and a database profiler doesn't take this into account most of the time. Another tool to use in this case, which is more low level and not supported by all O/R mappers (though LLBLGen Pro and NHibernate do support it), is tracing: most O/R mappers offer some form of tracing or logging system which you can use to collect the SQL generated and executed, and often other activity behind the scenes as well. While tracing can produce a tremendous amount of data in some cases, it also gives insight into what's going on.

Interpret

After we've completed the analysis step, it's time to look at the data we've collected. We've done code reviews to see whether we've done anything stupid, which parts actually take part, and whether the proper algorithms have been implemented. We've done .NET profiling to see which parts are choke points and how much time they contribute to the total time taken to complete the task we're investigating. We've performed O/R mapper profiling and RDBMS profiling to see which queries were executed during the task, how many queries were generated and executed, and how long they took to complete, including network transportation. All this data reveals two things: which parts are big contributors to the total time taken, and which parts are irrelevant. Both aspects are very important. The parts which are irrelevant (i.e. don't contribute significantly to the total time taken) can be ignored from now on; we won't look at them. The parts which contribute a lot to the total time taken are important to look at.

We now have to look first at the .NET profiler results, to see whether the time taken is consumed in our own code, in .NET framework code, in the O/R mapper itself, or somewhere else. For example, if most of the time is consumed by DbCommand.ExecuteReader, the time it took to complete the task depends on the time it takes to fetch the data from the database. If there was just 1 query executed, according to tracing or O/R mapper profilers / RDBMS profilers, check whether that query is optimal, uses indexes, or has to deal with a lot of data.

Interpret means that you follow the path from begin to end through the data collected and determine where, along the path, the most time is contributed. It also means that you have to check whether this was expected or is totally unexpected. My previous example of the 10-row resultset of a query which groups millions of rows will likely reveal that a long time is spent inside the database and almost no time in the .NET code, meaning the RDBMS part contributes the most to the total time taken; the rest is, compared to that time, irrelevant. Considering the vastness of the source data set, it's expected this will take some time. However, does it need tweaking? Perhaps all possible tweaks are already in place. In the interpret step you then have to decide whether further action in this area is necessary or not, based on what the analysis results show: if the analysis results were unexpected, and there is room for improvement in the area which contributes the most to the total time taken, action should be taken. If not, you can only accept the situation and move on. In all cases, document your decision together with the analysis you've done. If you decide that the perceived performance problem is actually expected due to the nature of the task performed, it's essential that in the future, when someone else looks at the application and starts asking questions, you can answer them properly, and new analysis is only necessary if the situation has changed.

Fix

After interpreting the analysis results, you've concluded that some areas need adjustment. This is the fix step: you're actively correcting the performance problem with proper action targeted at the real cause. In many cases related to O/R mapper powered applications, it means you'll use different features of the O/R mapper to achieve the same goal, or apply optimizations at the RDBMS level. It could also mean you apply caching inside your application (compromising memory consumption for performance) to avoid unnecessarily re-querying data and re-consuming the results. After applying a change, it's key that you redo the analysis and interpretation steps: compare the results and expectations with what you had before, to see whether your actions had any effect or whether they moved the problem to a different part of the application. Don't fall into the trap of doing partial analysis: do the full analysis again, .NET profiling and O/R mapper / RDBMS profiling. It might very well be that the changes you've made make one part faster but another part significantly slower, in such a way that the overall problem hasn't changed at all.
Performance tuning is about dealing with compromises and making choices: to use one feature over the other, to accept a higher memory footprint, to leave the strict-OO path and execute queries directly against the RDBMS. These are choices and compromises which will cross your path if you want to fix performance problems with respect to O/R mappers, or data access and databases in general. In most cases it's not a big issue: the alternatives are often good choices too, and the compromises aren't that hard to deal with. What is important is that you document why you made a choice or a compromise: which analysis data and which interpretation led you to it. This is key for good maintainability in the years to come.

Most common performance problems with O/R mappers

Below is an incomplete list of common performance problems related to data-access / O/R mapper / RDBMS code. It will help you with fixing the hotspots you found in the interpretation step.

- SELECT N+1 (lazy-loading specific): performance bottlenecks triggered by lazy loading. Consider a list of Orders bound to a grid, with a field mapped onto a related field in Order, Customer.CompanyName. Showing this column in the grid will make the grid fetch (indirectly) the Customer row for each row. This means you'll get for the single list not 1 query (for the orders) but 1 + (the number of orders shown) queries. To solve this, use eager loading with a prefetch path to fetch the customers with the orders. SELECT N+1 is easy to spot with an O/R mapper profiler or RDBMS profiler: if you see a lot of identical queries executed at once, you have this problem (a sketch of what this looks like in a trace follows after the conclusion).

- Prefetch paths using many path nodes, or sorting, or limiting: an eager-loading problem. Prefetch paths can help with performance, but as 1 query is executed per node, the amount of data fetched in a child node can be bigger than you think. Also consider that data in every node is merged on the client within the parent. This is fast, but it can still take some time if you fetch massive amounts of entities. If you keep fetches small, you can use tuning parameters like the ParameterizedPrefetchPathThreshold setting to get more optimal queries.

- Deep inheritance hierarchies of type Target Per Entity/Type: if you use inheritance of type Target per Entity / Type (each type in the inheritance hierarchy is mapped onto its own table/view), fetches will join subtype and supertype tables in many cases, which can lead to a lot of performance problems if the hierarchy has many types. With this problem, keep inheritance to a minimum if possible, or switch to a hierarchy of type Target Per Hierarchy, which means all entities in the inheritance hierarchy are mapped onto the same table/view. Of course this has its own set of drawbacks, but it's a compromise you might want to take.

- Fetching massive amounts of data by fetching large lists of entities: LLBLGen Pro supports paging (and limiting the number of rows returned), which is often key to working through large sets of data. Use paging on the RDBMS if possible (so a query is executed which returns only the rows in the page requested). When using paging in a web application, be sure to switch server-side paging on in the datasource control used. In this case, paging on the grid alone is not enough: that can lead to fetching a lot of data which is then loaded into the grid and paged there. Note that analyzing queries for paging could lead to the false assumption that paging doesn't occur, e.g. when the query contains a field of type ntext/image/clob/blob and DISTINCT can't be applied while it should have been (e.g. due to a join): the datareader will do DISTINCT filtering on the client. This is a little slower, but it does perform paging functionality on the data-reader, so it won't fetch all rows even if the query suggests it does.

- Fetching massive amounts of data because blob/clob/ntext/image fields aren't excluded: LLBLGen Pro supports field exclusion for queries. You can exclude fields (also in prefetch paths) per query to avoid fetching all fields of an entity, e.g. when you don't need them for the logic consuming the resultset. Excluding fields can greatly reduce the amount of time spent on data transport across the network. Use this optimization if you see a big difference between query execution time on the RDBMS and the time reported by the .NET profiler for the ExecuteReader method call.

- Doing client-side aggregates/scalar calculations by consuming a lot of data: if possible, try to formulate a scalar query or group-by query using the projection system or the GetScalar functionality of LLBLGen Pro, so the data is processed on the RDBMS server. It's far more efficient to process data on the RDBMS server than to first load it all into memory and then traverse it in-memory to calculate a value.

- Using .ToList() constructs inside Linq queries: it might be that you use .ToList() somewhere in a Linq query, which makes the query run partially in-memory. Example: var q = from c in metaData.Customers.ToList() where c.Country=="Norway" select c; This will actually fetch all customers into memory and do the filtering in-memory, as the Linq query is defined on an IEnumerable<T>, not on an IQueryable<T>. Linq is nice, but it can often be a bit unclear where some parts of a Linq query might run.

- Fetching all entities to delete into memory first: to delete a set of entities, it's rather inefficient to first fetch them all into memory and then delete them one by one. It's more efficient to execute a DELETE FROM ... WHERE query on the database directly to delete the entities in one go. LLBLGen Pro supports this feature, and so do some other O/R mappers. It's not always possible to do this operation in the context of an O/R mapper however: if an O/R mapper relies on a cache, these kinds of operations are likely not supported, because they make it impossible to track whether an entity was actually removed from the DB and thus can be removed from the cache.

- Fetching all entities to update with an expression into memory first: similar to the previous point, it is more efficient to update a set of entities directly with a single UPDATE query using an expression, instead of fetching the entities into memory first, updating them in a loop, and saving them afterwards. It might however be a compromise you don't want to make, as it works around the idea of having an object graph in memory which is manipulated, and instead makes the code fully aware there's an RDBMS somewhere.

Conclusion

Performance tuning is almost always about compromises and making choices. It's also about knowing where to look and how the systems in play behave and should behave. The four steps I provided should help you stay focused on the real problem and lead you towards the solution. Knowing how to optimally use the systems participating in your own code (.NET framework, O/R mapper, RDBMS, network/services) is key for success, as is knowing what's going on inside the application you built.
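To make the SELECT N+1 signature above concrete, this is roughly what it looks like in a profiler trace (a hedged sketch; the table and column names are illustrative):

    -- One query for the list bound to the grid...
    SELECT OrderID, CustomerID, OrderDate FROM Orders;
    -- ...followed by one near-identical query per row shown in the grid:
    SELECT CustomerID, CompanyName FROM Customers WHERE CustomerID = @p1;
    SELECT CustomerID, CompanyName FROM Customers WHERE CustomerID = @p2;
    -- (and so on, once per order)

    -- With a prefetch path, the same data arrives in one extra query for the whole node:
    SELECT CustomerID, CompanyName FROM Customers
    WHERE CustomerID IN (SELECT CustomerID FROM Orders);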
I hope you'll find this guide useful in tracking down performance problems and dealing with them effectively.

    Read the article

  • Big Data – Is Big Data Relevant to me? – Big Data Questionnaires – Guest Post by Vinod Kumar

    - by Pinal Dave
    This guest post is by Vinod Kumar. Vinod Kumar has worked with SQL Server extensively since joining the industry over a decade ago. Having worked on various versions from SQL Server 7.0, Oracle 7.3 and other database technologies, he now works with the Microsoft Technology Center (MTC) as a Technology Architect. Let us read the blog post in Vinod's own voice.

I think the series from Pinal is a good one for anyone planning to start the Big Data journey from the basics. In my daily customer interactions this buzz of "Big Data" always comes up, and I generally react by saying: "Sir, do you really have a 'Big Data' problem or do you have a big Data problem?" Generally, there is silence in the air when I ask this question. Data is everywhere in organizations: be it big data, small data, all data, and for a few it is bad data, which is the same as no data :). Wow, don't discount me as someone who opposes "Big Data"; I am as much a big supporter as I am a critic of the abuse of this term. In this post, I wanted to let my mind flow so that you can also think in the direction I want you to see these concepts. In any case, this is not an exhaustive dump of what is in my mind, but you will surely get the drift of how I am going to question Big Data terms from customers!!!

Is Big Data Relevant to me?

Many of my customers talk to me like a blank whiteboard, with no idea of "why Big Data". They want to jump on the technology bandwagon and decipher insights from their unexplored data, a.k.a. unstructured data, together with structured data. So what are the industry scenarios that come to mind? Here are some of them:

- Financials: fraud detection (banks and credit cards monitor your spending habits on a real-time basis); customer segmentation (applies in every industry from banking to retail to aviation to utilities and others, wherever an end customer consumes products and services); customer sentiment analysis (responding to negative brand perception on social media, or amplifying the positive perception); sales and marketing campaigns (understand the impact and get closer to customer delight); call center analysis (attempt to take unstructured voice recordings and analyze them for content and sentiment).

- Medical: reduce re-admissions (how to build proactive follow-up engagements with patients); patient monitoring (how to track inpatient, outpatient, emergency visits, intensive care units, etc.); preventive care (disease identification and risk stratification is a very crucial business function for medical); claims fraud detection (there is no precise dollar figure one can put here, but this is a big thing for the medical field).

- Retail: customer sentiment analysis, customer care centers, campaign management; supply chain analysis (every sensor and RFID data point can be tracked for warehouse space optimization); location-based marketing (based on where a check-in happens, retail stores can optimize their marketing).

- Telecom: price optimization and plans, finding customer churn, customer loyalty programs; call detail record (CDR) analysis, network optimization, user location analysis; customer behavior analysis.

- Insurance: fraud detection and analysis, pricing based on customer sentiment analysis, loyalty management; agent analysis, customer value management.

This list can go on to other areas like utilities, manufacturing, travel, ITES, etc. So as you can see, there are obviously interesting use cases for each of these industry verticals. This is just a representative list.

Where to start?
A lot of times I try to quiz customers on a number of dimensions before starting a Big Data conversation:

- Are you getting the data you need, the way you want it, and in a timely manner?
- Can you get in and analyze the data you need?
- How quickly does IT respond to your BI requests?
- How easily can you get at the data that you need to run your business/department/project?
- How are you currently measuring your business?
- Can you get the data you need to react WITHIN THE QUARTER to impact behaviors and meet your numbers, or is it always "rear-view mirror"?
- How are you measuring: the brand, customer sentiment, your competition, your pricing, your performance, supply chain efficiencies, predictive product / service positioning?
- What are your key challenges in driving collaboration across your global business? What are the challenges in innovation?
- What challenges are you facing in getting more information out of your data?

Note: garbage in is garbage out. This holds for all reporting / analytics requirements.

Big Data POCs?

A number of customers get into the realm of setting up a small team to work on Big Data. Well, it is a great start from an understanding point of view, but I tend to ask a number of other questions of such customers. Some of these common questions are:

- To what degree is your advanced analytics (natural language processing, sentiment analysis, predictive analytics and classification) paired with your Big Data efforts?
- Do you have dedicated resources exploring the possibilities of advanced analytics in Big Data for your business line?
- Do you plan to employ machine learning technology while doing advanced analytics?
- How is social media being monitored in your organization?
- What is your ability to scale in terms of storage and processing power?
- Do you have a system in place to sort incoming data in near real time by potential value, data quality, and use frequency?
- Do you use event-driven architecture to manage incoming data?
- Do you have specialized data services that can accommodate different formats, security, and the management requirements of multiple data sources?
- Is your organization currently using or considering in-memory analytics?
- To what degree are you able to correlate data from your Big Data infrastructure with that from your enterprise data warehouse?
- Have you extended the role of Data Stewards to include ownership of Big Data components?
- Do you prioritize data quality based on the source system (that is, Facebook/Twitter data has lower quality thresholds than radio frequency identification (RFID) data from a tracking system)?
- Do your retention policies consider the different legal responsibilities for storing Big Data for a specific amount of time?
- Do Data Scientists work in close collaboration with Data Stewards to ensure data quality?
- How is access to attributes of Big Data given out in the organization?
- Are roles related to Big Data (Advanced Analyst, Data Scientist) clearly defined?
- How involved is risk management in the Big Data governance process?
- Is there a set of documented policies regarding Big Data governance?
- Is there an enforcement mechanism or approach to ensure that policies are followed?
- Who is the key sponsor for your Big Data governance program? (The CIO is best.)
- Do you have defined policies surrounding the use of social media data for potential employees and customers, as well as the use of customer geo-location data?
- How accessible are complex analytic routines to your user base?
- What is the level of involvement of outside vendors and third parties in the planning and execution of Big Data projects?
- What programming technologies are utilized by your data warehouse/BI staff when working with Big Data?

These are some of the important questions I ask each customer who is actively evaluating Big Data trends for their organization. These questions give you a sense of direction on where to start, what to use, how to secure, how to analyze, and more.

Sign off

Any Big Data analysis is incomplete without a compelling story. The best way to understand this is to watch the Hans Rosling – Gapminder (2:17 to 6:06) videos about third-world myths. Don't get overwhelmed with the Big Data buzzword; the destination, what your data speaks, is what is important. In this blog post, we did not look at any particular Big Data technologies. This is a set of questionnaires one needs to keep in mind when embarking on the Big Data journey. I did write some of the basics in my blog: Big Data – Big Hype yet Big Opportunity. Do let me know if these questions make sense.

Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • 24+ Coda Alternatives for Windows and Linux

    - by Matt
    Coda plays an important role in designing layouts on the Mac. There are numerous Coda alternatives for Windows and Linux too. It is not possible to describe every one of them, so some of the Coda alternatives which work on both Windows and Linux platforms are discussed below.

EditPlus ($35.00): A good thing about EditPlus is that it highlights URLs and email addresses, activating them when you 'Ctrl + double-click'. It also has a built-in browser for previewing HTML, and FTP and SFTP support. It also supports macros and regex find and replace.

UltraEdit ($49.99): Another good Coda alternative for Windows and Linux. It is an editor best suited for text, HTML and hex, and also serves as an advanced PHP, Perl, Java and JavaScript editor for programmers. It supports disk-based 64-bit or standard file handling on 32-bit Windows platforms (Windows 2000 and later versions).

HippoEDIT ($39.95): HippoEDIT has the best autocomplete: it pops a 'tooltip' above your cursor as you type, suggesting words you've already typed. It does syntax highlighting for over two dozen languages.

Sublime Text ($59.00): Sublime Text's awesome 'zoomed out' view of the file lets you focus on the area you want. It lets you open a local file when you right-click on its link, and there are a few automation features, so this would make a solid choice of text editor.

TextPad ($24.70): TextPad is a simple editor with nifty features such as column select, drag-and-drop text between files, and hyperlink support. It also supports large files.

Aptana (free): Aptana Studio is one of the best editors working on both Windows and Linux. It is a complete web development environment with a nice blend of powerful authoring tools and a collection of online hosting and collaboration services. It is quite helpful, with support for PHP, CSS, FTP, and more.

SciTE (free): A SCIntilla-based text editor. It has gradually developed into a generally useful editor. It provides for building and running programs, and is best used for jobs with simple configurations. SciTE is currently available for Intel Win32 and Linux-compatible operating systems with GTK+. It has been run on Windows XP and on Fedora 8 and Ubuntu 7.10 with GTK+ 2.12.

E Text Editor ($34.96): E Text Editor is a newer text editor for Windows which works on Linux as well. It has powerful editing features and some unique abilities. It makes text manipulation quite fast and easy, and lets the user focus on writing, as it automatically does all the manual work. It can be extended in any language, and it supports TextMate bundles, thus allowing the user to tap into a huge and active community.

Editra (free): Editra is an up-and-coming editor with some fantastic features such as user profiles, auto-completion, session saving, and syntax highlighting for 60+ languages. Plugins can extend the feature set, offering an integrated Python console, FTP client, file browser, and calculator, among others.

PSPad (free): PSPad is good for writing CSS, as it brings an internal web browser and a macro recorder to the table. It also supports hex editing, and some degree of code compiling.

jEdit (free): A mature programmer's text editor which has taken a good deal of time to develop into what it is today. It is better than many costlier development tools due to its features and simplicity of use. It has been released as free software with full source code, provided under the terms of the GPL 2.0, which also adds to its attractiveness.
NEdit (free): A multi-purpose text editor for the X Window System, which also works on Linux. It combines a standard, easy-to-use graphical user interface with the full functionality and stability required by users who edit text for long periods a day. It also provides thorough support for development in various languages, and facilitates the use of text processors and other tools at the same time. It can be used productively by anyone who needs to edit text, and is quite a user-friendly tool. Its salient features include syntax highlighting with built-in patterns, auto indent, tab emulation, block indentation adjustment, etc. As of version 5.1, NEdit may be freely distributed under the terms of the GNU General Public License.

MadEdit (free): MadEdit is an open-source and cross-platform text/hex editor, written in C++ and wxWidgets. MadEdit can edit files in text/column/hex modes. It also supports many useful functions, such as syntax highlighting, word wrap, encodings for UTF-8/16/32, and others. It also supports word count, which makes it quite a useful text editor for both Windows and Linux. It was most recently updated on 10/09/2010.

KompoZer (free): KompoZer is a complete web authoring system that combines web file management with easy-to-use WYSIWYG web page editing. KompoZer has been designed to be completely and extensively easy to use. It is thus an ideal tool for non-technical computer users who want to create an attractive, professional-looking web site without knowing HTML or web coding. It is based on the NVU source code.

Vim (free): Vim, or "Vi IMproved", is an advanced text editor. Its salient features are syntax highlighting, word completion, and a huge amount of contributed content. Vim offers several "modes" for editing, which adds to its editing efficiency; this makes it less friendly to new users, but it is also a strength for its users. The normal mode binds alphanumeric keys to task-oriented commands. The visual mode highlights text. More tools for search and replace, defining functions, etc. are offered through command-line mode. Vim comes with complete help.

Notepad++ (free): One of the best free text editors for Windows out there, with support for simple things, like syntax highlighting and folding, all the way up to FTP. Notepad++ should tick most of the boxes.

Notepad2 (free): Notepad2 is also based on the Scintilla editing engine, but it's much simpler than Notepad++. It bills itself as fast, light-weight, and Notepad-like.

Crimson Editor (free): Crimson Editor has the ability to edit remote files using a built-in FTP client; there's also a spell checker.

TotalEdit (free): TotalEdit allows file comparison, regex search and replace, and has multiple options for file backup / versioning. For cleanup, it offers customizable (X)HTML and XML formatting, and a spell checker.

In-Type (free)

ConTEXT (free)

SourceEdit (free): SourceEdit includes features such as clipboard history, syntax highlighting and autocompletion for a decent set of languages, a hex editor and an FTP client.

RJ TextEd (free): RJ TextEd supports integration with TopStyle Lite and provides HTML validation and formatting. It includes an FTP client, a file browser, and a code browser, as well as a character map and support for email.

gedit (free): One of the best Coda alternatives for Windows and Linux. It has syntax highlighting and is well suited for programming. It has many attractive features, such as full support for UTF-8, undo/redo and clipboard support, search and replace, configurable syntax highlighting for various languages, and many more supportive features. It is extensible with plug-ins.

Other important Coda alternatives for Windows and Linux are Redcar, Bluefish Editor, NVU, RubyMine, SlickEdit, Geany, Editra, txt2html and CSSED. There are many more; it's up to the user to decide which one best suits his requirements.

Related posts: 10 Useful Text Editors For Developers; Applications to Install & Run Windows on Linux; Open Source WYSIWYG Text Editors

    Read the article

  • Update MySQL table using data from a text file through Java

    - by Karthi Karthi
    I have a text file with four lines, each line containing comma-separated values. My file is: Raj,[email protected],123455 kumar,[email protected],23453 shilpa,[email protected],765468 suraj,[email protected],876567 and I have a MySQL table which contains four fields: firstname lastname email phno ---------- ---------- --------- -------- Raj babu [email protected] 2343245 kumar selva [email protected] 23453 shilpa murali [email protected] 765468 suraj abd [email protected] 876567 Now I want to update my table using the data in the above text file through Java. I have tried using a BufferedReader to read from the file, used the split method with a comma as delimiter, and stored the result in an array. But it is not working. Any help appreciated. This is what I have tried so far: void readingFile() { try { File f1 = new File("TestFile.txt"); FileReader fr = new FileReader(f1); BufferedReader br = new BufferedReader(fr); String strln = null; strln = br.readLine(); while((strln=br.readLine())!=null) { // System.out.println(strln); arr = strln.split(","); strfirstname = arr[0]; strlastname = arr[1]; stremail = arr[2]; strphno = arr[3]; System.out.println(strfirstname + " " + strlastname + " " + stremail +" "+ strphno); } // for(String i : arr) // { // } br.close(); fr.close(); } catch(IOException e) { System.out.println("Cannot read from File." + e); } try { st = conn.createStatement(); String query = "update sampledb set email = stremail,phno =strphno where firstname = strfirstname "; st.executeUpdate(query); st.close(); System.out.println("sampledb Table successfully updated."); } catch(Exception e3) { System.out.println("Unable to Update sampledb table. " + e3); } } and the output I got is: Ganesh Pandiyan [email protected] 9591982389 Dass Jeyan [email protected] 9689523645 Exception in thread "main" java.lang.ArrayIndexOutOfBoundsException: 1 Gowtham Selvan [email protected] 9894189423 at TemporaryPackages.FileReadAndUpdateTable.readingFile(FileReadAndUpdateTable.java:35) at TemporaryPackages.FileReadAndUpdateTable.main(FileReadAndUpdateTable.java:72) Java Result: 1 @varadaraj: This is your code.... String stremail,strphno,strfirstname,strlastname; // String[] arr; Connection conn; Statement st; void readingFile() { try { BufferedReader bReader= new BufferedReader(new FileReader("TestFile.txt")); String fileValues; while ((fileValues = bReader.readLine()) != null) { String[] values=fileValues .split(","); strfirstname = values[0]; // strlastname = values[1]; stremail = values[1]; strphno = values[2]; System.out.println(strfirstname + " " + strlastname + " " + stremail +" "+ strphno); } bReader.close(); } catch (IOException e) { System.out.println("File Read Error"); } // for(String i : arr) // { // } try { st = conn.createStatement(); String query = "update sampledb set email = stremail,phno =strphno where firstname = strfirstname "; st.executeUpdate(query); st.close(); System.out.println("sampledb Table successfully updated."); } catch(Exception e3) { System.out.println("Unable to Update sampledb table. " + e3); } }
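    For what it's worth, the posted file has three fields per line (name, email, phone) while the loop indexes four (arr[3]), which is what triggers the ArrayIndexOutOfBoundsException; the extra readLine() before the loop also silently discards the first line, and the UPDATE embeds the literal words stremail/strphno/strfirstname instead of the values. A hedged sketch of the usual shape (table and column names taken from the question; assumes conn is an open JDBC connection):

        import java.io.BufferedReader;
        import java.io.FileReader;
        import java.io.IOException;
        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.SQLException;

        class UpdateFromFile {
            // Update one row per CSV line, binding the values instead of
            // splicing literal variable names into the SQL string.
            static void update(Connection conn) throws IOException, SQLException {
                String sql = "UPDATE sampledb SET email = ?, phno = ? WHERE firstname = ?";
                try (BufferedReader br = new BufferedReader(new FileReader("TestFile.txt"));
                     PreparedStatement ps = conn.prepareStatement(sql)) {
                    String line;
                    while ((line = br.readLine()) != null) {
                        String[] f = line.split(",");
                        if (f.length < 3) continue; // guard against short or blank lines
                        ps.setString(1, f[1]);      // email
                        ps.setString(2, f[2]);      // phno
                        ps.setString(3, f[0]);      // firstname
                        ps.executeUpdate();
                    }
                }
            }
        }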

    Read the article

  • Selective emboldening of text in a webpage

    - by Eknath Iyer
    While printing out UTF-8 characters onto a webpage, if I encapsulate them with <b> they get emboldened, but with anything else the page turns blank. Why? def main(): print "Content-type: text/html\r\n\r\n"; print '<html>' print '<head>' print '<style type="text/css">' print '.highlight { background-color: yellow }' print '.color1 { color: green; }' print '.color2 { color: blue; }' print '.color3 { color: purple; }' print '.color4 { color: red; }' print '.color5 { color: teal; }' print '.color6 { color: yellow; }' print '.color7 { color: orange; }' print '.color8 { color: violet; }' print '</style></head>' print '<body>' form = cgi.FieldStorage() ch = form.getvalue('choice') if ch == 'English': in_sent = form.getvalue('f1') in_sent = in_sent.lower() cho=0 elif ch == 'Hindi': in_sent = trans_he(form.getvalue('transl1').decode("utf-8")).strip() cho=1 #cho = 0 for english #cho = 1 for hindi adict=[] print '<center><u> User Input Sentence ==> <b>', in_sent,'</b></u></center><br>' in_sent=in_sent.strip().split(' ') colordict={} counter=1 for word in in_sent: colordict[word]=counter counter = counter + 1 f = open('bidirectional.alignment.txt','rb').read() records=f.strip().split('\n\n\n') for record in records: el=[] el2 = [] #basic file processing is done here. record = record.strip().split('\n') source = record[cho] target = record[(cho+1)%2] source_sent = source.split(' # ')[1] target_sent = target.split(' # ')[1] source_words = source_sent.strip().split(' ') target_words = target_sent.strip().split(' ') trans_index = source.split(' # ')[2].strip().split(' ') for word in in_sent: if word in source_words: if int(trans_index[source_words.index(word)]) > 0: tword=target_words[(int(trans_index[source_words.index(word)])-1)] target_sent = target_sent.replace(tword+' ','<b>'+tword+' </b>') # When the <b> tag is used here (for the 'target_sent = ...' statement), it is fine. But when <b> is replaced by something like the <span> in the next line, or even <i> or <u>, it doesn't show any output at all source_sent = source_sent.replace(word+' ','<span class="color1">'+word+' </span>') el2.append(source_sent) el2.append(target_sent) el.append(target_sent.count('<b>')) el.append(el2) if target_sent.count('<b>') > 0: adict.append(el) print '<table><tr><td><center><h1>SOURCE LANGUAGE</h1></center></td><td><center> <h1>TARGET LANGUAGE</h1></center></td></tr>' for entry in adict: print '<tr><td>',entry[1][0],'</td><td>',trans_eh(entry[1][1]).encode("utf-8"),'</td> </tr>' print '</table></body>' print '</html>' main()

    Read the article

  • Selecting and inserting text at cursor location in textfield with JS/jQuery

    - by IceCreamYou
    Hello. I have developed a system in PHP which processes #hashtags like on Twitter. Now I'm trying to build a system that will suggest tags as I type. When a user starts writing a tag, a drop-down list should appear beneath the textarea with other tags that begin with the same string. Right now, I have it working where if a user types the hash key (#) the list will show up with the most popular #hashtags. When a tag is clicked, it is inserted at the end of the text in the textarea. I need the tag to be inserted at the cursor location instead. Here's my code; it operates on a textarea with class "facebook_status_text" and a div with class "fbssts_floating_suggestions" that contains an unordered list of links. (Also note that the syntax [#my tag] is used to handle tags with spaces.) maxlength = 140; var dest = $('.facebook_status_text:first'); var fbssts_box = $('.fbssts_floating_suggestions'); var fbssts_box_orig = fbssts_box.html(); dest.keyup(function(fbss_key) { if (fbss_key.which == 51) { fbssts_box.html(fbssts_box_orig); $('.fbssts_floating_suggestions .fbssts_suggestions a').click(function() { var tag = $(this).html(); //This part is not well-optimized. if (tag.match(/W/)) { tag = '[#'+ tag +']'; } else { tag = '#'+ tag; } var orig = dest.val(); orig = orig.substring(0, orig.length - 1); var last = orig.substring(orig.length - 1); if (last == '[') { orig = orig.substring(0, orig.length - 1); } //End of particularly poorly optimized code. dest.val(orig + tag); fbssts_box.hide(); dest.focus(); return false; }); fbssts_box.show(); fbssts_box.css('left', dest.offset().left); fbssts_box.css('top', dest.offset().top + dest.outerHeight() + 1); } else if (fbss_key.which != 16) { fbssts_box.hide(); } }); dest.blur(function() { var t = setTimeout(function() { fbssts_box.hide(); }, 250); }); When the user types, I also need to get the 100 characters in the textarea before the cursor, and pass it (presumably via POST) to /fbssts/load/tags. The PHP back-end will process this, figure out what tags to suggest, and print the relevant HTML. Then I need to load that HTML into the .fbssts_floating_suggestions div at the cursor location. Ideally, I'd like to be able to do this: var newSuggestions = load('/fbssts/load/tags', {text: dest.getTextBeforeCursor()}); fbssts_box.html(fbssts_box_orig); $('.fbssts_floating_suggestions .fbssts_suggestions a').click(function() { var tag = $(this).html(); if (tag.match(/W/)) { tag = tag +']'; } dest.insertAtCursor(tag); fbssts_box.hide(); dest.focus(); return false; }); And here's the regex I'm using to identify tags (and @mentions) in the PHP back-end, FWIW. %(\A(#|@)(\w|(\p{L}\p{M}?))+\b)|((?<=\s)(#|@)(\w|(\p{L}\p{M}?))+\b)|(\[(#|@).+?\])%u Right now, my main hold-up is dealing with the cursor location. I've researched for the last two hours, and just ended up more confused. I would prefer a jQuery solution, but beggars can't be choosers. Thanks!
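    A hedged sketch of the two missing pieces (plain DOM; selectionStart/selectionEnd work on textareas in standards browsers, while old IE needs document.selection fallbacks this omits). Unwrap the jQuery object first, e.g. var field = dest[0];

        // Insert `text` into the textarea element `field` at the caret.
        function insertAtCursor(field, text) {
          var start = field.selectionStart, end = field.selectionEnd;
          field.value = field.value.substring(0, start) + text + field.value.substring(end);
          field.selectionStart = field.selectionEnd = start + text.length; // caret after insert
          field.focus();
        }

        // Up to 100 characters before the caret, e.g. to POST to /fbssts/load/tags.
        function textBeforeCursor(field) {
          var start = field.selectionStart;
          return field.value.substring(Math.max(0, start - 100), start);
        }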

    Read the article

  • Adding to database with multiple text boxes

    - by kira423
    What I am trying to do with this script is allow users to update a URL for their websites, and since each user isn't going to have the same number of websites, it is hard for me to just add $_POST['website'] for each of them. Here is the script <?php include("config.php"); include("header.php"); include("functions.php"); if(!isset($_SESSION['username']) && !isset($_SESSION['password'])){ header("Location: pubs.php"); } $getmember = mysql_query("SELECT * FROM `publishers` WHERE username = '".$_SESSION['username']."'"); $info = mysql_fetch_array($getmember); $getsites = mysql_query("SELECT * FROM `websites` WHERE publisher = '".$info['username']."'"); $postback = $_POST['website']; $webname = $_POST['webid']; if($_POST['submit']){ var_dump( $_POST['website'] ); $update = mysql_query("UPDATE `websites` SET `postback` = '$postback' WHERE name = '$webname'"); } print" <div id='center'> <span id='tools_lander'><a href='export.php'>Export Campaigns</a></span> <div id='calendar_holder'> <h3>Please define a postback for each of your websites below. The following variables should be used when creating your postback.<br /> cid = Campaign ID<br /> sid = Sub ID<br /> rate = Campaign Rate<br /> status = Status of Lead. 1 means payable 2 mean reversed<br /> A sample postback URL would be <br /> http://www.example.com/postback.php?cid=#cid&sid=#sid&rate=#rate&status=#status</h3> <table class='balances' align='center'> <form method='POST' action=''>"; while($website = mysql_fetch_array($getsites)){ print" <tr> <input type ='hidden' name='webid' value='".$website['id']."' /> <td style='font-weight:bold;'>".$website['name']."'s Postback:</td> <td><input type='text' style='width:400px;' name='website[]' value='".$website['postback']."' /></td> </tr>"; } print" <td style='float:right;position:relative;left:150px;'><input type='submit' name='submit' style='font-size:15px;height:30px;width:100px;' value='Submit' /></td> </form> </table> </div>"; include("footer.php"); ?> What I am attempting to do is insert what is entered in the text boxes for their corresponding websites. I cannot think of any other way to do it, and this obviously does not work: it returns a notice stating 'Array to string conversion'. If there is a more logical way to do this, please let me know.
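    A hedged sketch of the usual fix: make the hidden field an array as well (name='webid[]' instead of name='webid'), then walk both posted arrays in parallel, escaping each value (using the mysql_* API from the question; in newer code this would be a parameterized query):

        <?php
        // Assumes the form now renders webid[] and website[] so both post as arrays.
        if (isset($_POST['submit'])) {
            foreach ($_POST['website'] as $i => $postback) {
                $postback = mysql_real_escape_string($postback);
                $webid    = (int) $_POST['webid'][$i];
                mysql_query("UPDATE `websites` SET `postback` = '$postback' WHERE `id` = $webid");
            }
        }
        ?>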

    Read the article

  • How to allow admins to edit my app's config files without UAC elevation?

    - by Justin Grant
    My company produces a cross-platform server application which loads its configuration from user-editable configuration files. On Windows, config file ACLs are locked down by our Setup program to allow reading by all users but restrict editing to Administrators and Local System only. Unfortunately, on Windows Server 2008, even local administrators no longer have admin privileges (because of UAC) unless they're running an elevated app. This has caused complaints from users who cannot use their favorite text editor to open and save config file changes-- they can open the files (since anyone can read them) but can't save. Does anyone have recommendations for what we can do (if anything) in our app's Setup to make editing easier for admins on Windows Server 2008? Related questions: if a Windows Server 2008 admin wants to edit an admins-only config file, how does he normally do it? Is he forced to use a text editor which is smart enough to auto-elevate when elevation is needed, like Windows Explorer does in response to access-denied errors? Does he launch the editor from an elevated command-prompt window? Something else?

    Read the article

  • Dojo Widgets not rendering when returned in response to XhrPost

    - by lazynerd
    I am trying to capture the selected item in a Dijit Tree widget in order to render the remaining part of the web page. Here is the code that captures the selected item and sends it to the Django backend: <div dojoType="dijit.Tree" id="leftTree" store="leftTreeStore" childrenAttr="folders" query="{type:'folder'}" label="Explorer"> <script type="dojo/method" event="onClick" args="item"> alert("Execute of node " + termStore.getLabel(item)); var xhrArgs = { url: "/load-the-center-part-of-page", handleAs: "text", postData: dojo.toJson(leftTreeStore.getLabel(item), true), load: function(data) { dojo.byId("centerPane").innerHTML = data; //window.location = data; }, error: function(error) { dojo.byId("centerPane").innerHTML = "<p>Error in loading...</p>"; } } dojo.byId("centerPane").innerHTML = "<p>Loading...</p>"; var deferred = dojo.xhrPost(xhrArgs); </script> </div> The remaining part of the page contains HTML code with dojo widgets. This is the code sent back as the 'response' to the select-item event. Here is a snippet: <div dojoType="dijit.layout.TabContainer" id="tabs" jsId="tabs"> <div dojoType="dijit.layout.BorderContainer" title="Dashboard"> <div dojoType="dijit.layout.ContentPane" region="bottom"> first tab </div> </div> <div dojoType="dijit.layout.BorderContainer" title="Compare"> <div dojoType="dijit.layout.ContentPane" region="bottom"> Second Tab </div> </div> </div> It renders this HTML 'response' but without the dojo widgets. Is handleAs: "text" in XhrPost the culprit here?
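    One hedged guess at the cause: handleAs: "text" is fine, but markup assigned via innerHTML is never scanned for dojoType attributes, so the widgets are never instantiated. Asking the parser to walk the updated node after assignment is the usual missing step (a sketch of the load callback from the question; assumes dojo.require("dojo.parser") has run):

        load: function(data) {
            dojo.byId("centerPane").innerHTML = data;
            // Instantiate the dijit widgets declared in the injected markup.
            dojo.parser.parse(dojo.byId("centerPane"));
        },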

    Read the article

  • Awk filtering values between two files when regions intersect (any solutions welcome)

    - by user964689
    This is building upon an earlier question, Awk conditional filter one file based on another (or other solutions). I have an awk program that outputs a column from rows in a text file 'refGene.txt' if values in that row match 2 out of 3 values in another text file. I need to include an additional criterion for finding a match between the two files: a row should also be included if the range of the 2 numerical values specified in each row in file 1 overlaps with the range of the two values in a row in refGene.txt. An example of a line in file 1: chr1 10 20 chr2 10 20 and an example line in file 2 (refGene.txt) of the matching columns ($3, $5, $6): chr1 5 30 Currently the awk program does not treat this as a match because although the first column matches, neither the 2nd nor the 3rd column does. But I would like to treat this as a match because the region 10-20 in file 1 is WITHIN the range 5-30 in refGene.txt. However, the second line in file 1 should NOT match, because the first column does not match, which is necessary. If there is a way to include cases where the range in file 1 overlaps with the range in refGene.txt, that would be really helpful. It should also replace the conditional statements below, as it would also find all the cases they currently catch. Please let me know if my question is unclear. Any help is really appreciated, thanks in advance! (Solutions do not have to be in awk.) Rubal FILES=/files/*txt for f in $FILES ; do awk ' BEGIN { FS = "\t"; } FILENAME == ARGV[1] { pair[ $1, $2, $3 ] = 1; next; } { if ( pair[ $3, $5, $6 ] == 1 ) { print $13; } } ' $(basename $f) /files/refGene.txt > /files/results/$(basename $f) ; done
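    A hedged sketch of the interval-overlap variant (standalone awk, same file order as the script above: the regions file first, then refGene.txt): collect the regions from file 1 per chromosome, then for each refGene.txt row test the standard overlap condition start1 <= end2 && start2 <= end1.

        # overlap.awk - print column 13 of refGene.txt rows whose ($3,$5,$6)
        # region overlaps any (chrom,start,end) region from the first file.
        BEGIN { FS = "\t" }
        FNR == NR {                      # first file: store its regions per chromosome
            n[$1]++
            lo[$1, n[$1]] = $2
            hi[$1, n[$1]] = $3
            next
        }
        {
            for (i = 1; i <= n[$3]; i++)     # refGene.txt: $3=chrom, $5=start, $6=end
                if (lo[$3, i] <= $6 && $5 <= hi[$3, i]) { print $13; break }
        }

    Run as: awk -f overlap.awk file1.txt refGene.txt > results.txt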

    Read the article

  • find word and score based on positions

    - by ryder1211212
    Hey guys, I have a text file which I have divided into 4 parts. I want to search each part for the words that appear in it and score each word. Example: welcome to the national basketball finals,the basketball teams here today have come a long way. without much delay lets play basketball. I would want to return national = 1, as it appears in only one part, etc. I am working on determining text context using word position. I am working with C# and am not very good at text processing. Basically: if a word appears in all 4 sections it scores 4; if a word appears in 3 sections it scores 3; if a word appears in 2 sections it scores 2; if a word appears in 1 section it scores 1. Thanks in advance. So far I have this: var s = "welcome to the national basketball finals,the basketball teams here today have come a long way. without much delay lets play basketball. "; var numberOfParts = 4; var eachPartLength = s.Length / numberOfParts; var parts = new List<string>(); var words = Regex.Split(s, @"\W").Where(w => w.Length > 0); // this splits all words, removes empty strings var wordsIndex = 0; for (int i = 0; i < numberOfParts; i++) { var sb = new StringBuilder(); while (sb.Length < eachPartLength && wordsIndex < words.Count()) { sb.AppendFormat("{0} ", words.ElementAt(wordsIndex)); wordsIndex++; } // here you have the part Response.Write("[{0}]"+ sb); parts.Add(sb.ToString()); var allwords = parts.SelectMany(p => p.Split(' ').Distinct()); var wordsInAllParts = allwords.Where(w => parts.All(p => p.Contains(w))).Distinct();
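    A hedged sketch of the scoring step, picking up where the snippet above leaves off (it assumes parts already holds the four sections; note it compares whole words via sets, whereas p.Contains(w) would also match substrings, such as 'nation' inside 'national'):

        // Distinct words per part, then score = number of parts containing the word.
        var wordSets = parts
            .Select(p => new HashSet<string>(
                Regex.Split(p.ToLower(), @"\W+").Where(w => w.Length > 0)))
            .ToList();

        var scores = wordSets
            .SelectMany(set => set)
            .Distinct()
            .ToDictionary(w => w, w => wordSets.Count(set => set.Contains(w)));

        foreach (var kv in scores.OrderByDescending(kv => kv.Value))
            Response.Write(string.Format("{0} = {1}<br/>", kv.Key, kv.Value)); // e.g. basketball = 3, national = 1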

    Read the article
