Search Results

Search found 483 results on 20 pages for 'multidimensional'.

Page 18/20 | < Previous Page | 14 15 16 17 18 19 20  | Next Page >

  • How to represent a list of points in R

    - by Guido
    I am working with a large list of points (each point has three dimensions: x, y, z). I am pretty new to R, so I would like to know the best way to represent that kind of information. As far as I know, an array allows me to represent any multidimensional data, so currently I am using:

        > points <- array(c(1,2,0, 1,3,0, 2,4,0, 2,5,0, 2,7,0, 3,8,0), dim=c(3,6))
        > points
             [,1] [,2] [,3] [,4] [,5] [,6]
        [1,]    1    1    2    2    2    3    -- x dim
        [2,]    2    3    4    5    7    8    -- y dim
        [3,]    0    0    0    0    0    0    -- z dim

    The aim is to perform some computations to calculate the Euclidean distance between two sets of points (any hint in this sense would also be highly appreciated).
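    A minimal sketch of the distance computation in Python/NumPy (the question asks about R, so treat this as an illustration of the idea rather than an R answer; the two sets below are just the six points above split in half, stored one point per row):

        import numpy as np

        # One point per row (n x 3), i.e. the transpose of the 3 x 6 layout above.
        a = np.array([[1, 2, 0], [1, 3, 0], [2, 4, 0]], dtype=float)
        b = np.array([[2, 5, 0], [2, 7, 0], [3, 8, 0]], dtype=float)

        # Pairwise Euclidean distances: d[i, j] is the distance from a[i] to b[j].
        diff = a[:, None, :] - b[None, :, :]     # broadcast to shape (3, 3, 3)
        d = np.sqrt((diff ** 2).sum(axis=-1))
        print(d)

    R's built-in dist() computes the same kind of matrix for the rows of a single matrix, which is one reason a points-as-rows layout tends to be convenient.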

    Read the article

  • Create a PHP cache system in MySQL database?

    - by Zach Smith
    I'm creating a web service that often scrapes data from remote web pages. After scraping this data, I have a simple multidimensional array of information to use. The scraping process is fairly taxing on my server, and the page load takes a while. I was considering adding a simple cache system using a MySQL database, where I create one row per remote web page, with the array of information pulled from it stored as a JSON-encoded string. Is this a good enough system? Or would something like a text file per web page be a better idea?
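    A minimal sketch of the row-per-page idea, in Python with SQLite for brevity (the question is about PHP/MySQL; the table and column names here are made up):

        import json
        import sqlite3
        import time

        conn = sqlite3.connect("cache.db")
        conn.execute("""CREATE TABLE IF NOT EXISTS page_cache (
                            url TEXT PRIMARY KEY,
                            payload TEXT NOT NULL,    -- JSON-encoded scrape result
                            fetched_at REAL NOT NULL)""")

        def get_cached(url, max_age=3600):
            # Return the decoded array for a fresh row, or None on a miss.
            row = conn.execute("SELECT payload, fetched_at FROM page_cache "
                               "WHERE url = ?", (url,)).fetchone()
            if row and time.time() - row[1] < max_age:
                return json.loads(row[0])
            return None

        def put_cached(url, data):
            conn.execute("INSERT OR REPLACE INTO page_cache VALUES (?, ?, ?)",
                         (url, json.dumps(data), time.time()))
            conn.commit()

    Compared with one text file per page, a database row buys you expiry timestamps, atomic replacement and concurrent access without extra locking code.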

    Read the article

  • Library of ALL countries and their states for address forms

    - by Chris
    I need a library that includes all countries and their respective codes in one array, a second multidimensional array mapping each country to its states, and a last array mapping state codes to state names. I found tons of matches for similar requests, but none actually have a simple CSV or XML with just that. My goal is to build an address form where you select your country and it then pre-populates the state dropdown (if the country has states) with that country's states. It shouldn't be so hard to find, but I guess I'm blind. Thanks so much for all help!
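    For the shape of the three structures the question describes, a sketch as Python literals (the codes and names are a tiny made-up sample; assembling the full dataset is the actual hard part):

        # 1. country code -> country name
        countries = {"US": "United States", "CA": "Canada", "DE": "Germany"}

        # 2. country code -> list of state codes (countries without states are omitted)
        country_states = {"US": ["AL", "AK"], "CA": ["AB", "BC"]}

        # 3. state code -> state name
        state_names = {"AL": "Alabama", "AK": "Alaska",
                       "AB": "Alberta", "BC": "British Columbia"}

    The form handler then only has to check whether the selected country appears in country_states before rendering the dropdown.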

    Read the article

  • What's the difference between FireBug's console.log() and console.debug()?

    - by 6bytes
    A very simple piece of code to illustrate the difference:

        var x = [0, 3, 1, 2];
        console.debug('debug', x);
        console.log('log', x);
        // the two calls above display the same result

        x.splice(1, 2);

        // the two calls below display kind of a different result
        console.debug('debug', x);
        console.log('log', x);

    The JavaScript value is exactly the same, but console.log() displays it a bit differently than it did before the splice() method was applied. Because of this I lost quite a few hours, as I thought splice was acting funny, making my array multidimensional or something. I just want to know why it works like that. Does anyone know? :)

    Read the article

  • SSAS Tabular Workshop online and other upcoming dates (and updates!) #ssas #tabular

    - by Marco Russo (SQLBI)
    After many conferences and trips, this summer I had some time to write and prepare new sessions for the next wave of conferences. In reality I am just doing that, even though I have already restarted traveling for consulting and training. So expect new content about DAX and Tabular in the coming months! Starting to see real customers adopt Tabular is surfacing many new challenges, and there is still a lot to learn and to create. If you still haven't started working on Tabular, well, you should. As I always say, as a BI developer you should be able to choose between Tabular and Multidimensional, and in order to do that you should know both of them! One thing that I don't like very much about the marketing message is that “Tabular is simpler”, because it's often translated into “Tabular is for simple projects”, and that last statement is not true. Actually, I see a lot of good reasons to adopt Tabular in complex data models, especially in non-traditional scenarios. I know, this is because I love to understand the actual limits of a technology, and I'm learning that there is simply a lot of room for improvement in Tabular, too. It's already fast, but it could be faster! How can you start? Well, first of all, by reading our book. Then, by attending our SSAS Tabular workshop. There is an online edition of the workshop on September 3-4, 2012 (hurry up if you want to register), and there are already several dates planned for the next months (and others will be added soon!). And, of course, by installing SQL Server 2012 and trying to create models over your databases. If you are too lazy, just start with PowerPivot. As soon as you start working with Tabular or PowerPivot, you will see that there is one important skill you need: learning DAX. In the next few days I should publish an article that I'm finishing about best practices using SUMMARIZE and ADDCOLUMNS. If only someone had published that article a year ago, I would have saved many hours of my life. But, you know, flight manuals are written in blood… and someone has to write them! Stay tuned.

    Read the article

  • AdventureWorks 2014 Sample Databases Are Now Available

    - by aspiringgeek
    Where in the World is AdventureWorks? Recently, SQL community feedback from Twitter prompted me to look in vain for SQL Server 2014 versions of the AdventureWorks sample databases we've all grown to know & love. I searched Codeplex, then used the bing & even the google in an effort to locate them, yet all I could find were samples on different sites highlighting specific technologies, an incomplete collection inconsistent with the experience we users had learned to expect. I began pinging internally & learned that an update to AdventureWorks wasn't even on the road map. Fortunately, SQL marketing manager Luis Daniel Soto Maldonado (t) lent a sympathetic ear & got the update ball rolling; his direct report Darmodi Komo recently announced the release of the shiny new sample databases for OLTP, DW, Tabular, and Multidimensional models to supplement the extant In-Memory OLTP sample DB.
    What Success Looks Like: In my correspondence with the team, here's how I defined success:
    1. Sample AdventureWorks DBs hosted on Codeplex showcasing SQL Server 2014's latest-&-greatest features, including In-Memory OLTP (aka Hekaton), Clustered Columnstore, Online Operations, and Resource Governor IO
    2. Where it makes sense to do so, consolidated DBs (e.g., showcasing Columnstore likely involves a separate DW DB)
    3. Documentation to support experimenting with these features
    As Microsoft Senior SDE Bonnie Feinberg (b) stated, “I think it would be great to see an AdventureWorks for SQL 2014. It would be super helpful for third-party book authors and trainers. It also provides a common way to share examples in blog posts and forum discussions, for example.” Exactly. We've established a rich & robust tradition of sample databases on Codeplex. This is what our community & our customers expect. The prompt response achieves what we all aim to do, i.e., manifests the Service Design Engineering mantra of “delighting the customer”. Kudos to Luis's team in SQL Server Marketing & Kevin Liu's team in SQL Server Engineering for doing so.
    Download AdventureWorks 2014: Download your copies of the SQL Server 2014 AdventureWorks sample databases here.

    Read the article

  • SQL SERVER – Free eBook Download – EPUB, MOBI, PDF Format

    - by pinaldave
    Microsoft has recently released free eBooks on various Microsoft technologies. The best part is that all these books are available in ePub, Mobi and PDF. You can download them to your local machine or eBook reader and read them. This is a great start, as many important subjects are now covered and converted into eBooks. I personally read through a few of the books and found they are very comprehensive and detailed. The goal is not to cover a complete technology in a single book but rather to pick a single topic and discuss it in detail. The sources of the books are white papers, the TechNet wiki, as well as Books Online, and the source is clearly listed right below each book title. The following books are available for SQL Server technology, and I encourage all of you to have a look at them as they are great resources:
    - Master Data Services Capacity Guidelines
    - Microsoft SQL Server AlwaysOn Solutions Guide for High Availability and Disaster Recovery
    - Microsoft SQL Server Analysis Services Multidimensional Performance and Operations Guide
    - QuickStart: Learn DAX Basics in 30 Minutes
    - SQL Server 2012 Transact-SQL DML Reference
    You can download the above eBooks from here. This is indeed a great attempt, as each book talks about a single subject in depth, keeping the author's focus on a single, simple subject. I have previously written two books focusing on a single subject each, and I had great pleasure writing them as well. Writing on a focused subject gives the author complete freedom to explore that subject without the burden of covering everything associated with the technology at large. Just like the eBooks mentioned earlier, my SQL Server Wait Stats book was inspired by my article series on SQL wait stats. The latest book, SQL Server Interview Questions and Answers, was derived from my article series on SQL interview Q&A. Writing a book is an absolutely different exercise from writing blog posts. When I was converting my blog posts into books, I ended up writing 50% new material and removing much of the repetitive content that shows up in a blog series. It was indeed fun to write a focused book, and at the same time it was a great learning experience. Reference: TechNet Wiki, Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Documentation, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Oracle Service Cloud May 2014 Release – Focus on your driving by JP Saunders

    - by Tuula Fai
    The next time you’re twiddling dials on your car’s dashboard to get the air to blow in the right direction, and the right song to play on the stereo, while pulling on the wires to charge your phone and punching in passwords to re-sync your hands-free headset to take a call, consider this… Does having a better dashboard UI in your car improve your driving performance? The Tesla car has one of the most modern and intuitive dashboards in any commercial car today. It is actually based on the design of a smart phone, which can download apps and updates directly from the cloud. The 17” touchscreen, Linux-based dashboard totally integrates all channels and devices, allowing the driver to focus on the smooth driving and power of this luxury (toy) car. What the folks at Tesla didn't do was avoid the complexity of our needs. Instead, they streamlined them. And, while we might not all be able to afford a Tesla, their approach demonstrates that a modern UI approach can ultimately make a positive difference in our lives and businesses. This is why the productivity and effectiveness of a Modern Contact Center is many times greater than that of a traditional contact center. Agents in a Modern Contact Center get to focus on the task at hand, the customer engagement, rather than stumbling their way through Lego blocks of complexity. The Oracle Service Cloud is a modern approach to customer service that empowers your agents to achieve greater focus on improving your operational and strategic success through streamlined business processes. Here are some of the highlights of the May 2014 release of the Oracle Service Cloud:
    - Performance Enhanced Desktop UI: A modern agent desktop interface that streamlines clumsy tasks, logins, screens and workflows, and is optimized for agent and system performance. Improvements include faster drag-and-drop configurable views and saved searches, and improved caching for high-speed performance even during disconnected or slow internet access.
    - Customer Experience Routing: A streamlined, automatic way to connect the right customer need to the agent with the best skills, based on multidimensional variables such as product skills, language skills, workload, and call volume, to optimize the connection and resolution experience.
    - On-The-Go Mobile: Improvements to the agent mobile app that extend connectivity to websites, plus customer surveys that are mobile-ready, rendered for any device, and ensure the customer’s voice is captured while the insight is still top of mind.
    - Infused Social Engagement: Enhancements to infused social capabilities allow agents to respond in social threads directly from within the agent desktop, with the information becoming part of the incident record so automatic actions (such as reply or escalate) can be triggered off the response.
    - Front-End Siebel Contact Center: The market-leading online web customer self-service interface from the Oracle Service Cloud is now out-of-the-box ready for Oracle Siebel customers. Deploy a new online web self-service interface in a matter of weeks to let customers self-serve and self-solve, with escalated incidents routed directly into the Oracle Siebel Contact Center.
    For more information on the latest enhancements for the Oracle Service Cloud, please see the Oracle Service Cloud May 2014 Capabilities and Benefits. Related blogs: Oracle Service Cloud Feb 2014

    Read the article

  • WebCenter Customer Spotlight: Guizhou Power Grid Company

    - by me
    Author: Peter Reiser - Social Business Evangelist, Oracle WebCenter
    Solution Summary: Guizhou Power Grid Company is responsible for power grid planning, construction, management, and power distribution in Guizhou Province, serving 39 million people. Guizhou has 49,823 employees and an annual revenue of over $5 billion. The business objectives were to consolidate information contained in disparate systems into a single knowledge repository and provide a safe and efficient way for staff and managers to access, query, share, manage, and store business information. Guizhou Power Grid Company saved more than US$693,000 in storage costs, reduced average search times from 180 seconds to 5 seconds, and solved 80% to 90% of technology and maintenance issues by searching the Oracle WebCenter Content management system.
    Company Overview: A wholly owned subsidiary of China Southern Power Grid Company Limited, Guizhou Power Grid Company is responsible for power grid planning, construction, management, and power distribution in Guizhou Province, serving 39 million people. Guizhou has 49,823 employees and an annual revenue of over $5 billion.
    Business Challenges: The business objectives were to consolidate information contained in disparate systems, such as the customer relationship management and power grid management systems, into a single knowledge repository and provide a safe and efficient way for staff and managers to access, query, share, manage, and store business information.
    Solution Deployed: Guizhou Power Grid Company implemented Oracle WebCenter Content to build a content management system that enabled the secure, integrated management and storage of information, such as documents, records, images, Web content, and digital assets. The content management solution was integrated with the power grid, customer service, maintenance, and other business systems, as well as the corporate Web site.
    Business Results:
    - Saved more than US$693,000 in storage costs and shortened the material distribution time by integrating the knowledge management solution with the power grid, customer service, maintenance, and other business systems, as well as the corporate Web site
    - Enabled staff to search 31,650 documents using catalogs, multidimensional attributes, and knowledge maps, reducing average search times from 180 seconds to 5 seconds and saving approximately 1,539 hours in annual search time
    - Gained comprehensive document management, format transformation, security, and auditing capabilities
    - Enabled users to upload new documents and supervisors to check the accuracy of these documents online, resulting in improved information quality control
    - Solved 80% to 90% of technology and maintenance issues by searching the Oracle content management system for information, ensuring IT staff can respond quickly to users’ technical problems
    - Improved security by using role-based access controls to restrict access to confidential documents and information
    - Supported the efficient classification of corporate knowledge by using Oracle’s metadata functions to collect, tag, and archive documents, images, Web content, and digital assets
    “We chose Oracle WebCenter Content, as it is an outstanding integrated content management platform. It has allowed us to establish a system to access, query, share, manage, and store our corporate assets. This has laid a solid foundation for Guizhou Power Grid Company to improve management practices.” Luo Sixi, Senior Information Consultant, Guizhou Power Grid Company
    Additional Information: Guizhou Power Grid Company Customer Snapshot, Oracle WebCenter Content

    Read the article

  • Programming tourism

    - by Andrew_B
    I'm going on vacation to Paris, France for 10 days. Actually, it's my girlfriend's wish to go there, but I'm not very interested in visiting, sightseeing, etc. Recently, I came up with the idea of trying to do something like programming tourism. :) I'd like to do something related to programming in a startup-like company. I do not want a salary or any kind of compensation. I want to get an overview of the process, social aspects, environment and "what it feels like" to develop software in another country. I'm from Russia. I've been a software developer since 2003. I prefer C# 4 but I'm ready to use anything Turing-complete. I have some MS certifications and am familiar with all .NETs since 1.1. Currently I'm finishing a PhD in CS. I'm interested in multidimensional indexing and I can turn any piece of data and code into an OLAP system. :) But that'd take too much time. What can I do? I have no more than one week, and I want a totally complete project in a short amount of time:
    - Implement some features in a well-tested project
    - Do a code review
    - Debug memory, performance and concurrency issues
    - Do unit testing
    So, about the questions: Is it legal? I'm ready to sign an NDA if necessary. I'll have a tourist visa. Is it possible? I'm really sure that bureaucratic companies with lots of HRs and PMs will not allow such experiments, but small companies can afford it. I'm ready to guarantee support of my code after leaving for home :) P.S. I still haven't started learning French :) I hope it will not take too much time :) P.P.S. Yes, it's girlfriend-approved. What's in it for me? It's fun. It's fun to see new systems and the people who created them. It's fun to complete meaningful things. Quickly. What's in it for them? A feature, debugging, review or testing. If my short-term colleagues like this style of working, I can invite them to make the same trip to my company :) I think in Russia it's even more exciting :)

    Read the article

  • OWB 11gR2 – OLAP and Simba

    - by David Allan
    Oracle Warehouse Builder was the first ETL product to provide a single integrated and complete environment for managing enterprise data warehouse solutions that also incorporate multidimensional schemas. The OWB 11gR2 release provides Oracle OLAP 11g deployment for multidimensional models (in addition to support for prior releases of OLAP). This means users can easily utilize Simba's MDX Provider for Oracle OLAP (see here for details and cost), which allows you to use the powerful and popular ad hoc query and analysis capabilities of Microsoft Excel PivotTables® and PivotCharts® with your Oracle OLAP business intelligence data. The extensions to the dimensional modeling capabilities have been built on established relational concepts, with the option to seamlessly move from a relational deployment model to a multidimensional model at the click of a button. This means that ETL designers can logically model a complete data warehouse solution using one single tool and control the physical implementation of the logical model at deployment time. As a result, data warehouse projects that need to provide a multidimensional model as part of the overall solution can be designed and implemented faster and more efficiently. Wizards for dimensions and cubes let you quickly build dimensional models and realize them either relationally or as an Oracle database OLAP implementation; both 10g and 11g formats are supported, based on a configuration option. The wizard provides a good first-cut definition, and the objects can be further refined in the editor. Both wizards let you choose the implementation; to deploy to OLAP in the database, select MOLAP: multidimensional storage. You will then be asked what levels and attributes are to be defined; by default the wizard creates a level-based hierarchy, and parent-child hierarchies can be defined in the editor. Once the dimension or cube has been designed, there are special mapping operators that make it easy to load data into the objects; below we load a constant value for the total level and the other levels from a source table. Again, when the cube is defined using the wizard, we can edit the cube and define a number of analytic calculations by using the 'generate calculated measures' option on the measures panel. This lets you very easily add a lot of rich analytic measures to your cube. For example, one of the measures is the percentage difference from a year ago, which we can see in detail below. You can also add your own custom calculations to leverage the capabilities of the Oracle OLAP option, from selecting existing template types, such as moving averages, to defining true custom expressions. The 11g OLAP option now supports percentage-based summarization (the amount of data to precompute and store); this is available from the 'cost based aggregation' option in the cube's configuration. Ensure all measure-dimension level-based aggregation is switched off (on the cube-dimension panel); previously, level-based aggregation was the only option. The 11g generated code now uses the new unified API, as you see below; to generate the code, OWB needs a valid connection to a real schema. This was not needed before 11gR2 and is a new requirement, since the OLAP API which OWB uses is not an offline one. Once all of the objects are deployed and the maps executed, then we get to the fun stuff! How can we analyze the data?
    One option which is powerful and at many users' fingertips is using Microsoft Excel PivotTables® and PivotCharts®, which can be used with your Oracle OLAP business intelligence data by utilizing Simba's MDX Provider for Oracle OLAP (see the Simba site for details of cost). I'll leave the exotic reporting illustrations to the experts (see Bud's demonstration here), but with Simba's MDX Provider for Oracle OLAP it's very simple to access the analytics stored in the database (all built and loaded via the OWB 11gR2 release) and get the regular features of Excel at your fingertips, such as using the conditional formatting features, for example. That's a very quick run through of OWB 11gR2 with respect to Oracle 11g OLAP integration and the reporting using Simba's MDX Provider for Oracle OLAP. Not a deep dive in any way, but a quick overview to illustrate the design capabilities and integrations possible.

    Read the article

  • Anatomy of a .NET Assembly - Signature encodings

    - by Simon Cooper
    If you've just joined this series, I highly recommend you read the previous posts in this series, starting here, or at least these posts, covering the CLR metadata tables. Before we look at custom attribute encoding, we first need to have a brief look at how signatures are encoded in an assembly in general.
    Signature types: There are several types of signatures in an assembly, all of which share a common base representation, and all of which are stored as binary blobs in the #Blob heap, referenced by an offset from various metadata tables. The types of signatures are:
    - Method definition and method reference signatures
    - Field signatures
    - Property signatures
    - Method local variables (referenced from the StandAloneSig table, which is in turn referenced by method body headers)
    - Generic type specifications, representing a particular instantiation of a generic type
    - Generic method specifications, similarly representing a particular instantiation of a generic method
    All these signatures share the same underlying mechanism to represent a type.
    Representing a type: All metadata signatures are based around the ELEMENT_TYPE structure. This assigns a number to each 'built-in' type in the framework; for example, UInt16 is 0x07, String is 0x0e, and Object is 0x1c. Byte codes are also used to indicate SzArrays, multidimensional arrays, custom types, and generic type and method variables. However, these require some further information. Firstly, custom types (ie not one of the built-in types) require you to specify the 4-byte TypeDefOrRef coded token after the CLASS (0x12) or VALUETYPE (0x11) element type. This 4-byte value is stored in a compressed format before being written out to disk (for more excruciating details, you can refer to the CLI specification). SzArrays simply have the array item type after the SZARRAY byte (0x1d). Multidimensional arrays follow the ARRAY element type with a series of compressed integers indicating the number of dimensions, and the size and lower bound of each dimension. Generic variables are simply followed by the index of the generic variable they refer to. There are other additions as well; for example, a specific byte value indicates a method parameter passed by reference (BYREF), and other values indicate custom modifiers.
    Some examples... To demonstrate, here's a few examples and what the resulting blobs in the #Blob heap will look like. Each name in capitals corresponds to a particular byte value in the ELEMENT_TYPE or CALLCONV structure, and coded tokens to custom types are represented by the type name in curly brackets.
    A simple field:
        int intField;
        FIELD I4
    A field of an array of a generic type parameter (assuming T is the first generic parameter of the containing type):
        T[] genArrayField
        FIELD SZARRAY VAR 0
    An instance method signature (note how the number of parameters does not include the return type):
        instance string MyMethod(MyType, int&, bool[][]);
        HASTHIS DEFAULT 3 STRING CLASS {MyType} BYREF I4 SZARRAY SZARRAY BOOLEAN
    A generic type instantiation:
        MyGenericType<MyType, MyStruct>
        GENERICINST CLASS {MyGenericType} 2 CLASS {MyType} VALUETYPE {MyStruct}
    For more complicated examples, in the following C# type declaration:
        GenericType<T> : GenericBaseType<object[], T, GenericType<T>> { ... }
    the Extends field of the TypeDef for GenericType will point to a TypeSpec with the following blob:
        GENERICINST CLASS {GenericBaseType} 3 SZARRAY OBJECT VAR 0 GENERICINST CLASS {GenericType} 1 VAR 0
    And a static generic method signature (generic parameters on types are referenced using VAR, generic parameters on methods using MVAR):
        TResult[] GenericMethod<TInput, TResult>(TInput, System.Converter<TInput, TResult>);
        GENERIC 2 2 SZARRAY MVAR 1 MVAR 0 GENERICINST CLASS {System.Converter} 2 MVAR 0 MVAR 1
    As you can see, complicated signatures are recursively built up out of quite simple building blocks to represent all the possible variations in a .NET assembly. Now we've looked at the basics of normal method signatures, in my next post I'll look at custom attribute application signatures, and how they are different to normal signatures.
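    As an aside, the 'compressed format' mentioned above is simple enough to sketch. A minimal illustration in Python of the ECMA-335 (§II.23.2) compressed unsigned integer encoding, not the blog's own code:

        def compress_uint(value):
            # ECMA-335 II.23.2: 1 byte for values up to 0x7F, 2 bytes with high
            # bits 10 up to 0x3FFF, 4 bytes with high bits 110 up to 0x1FFFFFFF.
            if value <= 0x7F:
                return bytes([value])
            if value <= 0x3FFF:
                return bytes([0x80 | (value >> 8), value & 0xFF])
            if value <= 0x1FFFFFFF:
                return bytes([0xC0 | (value >> 24), (value >> 16) & 0xFF,
                              (value >> 8) & 0xFF, value & 0xFF])
            raise ValueError("value too large for a compressed integer")

        print(compress_uint(0x03).hex())    # '03'       - single byte
        print(compress_uint(0x3FFF).hex())  # 'bfff'     - two bytes
        print(compress_uint(0x4000).hex())  # 'c0004000' - four bytes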

    Read the article

  • The five steps of business intelligence adoption: where are you?

    - by Red Gate Software BI Tools Team
    When I was in Orlando and New York last month, I spoke to a lot of business intelligence users. What they told me suggested a path of BI adoption. The user’s place on the path depends on the size and sophistication of their organisation.
    Step 1: A company with a database of customer transactions will often want to examine particular data, like revenue and unit sales over the last period for each product and territory. To do this, they probably use simple SQL queries or stored procedures to produce data on demand.
    Step 2: The results from step one are saved in an Excel document, so business users can analyse them with filters or pivot tables. Alternatively, SQL Server Reporting Services (SSRS) might be used to generate a report of the SQL query for display on an intranet page.
    Step 3: If these queries are run frequently, or business users want to explore data from multiple sources more freely, it may become necessary to create a new database structured for analysis rather than CRUD (create, retrieve, update, and delete). For example, data from more than one system — plus external information — may be incorporated into a data warehouse. This can become ‘one source of truth’ for the business’s operational activities. The warehouse will probably have a simple ‘star’ schema, with fact tables representing the measures to be analysed (e.g. unit sales, revenue) and dimension tables defining how this data is aggregated (e.g. by time, region or product). Reports can be generated from the warehouse with Excel, SSRS or other tools.
    Step 4: Not too long ago, Microsoft introduced an Excel plug-in, PowerPivot, which allows users to bring larger volumes of data into Excel documents and create links between multiple tables. These BISM Tabular documents can be created by the database owners or other expert Excel users and viewed by anyone with Excel PowerPivot. Sometimes, business users may use PowerPivot to create reports directly from the primary database, bypassing the need for a data warehouse. This can introduce problems when there are misunderstandings of the database structure or no single ‘source of truth’ for key data.
    Step 5: Steps three or four are often enough to satisfy business intelligence needs, especially if users are sophisticated enough to work with the warehouse in Excel or SSRS. However, sometimes the relationships between data are too complex or the queries which aggregate across periods, regions etc. are too slow. In these cases, it can be necessary to formalise how the data is analysed and pre-build some of the aggregations. To do this, a business intelligence professional will typically use SQL Server Analysis Services (SSAS) to create a multidimensional model — or “cube” — that more simply represents key measures and aggregates them across specified dimensions.
    Step five is where our tool, SSAS Compare, becomes useful, as it helps review and deploy changes from development to production. For us at Red Gate, the primary value of SSAS Compare is to establish a dialog with BI users, so we can develop a portfolio of products that support creation and deployment across a range of report and model types. For example, PowerPivot and the new BISM Tabular model create a potential customer base for tools that extend beyond BI professionals. We’re interested in learning where people are in this story, so we’ve created a six-question survey to find out. Whether you’re at step one or step five, we’d love to know how you use BI so we can decide how to build tools that solve your problems.
    So if you have sixty seconds to spare, tell us on the survey!

    Read the article

  • High performance text file parsing in .net

    - by diamandiev
    Here is the situation: I am making a small program to parse server log files. I tested it with a log file with several thousand requests (between 10,000 and 20,000, I don't know exactly). What I have to do is load the log text files into memory so that I can query them. This is taking the most resources. The methods that take the most CPU time are these (worst culprits first):
    - string.Split - splits the line values into an array of values
    - string.Contains - checking if the user agent contains a specific agent string (determining the browser ID)
    - string.ToLower - various purposes
    - StreamReader.ReadLine - to read the log file line by line
    - string.StartsWith - determining if the line is a column definition line or a line with values
    There were some others that I was able to replace. For example, the dictionary getter was taking lots of resources too, which I had not expected, since it's a dictionary and should have its keys indexed. I replaced it with a multidimensional array and saved some CPU time. Now I am running on a fast dual core and the total time it takes to load the file I mentioned is about 1 second. Now this is really bad. Imagine a site that has tens of thousands of visits a day. It's going to take minutes to load the log file. So what are my alternatives? If any, because I think this is just a .NET limitation and I can't do much about it.
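    The question is .NET-specific, but the usual remedy is not: scan each line once, split once, lowercase once, and keep the derived values. A minimal sketch in Python (the field layout and agent substrings are made up):

        AGENTS = ("msie", "firefox", "chrome")   # hypothetical user-agent markers

        def parse_log(path):
            records = []
            with open(path, encoding="utf-8") as f:
                for line in f:
                    if line.startswith("#"):               # column-definition line
                        continue
                    fields = line.rstrip("\n").split(" ")  # split once per line
                    ua = fields[-1].lower()                # lowercase once per line
                    browser = next((a for a in AGENTS if a in ua), "other")
                    records.append((fields[0], browser))
            return records

    Keeping every expensive string operation at exactly one call per line is about as cheap as the parse can get without changing the file format itself.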

    Read the article

  • Reading a large file into Perl array of arrays and manipulating the output for different purposes

    - by Brian D.
    Hello, I am relatively new to Perl and have only used it for converting small files into different formats and feeding data between programs. Now, I need to step it up a little. I have a file of DNA data that is 5,905 lines long, with 32 fields per line. The fields are not delimited by anything and vary in length within the line, but each field is the same size on all 5,905 lines. I need each line fed into a separate array from the file, and each field within the line stored as its own variable. I am having no problems storing one line, but I am having difficulties storing each line successively through the entire file. This is how I separate the first line of the full array into individual variables:
        my $SampleID = substr("@HorseArray", 0, 7);
        my $PopulationID = substr("@HorseArray", 9, 4);
        my $Allele1A = substr("@HorseArray", 14, 3);
        my $Allele1B = substr("@HorseArray", 17, 3);
        my $Allele2A = substr("@HorseArray", 21, 3);
        my $Allele2B = substr("@HorseArray", 24, 3);
        ...etc.
    My issues are: 1) I need to store each of the 5,905 lines as a separate array. 2) I need to be able to reference each line based on the sample ID, or a group of lines based on population ID, and sort them. I can sort and manipulate the data fine once it is defined in variables; I am just having trouble constructing a multidimensional array with each of these fields so I can reference each line at will. Any help or direction is much appreciated. I've pored over the Q&A sections on here, but have not found the answer to my questions yet. Thanks!! -Brian
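    A sketch of the fixed-width approach in Python (the question is about Perl; the offsets are the made-up ones from the excerpt, and horses.txt is a hypothetical data file):

        from collections import defaultdict

        # (start, length) pairs for each fixed-width field, as in the excerpt.
        FIELDS = {"sample_id": (0, 7), "population_id": (9, 4),
                  "allele_1a": (14, 3), "allele_1b": (17, 3)}

        def parse_line(line):
            # Slice each field out of the line once; strip the padding.
            return {name: line[start:start + length].strip()
                    for name, (start, length) in FIELDS.items()}

        with open("horses.txt", encoding="utf-8") as f:
            records = [parse_line(line) for line in f]

        by_sample = {r["sample_id"]: r for r in records}   # lookup by sample ID
        by_population = defaultdict(list)                  # group by population ID
        for r in records:
            by_population[r["population_id"]].append(r)

    A table of offsets plus one record per line is the same structure an array-of-arrays gives in Perl, with the field names kept in exactly one place.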

    Read the article

  • JSON Twitter List in C#.net

    - by James
    Hi, my code is below. I am not able to extract the 'name' and 'query' lists from the JSON via a DataContracted class (below). I have spent a long time trying to work this one out, and could really do with some help... My JSON string:
        {"as_of":1266853488,"trends":{"2010-02-22 15:44:48":[{"name":"#nowplaying","query":"#nowplaying"},{"name":"#musicmonday","query":"#musicmonday"},{"name":"#WeGoTogetherLike","query":"#WeGoTogetherLike"},{"name":"#imcurious","query":"#imcurious"},{"name":"#mm","query":"#mm"},{"name":"#HumanoidCityTour","query":"#HumanoidCityTour"},{"name":"#awesomeindianthings","query":"#awesomeindianthings"},{"name":"#officeformac","query":"#officeformac"},{"name":"Justin Bieber","query":"\"Justin Bieber\""},{"name":"National Margarita","query":"\"National Margarita\""}]}}
    My code:
        WebClient wc = new WebClient();
        wc.Credentials = new NetworkCredential(this.Auth.UserName, this.Auth.Password);
        string res = wc.DownloadString(new Uri(link));
        // the download string gives me the above JSON string - no problems
        Trends trends = new Trends();
        Trends obj = Deserialise<Trends>(res);

        private T Deserialise<T>(string json)
        {
            T obj = Activator.CreateInstance<T>();
            using (MemoryStream ms = new MemoryStream(Encoding.Unicode.GetBytes(json)))
            {
                DataContractJsonSerializer serialiser = new DataContractJsonSerializer(obj.GetType());
                obj = (T)serialiser.ReadObject(ms);
                ms.Close();
                return obj;
            }
        }

        [DataContract]
        public class Trends
        {
            [DataMember(Name = "as_of")]
            public string AsOf { get; set; }
            // The as_of value is returned - but how do I get the
            // multidimensional array of names and queries from the JSON here?
        }
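    The shape of that payload is easier to see outside C#; a minimal sketch in Python of walking it (an illustration only, not the DataContract answer the question asks for):

        import json

        payload = ('{"as_of":1266853488,"trends":{"2010-02-22 15:44:48":'
                   '[{"name":"#nowplaying","query":"#nowplaying"}]}}')
        doc = json.loads(payload)
        # "trends" maps a timestamp string to a list of {"name", "query"} objects,
        # which is why a flat member on the contract class cannot capture it.
        for timestamp, trend_list in doc["trends"].items():
            for trend in trend_list:
                print(timestamp, trend["name"], trend["query"])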

    Read the article

  • Private member vector of vector dynamic memory allocation

    - by Geoffroy
    Hello, I'm new to C++ (I learned programming with Fortran), and I would like to allocate dynamically the memory for a multidimensional table. This table is a private member variable:
        class theclass {
        public:
            void setdim(void);
        private:
            std::vector< std::vector<int> > thetable;
        };
    I would like to set the dimensions of thetable with the function setdim():
        void theclass::setdim(void) {
            this->thetable.assign(1000, std::vector<int>(2000));
        }
    I have no problem compiling this program, but as I execute it, I get a segmentation fault. The strange thing for me is that this piece of code (see under) does exactly what I want, except that it doesn't use the private member variable of my class:
        std::vector< std::vector<int> > thetable;
        thetable.assign(1000, std::vector<int>(2000));
    By the way, I have no trouble if thetable is a 1D vector. In theclass:
        std::vector<int> thetable;
    and in setdim:
        this->thetable.assign(1000, 2);
    So my question is: why is there such a difference with assign between thetable and this->thetable for a 2D vector? And how should I do what I want? Thank you for your help. Best regards, -- Geoffroy

    Read the article

  • Hashtable resizing leaks memory

    - by thpetrus
    I wrote a hashtable and it basically consists of these two structures:
        typedef struct dictEntry {
            void *key;
            void *value;
            struct dictEntry *next;
        } dictEntry;

        typedef struct dict {
            dictEntry **table;
            unsigned long size;
            unsigned long items;
        } dict;
    dict.table is an array of entry pointers; each slot holds the stored key/value pairs for that bucket as a linked list. If half of the hashtable is full, I expand it by doubling the size and rehashing it:
        dict *_dictRehash(dict *d) {
            int i;
            dict *_d;
            dictEntry *dit;

            _d = dictCreate(d->size * 2);

            for (i = 0; i < d->size; i++) {
                for (dit = d->table[i]; dit != NULL; dit = dit->next) {
                    _dictAddRaw(_d, dit);
                }
            }

            /* FIXME memory leak because the old dict can never be freed */
            free(d); // seg fault

            return _d;
        }
    The function above reuses the pointers from the old hash table and stores them in the newly created one. When freeing the old dict d, a segmentation fault occurs. How am I able to free the old hashtable struct without having to allocate the memory for the key/value pairs again?
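    One hazard worth illustrating, since _dictAddRaw isn't shown (so this is a guess at the crash): if adding an entry to the new table rewrites its next pointer, the inner loop's dit = dit->next then walks the new chain instead of the old one. A minimal sketch of the safe move-and-relink pattern, in Python for brevity:

        class Entry:
            def __init__(self, key, value):
                self.key, self.value, self.next = key, value, None

        def rehash(table, new_size):
            # table is a list of bucket heads, each a linked list of Entry nodes.
            new_table = [None] * new_size
            for head in table:
                entry = head
                while entry is not None:
                    nxt = entry.next              # save BEFORE relinking
                    slot = hash(entry.key) % new_size
                    entry.next = new_table[slot]  # reuse the node, no reallocation
                    new_table[slot] = entry
                    entry = nxt
            return new_table

    With the nodes moved rather than copied, only the old bucket array and the old dict struct remain to be freed.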

    Read the article

  • How do I return an array from a method?

    - by dwwilson66
    I'm trying to create a deck of cards for my homework. Code is posted below. I need to create four sets of cards (the four suits) and am creating a multidimensional array. When I print the results instead of trying to pass the array, I can see that the data in the array is as expected. However, when I try to pass the array card, I get an error: cannot find symbol. I've got this modeled after textbook and Java tutorial examples, and I need some help figuring out what I'm missing. I've over-documented to give an idea of how I'm thinking this SHOULD work... please let me know where I've gone horribly wrong in my understanding.
        import java.util.*;
        import java.lang.*;

        public class CardGame {
            public static int[][] main(String[] args) {
                int[][] startDeck = deckOfCards(); /* cast new deck as int[][], calling method deckOfCards */
                System.out.println(" /// from array: " + Arrays.deepToString(startDeck));
            }

            public static int[][] deckOfCards() /* method to return a multi-dimensional array */
            {
                int rank;
                int suit;
                for (rank = 1; rank < 14; rank++)    /* cards 1 - 13 .... */
                {
                    for (suit = 1; suit < 5; suit++) /* suits 1 - 4 .... */
                    {
                        int[][] card = new int[][]   /* define a new card... */
                        {
                            {rank, suit}             /* with rank/suit from for... loops */
                        };
                        System.out.println(" /// from array: " + Arrays.deepToString(card));
                    }
                }
                return card; /* Error: cannot find symbol */
            }
        }
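    The scoping issue the error points at is language-independent: the variable holding the deck has to be declared outside the loops that fill it, or it no longer exists at the return. A minimal sketch of that structure in Python (the question is Java homework, so this is just the shape, not a drop-in answer):

        def deck_of_cards():
            deck = []                      # declared once, before the loops,
            for rank in range(1, 14):      # so it is still in scope at the return
                for suit in range(1, 5):
                    deck.append((rank, suit))
            return deck

        print(len(deck_of_cards()))        # 52 cards: 13 ranks x 4 suits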

    Read the article

  • FFT and IFFT on 3D matrix (Matlab)

    - by SteffenDM
    I have a movie with 70 grayscale frames in MATLAB. I have put them in a 3-D matrix, so the dimensions are X, Y and time. I want to determine the frequencies in the time dimension, so I have to calculate the FFT for every point in the 3rd dimension. This is not a problem, but I have to return the images to their original form with ifft. In a normal situation X = ifft(fft(X)) would be true, but it seems this is not the case in MATLAB when you work with multidimensional data. This is the code I use:
        for i = 1:length
            y(:, :, i) = [img1{i, level}]; %# take each picture from a cell array
        end                                %# and put it in a 3D array
        y2 = ifft(fft(y, NFFT, 3), NFFT, 3); %# NFFT = 128, the 3 is the dimension in which I want
                                             %# to calculate the FFT and IFFT
    y is 480x640x70, so there are 70 images of 640x480 pixels. If I use only fft, y2 is 480x640x128 (this is normal because we want 128 points with NFFT). If I use fft and ifft, y2 is still 480x640x128 pixels. This is not normal; the 128 should be 70 again. I tried to do it in just one dimension by using 2 for loops and this works fine. The for loops take too much time, though.
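    For reference, here is what the round trip looks like in NumPy (an illustration, not MATLAB): fft with NFFT = 128 zero-pads the 70-frame axis, and the inverse transform returns the full 128 padded frames, so the original movie is the first 70 of them.

        import numpy as np

        frames, nfft = 70, 128
        y = np.random.rand(480, 640, frames)        # stand-in for the movie

        Y = np.fft.fft(y, n=nfft, axis=2)           # zero-padded to 128 along time
        y2 = np.fft.ifft(Y, axis=2)[:, :, :frames]  # invert, then truncate to 70

        print(np.allclose(y2.real, y))              # True: round trip recovered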

    Read the article

  • Why might my PHP log file not entirely be text?

    - by Fletcher Moore
    I'm trying to debug a plugin-bloated WordPress installation, so I've added a very simple homebrew logger that records all the callbacks, which are basically listed in a single, ultimately 250+ row multidimensional array in WordPress (I can't use print_r() because I need to catch them right before they are called). My logger line is:
        $logger->log("\t" . $callback . "\n");
    The logger produces a dandy text file in normal situations, but at two points during this particular task it adds something which causes my log file to no longer be encoded properly. Gedit (I'm on Ubuntu) won't open the file, claiming not to understand the encoding. In vim, the culprit corrupt callback (which I could not find in the debugger, looking at the array) is about in the middle and printed as ^@lambda_546, and at the end of the file there's this cute guy: ^M. The ^M and ^@ are blue in my vim, which has no color theme set for .txt files. I don't know what they mean. I tried adding an is_string($callback) condition, but I get the same results. Any ideas?
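    For what it's worth, ^@ is how vim displays a NUL byte and ^M a carriage return; PHP's create_function() builds callback names of exactly that shape ("\0lambda_546"), which is a plausible source here. A minimal sketch of an encoding-safe logging helper, in Python for brevity (the original logger is PHP):

        import re

        def printable(s):
            # Replace each non-printable character with a visible \xNN escape so
            # the log stays valid text instead of silently embedding NUL bytes.
            return re.sub(r"[^\x20-\x7e\t]",
                          lambda m: "\\x%02x" % ord(m.group()), s)

        print(printable("\0lambda_546"))   # \x00lambda_546
        print(printable("wp_cache_add"))   # unchanged

    Escaping rather than dropping the bytes keeps the clue (the \x00 prefix) visible in the log.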

    Read the article

  • Translating 3-dimensional array reference onto 1-dimensional array

    - by user146780
    If there is an array ar[5000], how could I find where element [5][5][4] would be if this was a 3-dimensional array? Thanks. I'm mapping pixels: imagine a bitmap of [768 * 1024 * 4]; where would pixel [5][5][4] be? I want to make this:
        static GLubyte checkImage[checkImageHeight][checkImageWidth][4];
        static GLuint texName;
        bool itt;

        void makeCheckImage(void)
        {
            Bitmap *b = new Bitmap(L"c:/boo.png");
            int i, j, c;
            Color cul;
            for (i = 0; i < checkImageHeight; i++) {
                for (j = 0; j < checkImageWidth; j++) {
                    b->GetPixel(j, i, &cul);
                    checkImage[i][j][0] = (GLubyte) cul.GetR();
                    checkImage[i][j][1] = (GLubyte) cul.GetG();
                    checkImage[i][j][2] = (GLubyte) cul.GetB();
                    checkImage[i][j][3] = (GLubyte) cul.GetA();
                }
            }
            delete(b);
        }
    work without making a multidimensional array. width = 512, height = 1024...
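    The arithmetic being asked for is the row-major indexing formula: element [i][j][k] of a height x width x channels block sits at (i * width + j) * channels + k in the flat array. A quick sketch in Python using the excerpt's 768 x 1024 x 4 example:

        HEIGHT, WIDTH, CHANNELS = 768, 1024, 4

        def flat_index(i, j, k):
            # Row-major layout: i strides over WIDTH*CHANNELS, j over CHANNELS.
            return (i * WIDTH + j) * CHANNELS + k

        print(flat_index(5, 5, 3))   # pixel [5][5], channel 3 -> 20503

    (Channel indices run 0..3 for a 4-component pixel, so the question's [4] would actually be the first channel of the next pixel.)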

    Read the article

  • Use string to store statement (or part of a statement), and then add it to the code

    - by Dean
    I use multidimensional arrays to store product attributes (well, Virtuemart does, to be precise). When I tried to echo a sub-array's value, if the sub-array did not exist PHP threw:
        Fatal error: Cannot use string offset as an array
    To get around this, I attempted to create a function to check on every array level whether it is an actual array and whether it is empty (when trying it on the whole thing at once, such as is_array($array['level1']['level2']['level3']), I got the same error if level1 or level2 are not actual arrays). This is the function ($array contains the array to check, $array_levels is an array containing the names of the sub-arrays, in the order they should appear):
        function check_md_array($array, $array_levels) {
            if (is_array($array)) {
                $dimension = null; // this will store the dimensions string
                foreach ($array_levels as $level) {
                    // add the current dimension to the dimensions string
                    $dimension .= "['" . $level . "']";
                    if (!is_array($array/* THE CONTENT OF $dimension SHOULD BE INSERTED HERE */)) {
                        return false;
                    }
                }
                return true;
            }
        }
    How can I take the string contained in $dimension and insert it into the code, so it becomes part of the statement?
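    What the function is reaching for is a loop that descends one level at a time, rather than splicing a string back into code. A minimal sketch in Python (dicts standing in for PHP's associative arrays):

        def check_md_array(arr, levels):
            # Descend one level per key; bail out as soon as a level is missing
            # or is not itself a nested container.
            current = arr
            for level in levels:
                if not isinstance(current, dict) or level not in current:
                    return False
                current = current[level]
            return True

        data = {"level1": {"level2": {"level3": "value"}}}
        print(check_md_array(data, ["level1", "level2", "level3"]))  # True
        print(check_md_array(data, ["level1", "nope"]))              # False

    Tracking the current sub-array in a variable sidesteps the string-offset error entirely, because each level is tested before it is dereferenced.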

    Read the article

  • C release dynamically allocated memory

    - by user1152463
    I have defined a function which returns a multidimensional array. Allocation for the rows:
        arr = (char **)malloc(size);
    Allocation for the columns (in a loop):
        arr[i] = (char *)malloc(v);
    and the returning type is char**. Everything works fine, except freeing the memory. If I call free(arr[i]) and/or free(arr) on the array returned by the function, it crashes. Thanks for help.
    EDIT: allocating function:
        pole = malloc(zaznamov);
        char ulica[52], t[52], datum[10];
        float dan;
        int i = 0, v;
        *max = 0;
        while (!is_eof(f)) {
            get_record(t, ulica, &dan, datum, f);
            v = strlen(ulica) - 1;
            pole[i] = malloc(v);
            strcpy(pole[i], ulica);
            pole[i][v] = '\0';
            if (v > *max) {
                *max = v;
            }
            i++;
        }
        return pole;
    Part of main where I am calling the function:
        pole = function();
    Releasing the memory:
        int i;
        for (i = 0; i < zaznamov; i++) {
            free(pole[i]);
            pole[i] = NULL;
        }
        free(pole);
        pole = NULL;

    Read the article
