Search Results

Search found 6110 results on 245 pages for 'graph databases'.


  • Is it better to use a relational database or document-based database for an app like Wufoo?

    - by mboyle
    I'm working on an application that's similar to Wufoo in that it allows our users to create their own databases and collect/present records with auto-generated forms and views. Since every user is creating a different schema (one user might have a database of their baseball card collection, another might have a database of their recipes), our current approach is using MySQL to create a separate database for every user, each with its own tables. In other words, the databases our MySQL server contains look like:

        main-web-app-db (our web app, containing tables for users' account info, billing, etc.)
        user_1_db (baseball_cards_table)
        user_2_db (recipes_table)
        ....

    And so on. If a user wants to set up a new database to keep track of their DVD collection, we'd do a "create database ..." with "create table ...". If they enter some data in and then decide they want to change a column, we'd do an "alter table ...". Now, the further along I get with building this out, the more it seems like MySQL is poorly suited to handling this.

    1) My first concern is that switching databases on every request, first to our main app's database for authentication etc., and then to the user's personal database, is going to be inefficient.

    2) The second concern I have is that there's going to be a limit to the number of databases a single MySQL server can host. Pretending for a moment this application had 500,000 user databases, is MySQL designed to operate this way? What if it were a million, or more?

    3) Lastly, is this method going to be a nightmare to support and scale? I've never heard of MySQL being used in this way, so I do worry about how this affects things like replication and other methods of scaling. To me, it seems like MySQL wasn't built to be used in this way, but what do I know.

    I've been looking at document-based databases like MongoDB, CouchDB, and Redis as alternatives, because it seems like a schema-less approach to this particular problem makes a lot of sense. Can anyone offer some advice on this?
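
    For comparison, here is a sketch of the alternative the post is weighing (names and fields are illustrative, not from the original question): in a document store, a user-defined "table" is just documents, and a schema change is nothing more than writing documents that carry a new field, so no per-user CREATE DATABASE or ALTER TABLE is ever issued. In Python with PyMongo:

        from pymongo import MongoClient

        client = MongoClient("mongodb://localhost:27017")
        db = client["main_web_app"]        # one database for the whole app

        # Each user-defined "table" is documents tagged with owner + schema name.
        records = db["user_records"]
        records.insert_one({
            "user_id": 1,
            "collection": "baseball_cards",
            "fields": {"player": "Hank Aaron", "year": 1954, "condition": "good"},
        })

        # "Adding a column" is simply writing documents with the new field.
        records.insert_one({
            "user_id": 1,
            "collection": "baseball_cards",
            "fields": {"player": "Willie Mays", "year": 1951, "grade": 9.5},
        })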

    Read the article

  • MySQL Tables Missing/Corrupt After Recreation

    - by Synetech inc.
    Hi, Yesterday I dumped my MySQL databases to an SQL file and renamed the ibdata1 file. I then recreated it, imported the SQL file, and moved the new ibdata1 file to my MySQL data directory, deleting the old one. I’ve done this before without issue; however, this time something is not right. When I examine the (personal, not MySQL config) databases, they are all there, but they are empty… sort of. The data directory still has the .ibd files with the correct content in them, and I can view the table list in the databases, but not the tables themselves. (I have file-per-table enabled, and am using InnoDB as the default for everything.) For example, with the urls database and its urls table, I can successfully open mysql.exe or phpMyAdmin and run use urls;. I can even show tables; to see the expected table, but when I try describe urls; or select * from urls;, it complains that the table does not exist (even though it was just listed). (MySQL Administrator lists the databases, but does not even list the tables; it indicates that the dbs are completely empty.) The problem now is that I have already deleted the SQL file (and cannot recover it, even after scouring my hard drive). So I am trying to figure out a way to repair these databases/tables. I can’t use the table repair function since it complains that the table does not exist, and I can’t dump them because, again, it complains that the tables don’t exist. Like I said, the data itself is still present in the .ibd files and the table names are present. I just need a way to get MySQL to recognize that the tables exist in the databases (I can find the column names of the tables in question in the ibdata1 file using a hex editor). Any idea how I can repair this type of corruption? I don’t mind rolling up my sleeves, digging in, and taking a bunch of steps to fix it. Thanks a lot.
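
    A sketch of one recovery avenue worth testing (heavily hedged: the table definition below is invented, and whether ALTER TABLE ... IMPORT TABLESPACE accepts the orphaned .ibd files depends on the MySQL version and on matching tablespace ids, so experiment on a copy of the data directory first). The idea is to recreate each table with its original schema, discard the fresh empty tablespace, copy the old .ibd file into place, and ask InnoDB to adopt it:

        import mysql.connector  # assumes the MySQL Connector/Python package

        conn = mysql.connector.connect(user="root", password="...", database="urls")
        cur = conn.cursor()

        # 1. Recreate the table with the SAME schema it had before (recovered,
        #    e.g., from the column names visible in ibdata1 with a hex editor).
        #    This schema is a placeholder.
        cur.execute("CREATE TABLE urls (id INT PRIMARY KEY, url TEXT) ENGINE=InnoDB")

        # 2. Detach the freshly created, empty tablespace.
        cur.execute("ALTER TABLE urls DISCARD TABLESPACE")

        # 3. Copy the old urls.ibd over the new one at the filesystem level,
        #    then ask InnoDB to adopt the existing file.
        cur.execute("ALTER TABLE urls IMPORT TABLESPACE")
        conn.commit()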

    Read the article

  • P values in wilcox.test gone mad :(

    - by Error404
    I have code that isn't doing what it should do. I am testing the p-value of a wilcox.test for a huge set of data. The code I am using is the following:

        library(MASS)
        data1 <- read.csv("file1path.csv",header=T,sep=",")
        data2 <- read.csv("file2path.csv",header=T,sep=",")
        data3 <- read.csv("file3path.csv",header=T,sep=",")
        data4 <- read.csv("file4path.csv",header=T,sep=",")
        data1$K <- with(data1,{"N"})
        data2$K <- with(data2,{"E"})
        data3$K <- with(data3,{"M"})
        data4$K <- with(data4,{"U"})
        new=rbind(data1,data2,data3,data4)
        i=3
        for (o in 1:4800){
          x1 <- data1[,i]
          x2 <- data2[,i]
          x3 <- data3[,i]
          x4 <- data4[,i]
          wt12 <- wilcox.test(x1,x2, na.omit=TRUE)
          wt13 <- wilcox.test(x1,x3, na.omit=TRUE)
          wt14 <- wilcox.test(x1,x4, na.omit=TRUE)
          if (wt12$p.value=="NaN"){
            print("This is wrong")
          } else if (wt12$p.value < 0.05){
            print(wt12$p.value)
            mypath=file.path("C:", "all1-less-05", (paste("graph-data1-data2",names(data1[i]),".pdf", sep="-")))
            pdf(file=mypath)
            mytitle = paste("graph",names(data1[i]))
            boxplot(new[,i] ~ new$K, main = mytitle, names.arg=c("data1","data2","data3","data4"))
            dev.off()
          }
          if (wt13$p.value=="NaN"){
            print("This is wrong")
          } else if (wt13$p.value < 0.05){
            print(wt13$p.value)
            mypath=file.path("C:", "all2-less-05", (paste("graph-data1-data3",names(data1[i]),".pdf", sep="-")))
            pdf(file=mypath)
            mytitle = paste("graph",names(data1[i]))
            boxplot(new[,i] ~ new$K, main = mytitle, names.arg=c("data1","data2","data3","data4"))
            dev.off()
          }
          if (wt14$p.value=="NaN"){
            print("This is wrong")
          } else if (wt14$p.value < 0.05){
            print(wt14$p.value)
            mypath=file.path("C:", "all3-less-05", (paste("graph-data1-data4",names(data1[i]),".pdf", sep="-")))
            pdf(file=mypath)
            mytitle = paste("graph",names(data1[i]))
            boxplot(new[,i] ~ new$K, main = mytitle, names.arg=c("data1","data2","data3","data4"))
            dev.off()
          }
          i=i+1
        }

    I am having two problems with this long command:

    1) Without specifying a certain p-value, the code gives me around 14,000 graphs; when specifying a p-value less than 0.05, the number of graphs generated goes down to 9,000. THE FIRST PROBLEM IS: some p-values are more than 0.05 and are still showing up!

    2) I designed the program to give a result of "This is wrong" when the value of p is NaN, yet I am still getting results of NaN.

    Here's a screenshot from the results. Do you know what mistake I made in the command to get these errors? Thanks in advance
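
    A hedged note on the two symptoms above: wilcox.test() has no na.omit argument (unrecognized named arguments are swallowed by ..., though the test drops NAs on its own), and comparing a numeric p-value against the string "NaN" is fragile; the idiomatic guard is is.nan(wt12$p.value), or is.na() to also catch NA results that would otherwise fall through both branches. The same guard pattern, sketched in Python with SciPy's rank-sum equivalent (the function name and threshold are illustrative):

        import math
        from scipy.stats import mannwhitneyu  # SciPy's Wilcoxon rank-sum equivalent

        def significant_p(x1, x2, alpha=0.05):
            # Drop NaNs explicitly before testing.
            x1 = [v for v in x1 if not math.isnan(v)]
            x2 = [v for v in x2 if not math.isnan(v)]
            if not x1 or not x2:
                return None                    # nothing to test
            p = mannwhitneyu(x1, x2).pvalue
            if math.isnan(p):                  # numeric NaN check, not a string compare
                return None
            return p if p < alpha else None    # only significant results survive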

    Read the article

  • Representing complex object dependencies

    - by max
    I have several classes with a reasonably complex (but acyclic) dependency graph. All the dependencies are of the form: a class X instance contains an attribute of class Y. All such attributes are set during initialization and never changed again. Each class' constructor has just a couple of parameters, and each object knows the proper parameters to pass to the constructors of the objects it contains. Class Outer is at the top of the dependency hierarchy, i.e., no class depends on it.

    Currently, the UI layer only creates an Outer instance; the parameters for the Outer constructor are derived from the user input. Of course, Outer, in the process of initialization, creates the objects it needs, which in turn create the objects they need, and so on. The new development is that a user who knows the dependency graph may want to reach deep into it and set the values of some of the arguments passed to constructors of the inner classes (essentially overriding the values used currently). How should I change the design to support this?

    I could keep the current approach, where all the inner classes are created by the classes that need them. In this case, the information about "user overrides" would need to be passed to the Outer class' constructor in some complex user_overrides structure. Perhaps user_overrides could be the full logical representation of the dependency graph, with the overrides attached to the appropriate edges. The Outer class would pass user_overrides to every object it creates, and they would do the same. Each object, before initializing lower-level objects, would find its location in that graph and check if the user requested an override to any of the constructor arguments.

    Alternatively, I could rewrite all the objects' constructors to take as parameters the full objects they require. Thus, the creation of all the inner objects would be moved outside the whole hierarchy, into a new controller layer that lies between Outer and the UI layer. The controller layer would essentially traverse the dependency graph from the bottom, creating all the objects as it goes. The controller layer would have to ask the higher-level objects for parameter values for the lower-level objects whenever the relevant parameter isn't provided by the user.

    Neither approach looks terribly simple. Is there any other approach? Has this problem come up enough in the past to have a pattern that I can read about? I'm using Python, but I don't think it matters much at the design level.
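
    Since the post mentions Python, here is a minimal sketch of the second option: a controller layer that builds the graph bottom-up and consults a flat override map before falling back to defaults. All class and parameter names below are invented for illustration:

        class Engine:
            def __init__(self, rpm=7000):
                self.rpm = rpm

        class Chassis:
            def __init__(self, engine, weight=1200):
                self.engine, self.weight = engine, weight

        class Outer:
            def __init__(self, chassis):
                self.chassis = chassis

        def build(user_overrides):
            """Controller layer: traverses the dependency graph bottom-up.

            user_overrides maps 'ClassName.param' -> value, so a user who knows
            the graph can reach any constructor argument without the classes
            themselves knowing anything about overrides.
            """
            def arg(cls, name, default):
                return user_overrides.get(f"{cls}.{name}", default)

            engine = Engine(rpm=arg("Engine", "rpm", 7000))
            chassis = Chassis(engine, weight=arg("Chassis", "weight", 1200))
            return Outer(chassis)

        outer = build({"Engine.rpm": 9000})   # override one inner argument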

    Read the article

  • Database Mirroring of SQL server

    - by jbp117
    I have two databases that are mirrored to another server using database mirroring. The mirror server has to be down for some reason for a few days. Now the principal databases on the production server are in the PRINCIPAL/DISCONNECTED state, and clients can still access them. So what happens when they keep on adding data to these databases? Will the data get committed, or does it wait until the mirror comes back up?
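
    For what it's worth: while the session is DISCONNECTED, the principal continues to accept and commit transactions; the log records that cannot be sent accumulate, so the transaction log grows until the mirror reconnects and catches up. A hedged monitoring sketch (the DSN is a placeholder; sys.database_mirroring is the catalog view that exposes the session state):

        import pyodbc  # assumes the pyodbc package and a SQL Server ODBC driver

        conn = pyodbc.connect("DSN=prod;Trusted_Connection=yes")  # placeholder DSN
        cur = conn.cursor()
        cur.execute("""
            SELECT DB_NAME(database_id)  AS db,
                   mirroring_state_desc  AS state,
                   mirroring_role_desc   AS role
            FROM sys.database_mirroring
            WHERE mirroring_guid IS NOT NULL
        """)
        for db, state, role in cur.fetchall():
            # While state is DISCONNECTED, commits proceed on the principal but
            # the transaction log cannot truncate past the oldest unsent record.
            print(db, state, role)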

    Read the article

  • Is MySQL Replication Appropriate in this case?

    - by MJB
    I have a series of databases, each of which is basically standalone. It initially seemed like I needed a replication solution, but the more I researched it, the more it felt like replication was overkill and not useful anyway. I have not done MySQL replication before, so I have been reading up on the online docs, googling, and searching SO for relevant questions, but I can't find a scenario quite like mine. Here is a brief description of my issue:

    1. The various databases almost never have a live connection to each other. They need to be able to "sync" by copying files to a thumb drive and then moving them to the proper destination.
    2. It is OK for the data to not match exactly, but they should have the same parent-child relationships. That is, if a generated key differs between databases, no big deal. But the visible data must match.
    3. Timing is not critical. Updates can be done a week later, or even a month later, as long as they are done eventually.
    4. Updates cannot be guaranteed to be in the proper order, or in any order for that matter. They will be in order from each database; just not between databases.
    5. Rather than a set of master-slave relationships, it is more like a central database (R/W) and multiple remote databases (also R/W).
    6. I won't know how many remote databases I have until they are created. And the central DB won't know that a database exists until data arrives from it. (To me, this implies I cannot use the method of giving each its own unique identity range to guarantee uniqueness in the central database.)
    7. It appears to me that the bottom line is that I don't want "replication" so much as I want "awareness". I want the central database to know what happened in the remote databases, but there is no time requirement. I want the remote databases to be aware of the central database, but they don't need to know about each other.

    WTH is my question? It is this: Does this scenario sound like any of the typical replication scenarios, or does it sound like I have to roll my own? Perhaps #7 above is the only one that matters, and given that requirement, out-of-the-box replication is impossible.

    EDIT: I realize that this question might be more suited to ServerFault. I also searched there and found no answers to my questions. And based on the replication questions I did find both on SO and SF, it seemed that the decision was 50-50 over where to put my question. Sorry if I guessed wrong.
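
    A common workaround for point 6 above (offered as a hedged sketch, not something from the original post): drop auto-increment identity ranges altogether and stamp every row with a UUID, which stays globally unique no matter how many remote databases appear later, so rows can be merged into the central database in any order:

        import uuid

        def new_row(data):
            """Stamp each row with a globally unique id at the remote site.

            Rows created offline on any number of remote databases can then be
            merged centrally at any time, in any order, without identity-range
            coordination; parent-child links use the UUIDs, so relationships
            survive the merge even if other generated values differ.
            """
            return {"id": uuid.uuid4().hex, **data}

        parent = new_row({"name": "invoice-1"})
        child = new_row({"parent_id": parent["id"], "line": "widget"})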

    Read the article

  • how to create a wind rose in excel 2007

    - by Patrick
    I am attempting to create a wind rose graph (i.e., here or here). My data is wind speed and cardinal wind direction in separate columns:

        Wind (mph)   Wind Direction
        3.66         SE
        2.69         SE
        2.62         SW
        2.76         SW
        2.11         NW
        3.13         NW
        3.55         SW
        3.62         W

    My final goal is to actually create the graph with a VBA macro, but I am unsure how to even create the graph manually. I can, if need be, convert the cardinal directions to degrees. Any help is greatly appreciated.
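
    Whatever the tool, the underlying recipe is the same: map each cardinal direction to a compass angle, count (or bin) the observations per direction, then draw the counts as bars on a polar axis. A sketch in Python/matplotlib as a cross-check (the degree mapping is the standard compass convention; the data mirrors the table above):

        import math
        from collections import Counter
        import matplotlib.pyplot as plt

        COMPASS = {"N": 0, "NE": 45, "E": 90, "SE": 135,
                   "S": 180, "SW": 225, "W": 270, "NW": 315}

        directions = ["SE", "SE", "SW", "SW", "NW", "NW", "SW", "W"]
        counts = Counter(directions)

        theta = [math.radians(COMPASS[d]) for d in counts]
        ax = plt.subplot(projection="polar")
        ax.set_theta_zero_location("N")   # compass convention: 0 degrees at top...
        ax.set_theta_direction(-1)        # ...and angles increasing clockwise
        ax.bar(theta, list(counts.values()), width=math.radians(45))
        plt.show()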

    Read the article

  • ganglia graphs like munin for cpu, etc?

    - by CarpeNoctem
    I'm coming from munin, where a CPU graph contains data for system, user, nice, etc. ALL on one graph. I just installed ganglia and set up the basic monitoring. It appears that each type of CPU data is a separate graph! WTF is this, and can I change the defaults to combine these into a single graph per host? That is my question: how do I combine CPU data into a single graph? Also, can I change the layout to something closer to munin's day-week side-by-side layout? I'm trying to be impartial and give ganglia a chance. ;)

    Read the article

  • Annotating Graphs From Textual Data

    - by steven
    I've got a graph that was generated from a data set that contains: (date, value, annotation). The annotation is a constant value (it's either there or blank) and I would like to add this third bit of data into the graph I have. An example of this is in the image. The blue line is a graph of the (date, value) data, and I would like to add in the red dots by graphing (date, annotation@value). Is there an easy way to do this in Excel, without having to modify the appearance of the data?

    Read the article

  • How to change x-axis min/max of Column chart in Excel?

    - by Ian Boyd
    Here I have a column chart of a binomial distribution, showing how many times you can expect to roll a six in 235 dice rolls. (Note: you could also call it a binomial mass distribution for p=1/6, n=235.) Now that graph is kinda squooshed. I'd like to change the Minimum and Maximum on the horizontal axis. I'd like to change them to: Minimum: 22, Maximum: 57. Meaning I want to zoom in on this section of the graph. Bonus points to the reader who can say how the numbers 22 and 57 were arrived at. If this were a Scatter graph in Excel, I could adjust the horizontal axis minimum and maximum as desired. Unfortunately, this is a Column chart, where there are no options to adjust the minimum and maximum limits of the category (x) axis. I can do a pretty horrible thing to the graph in Photoshop, but it's not very useful afterwards. Question: how do I change the x-axis minimum and maximum of a Column chart in Excel (2007)?

    Read the article

  • Excel 2010: How to color the area between charts?

    - by Quasdunk
    Hello, I asked this question already on Stack Overflow but it hasn't been answered yet. Instead I was advised to try it here, so here I go :) There's a simple XY line chart in Excel (2010). It is surrounded by two other graphs which are parallel but offset by the same factor in both the positive and negative directions, something like this:

        ---------------- (positively offset parallel graph)
        ---------------- (main graph)
        ---------------- (negatively offset parallel graph)

    Now I want to color the space between the main graph and the offset ones, but I just can't seem to find a way! Is it maybe possible with VBA? Or is there maybe a solution for Excel 2007?

    Read the article

  • Jung Meets the NetBeans Platform

    - by Geertjan
    Here's a small Jung diagram in a NetBeans Platform application. And the code, copied directly from the Jung 2.0 Tutorial:

        public final class JungTopComponent extends TopComponent {

            public JungTopComponent() {
                initComponents();
                setName(Bundle.CTL_JungTopComponent());
                setToolTipText(Bundle.HINT_JungTopComponent());
                setLayout(new BorderLayout());
                Graph sgv = getGraph();
                Layout<Integer, String> layout = new CircleLayout(sgv);
                layout.setSize(new Dimension(300, 300));
                BasicVisualizationServer<Integer, String> vv =
                        new BasicVisualizationServer<Integer, String>(layout);
                vv.setPreferredSize(new Dimension(350, 350));
                add(vv, BorderLayout.CENTER);
            }

            public Graph getGraph() {
                Graph<Integer, String> g = new SparseMultigraph<Integer, String>();
                g.addVertex((Integer) 1);
                g.addVertex((Integer) 2);
                g.addVertex((Integer) 3);
                g.addEdge("Edge-A", 1, 2);
                g.addEdge("Edge-B", 2, 3);
                Graph<Integer, String> g2 = new SparseMultigraph<Integer, String>();
                g2.addVertex((Integer) 1);
                g2.addVertex((Integer) 2);
                g2.addVertex((Integer) 3);
                g2.addEdge("Edge-A", 1, 3);
                g2.addEdge("Edge-B", 2, 3, EdgeType.DIRECTED);
                g2.addEdge("Edge-C", 3, 2, EdgeType.DIRECTED);
                g2.addEdge("Edge-P", 2, 3);
                return g;
            }
        }

    And here's what someone who attended a NetBeans Platform training course in Poland has done with Jung and the NetBeans Platform. The source code for the above is on Git: git://gitorious.org/j2t/j2t.git

    Read the article

  • Finding all the shortest paths between two nodes in unweighted directed graphs using BFS algorithm

    - by andra-isan
    Hi All, I am working on a problem where I need to find all the shortest paths between two nodes in a given directed unweighted graph. I have used the BFS algorithm to do the job, but unfortunately I can only print one shortest path, not all of them. For example, if there are 4 paths of length 3, my algorithm only prints the first one, but I would like it to print all four shortest paths. I was wondering: in the following code, how should I change it so that all the shortest paths between two nodes are printed out?

        class graphNode {
        public:
            int id;
            string name;
            bool status;
            double weight;
        };

        map<int, map<int,graphNode>* > graph;

        int Graph::BFS(graphNode &v, graphNode &w) {
            queue<int> q;
            map<int, int> map1;  // this is to check if the node has been visited or not
            std::string str = "";
            map<int, int> inQ;   // just to check that we do not insert the same item twice in the queue
            map<int, map<int, graphNode>* >::iterator pos;
            pos = graph.find(v.id);
            if (pos == graph.end()) {
                cout << v.id << " does not exist in the graph " << endl;
                return 1;
            }
            int parents[graph.size()+1];  // this array keeps track of the parents for the node
            parents[v.id] = -1;
            // there is a direct path between these two words: simply print that path as the shortest path
            if (findDirectEdge(v.id, w.id) == 1) {
                cout << " Shortest Path: " << v.id << " -> " << w.id << endl;
                return 1;
            } //if
            else {
                int gn;
                map<int, map<int, graphNode>* >::iterator pos;
                q.push(v.id);
                inQ.insert(make_pair(v.id, v.id));
                while (!q.empty()) {
                    gn = q.front();
                    q.pop();
                    map<int, int>::iterator it;
                    cout << " Popping: " << gn << endl;
                    map1.insert(make_pair(gn, gn));
                    // backtracking to print all the nodes if gn is the same as our target node w.id
                    if (gn == w.id) {
                        int current = w.id;
                        cout << current << " -> ";
                        while (current != v.id) {
                            current = parents[current];
                            cout << current << " -> ";
                        }
                        cout << endl;
                    }
                    if ((pos = graph.find(gn)) == graph.end()) {
                        cout << " pos is empty " << endl;
                        continue;
                    }
                    map<int, graphNode>* pn = pos->second;
                    map<int, graphNode>::iterator p = pn->begin();
                    while (p != pn->end()) {
                        map<int, int>::iterator it;  // map1 keeps track of the visited nodes
                        it = map1.find(p->first);
                        graphNode gn1 = p->second;
                        if (it == map1.end()) {
                            map<int, int>::iterator it1;
                            // if the node already exists in inQ, we do not insert it twice
                            it1 = inQ.find(p->first);
                            if (it1 == inQ.end()) {
                                parents[p->first] = gn;
                                cout << " inserting " << p->first << " into the queue " << endl;
                                q.push(p->first);  // add it to the queue
                            } //if
                        } //if
                        p++;
                    } //while
                } //while
            }
        }

    I do appreciate all your great help. Thanks, Andra
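
    The standard change, sketched here in Python (the same idea ports directly to the C++ above), is to keep a list of parents per node instead of a single parent: when BFS reaches a node at the same depth through a different predecessor, record that extra predecessor too, then enumerate all paths by backtracking through the parent lists:

        from collections import deque

        def all_shortest_paths(adj, src, dst):
            """adj: dict node -> iterable of neighbours (directed, unweighted)."""
            dist = {src: 0}
            parents = {src: []}    # node -> every predecessor on a shortest path
            q = deque([src])
            while q:
                u = q.popleft()
                for v in adj.get(u, ()):
                    if v not in dist:                 # first time seen: new BFS layer
                        dist[v] = dist[u] + 1
                        parents[v] = [u]
                        q.append(v)
                    elif dist[v] == dist[u] + 1:      # equally short alternative
                        parents[v].append(u)

            def backtrack(node):
                if node == src:
                    yield [src]
                    return
                for p in parents.get(node, []):
                    for path in backtrack(p):
                        yield path + [node]

            return list(backtrack(dst)) if dst in dist else []

        adj = {1: [2, 3], 2: [4], 3: [4]}
        print(all_shortest_paths(adj, 1, 4))   # [[1, 2, 4], [1, 3, 4]]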

    Read the article

  • Should we use Visual Studio 2010 for all SQL Server Database Development?

    - by Luke
    Our company currently has seven dedicated SQL Server 2008 servers each running an average of 10 databases. All databases have many stored procedures and UDFs that commonly reference other databases both on the same server and also across linked servers. We currently use SSMS for all database related administration and development but we have recently purchased Visual Studio 2010 primarily for ongoing C# WinForms and ASP.NET development. I have used VS2010 to perform schema comparisons when rolling out changes from a development server into production and I'm finding it great for this task. We would like to consider using VS2010 for all database development going forward but as far as I understand, we would have to set up ALL databases as projects because of the dependencies on linked servers etc. My question is, do you have any experience using VS2010 for database development in a similar environment? Is it easy to use in tandem with SSMS or is it a one way street once VS2010 projects have been set up for all databases? Can you make any recommendations/impart any experience with a similar scenario? Thanks, Luke

    Read the article

  • Genetic algorithms with large chromosomes

    - by Howie
    I'm trying to solve the graph partitioning problem on large graphs (between a billion and a trillion elements) using a GA. The problem is that even one chromosome will take several gigs of memory. Are there any general compression techniques for chromosome encoding? Or should I look into distributed GAs? NOTE: using some sort of evolutionary algorithm for this problem is a must! GA seems to be the best fit (although not for such large chromosomes). EDIT: I'm looking for state-of-the-art methods that other authors have used to solve the problem of large chromosomes. Note that I'm looking for either a more general solution or a solution particular to graph partitioning. Basically I'm looking for related work, as I, too, am attempting to use a GA for the problem of graph partitioning. So far I haven't found anyone who has this problem of large chromosomes or has tried to solve it.
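
    One generic squeeze worth noting (an illustration, not one of the published methods the poster is asking for): for k-way partitioning, the chromosome is just a partition label per node, so it can be stored at a few bits per gene instead of a machine word. A sketch for 2-way partitioning with NumPy:

        import numpy as np

        n_nodes = 10_000_000

        # Naive chromosome: one byte per node here; one int64 per node would be
        # ~80 MB for 10M nodes.
        chromosome = np.random.randint(0, 2, size=n_nodes, dtype=np.uint8)

        # Packed chromosome: 1 bit per node -> ~1.25 MB, a 64x saving over int64.
        packed = np.packbits(chromosome)

        # Genes are unpacked on demand, e.g. for fitness evaluation or crossover.
        restored = np.unpackbits(packed)[:n_nodes]
        assert np.array_equal(chromosome, restored)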

    Read the article

  • How/where to run the algorithm on large dataset?

    - by niko
    I would like to run the PageRank algorithm on a graph with 4,000,000 nodes and around 45,000,000 edges. Currently I use the neo4j graph database and a classic relational database (Postgres), and for software projects I mostly use C# and Java. Does anyone know what would be the best way to perform a PageRank computation on such a graph? Is there any way to modify the PageRank algorithm in order to run it on a home computer or server (48GB RAM), or is there a useful cloud service to push the data along with the algorithm and retrieve the results? At this stage the project is in the research phase, so if a cloud service is used, I would prefer a provider that doesn't require much administration and service setup, letting me focus on running the algorithm once and getting the results without much administrative overhead.
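
    For scale: 45 million edges fits comfortably in memory on the 48GB machine; a CSR sparse matrix needs roughly 12 bytes per edge with 32-bit indices and 64-bit values, i.e. well under 1 GB. A hedged sketch of in-memory power iteration with SciPy (node ids are assumed to be 0..n-1):

        import numpy as np
        from scipy import sparse

        def pagerank(adj, d=0.85, tol=1e-8, max_iter=100):
            n = adj.shape[0]
            out_deg = np.asarray(adj.sum(axis=1)).ravel()
            # Row-normalize: each row distributes its rank over its out-links.
            inv_deg = np.where(out_deg > 0, 1.0 / np.maximum(out_deg, 1), 0.0)
            M = sparse.diags(inv_deg) @ adj        # row-stochastic transition matrix
            r = np.full(n, 1.0 / n)
            for _ in range(max_iter):
                # Dangling nodes (no out-links) spread their rank uniformly.
                dangling = r[out_deg == 0].sum() / n
                r_new = d * (M.T @ r + dangling) + (1 - d) / n
                if np.abs(r_new - r).sum() < tol:
                    break
                r = r_new
            return r

        # Toy usage: edges 0->1, 0->2, 1->2, 2->0.
        rows, cols = [0, 0, 1, 2], [1, 2, 2, 0]
        A = sparse.csr_matrix((np.ones(4), (rows, cols)), shape=(3, 3))
        print(pagerank(A))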

    Read the article

  • How should I make searching a relational database more efficient?

    - by Travis J
    This is in the scope of a web application. I have a database which has a few nested relations. There is a feature which depicts the history of a large chain of relations; it is essentially a data-analysis feature. The issue is that in order to search, a large object graph must be loaded, and the loading time for this object graph is not quick enough to be viable. The problem is that without loading the whole graph, searching from a single string is nearly impossible; to search at all, explicit fields must be specified and the search data supplied. Is there a design pattern for exposing the data in a way which facilitates a single-string search instead of having to explicitly define parameters?
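
    A hedged sketch of the usual pattern here (names are illustrative, not from the original post): denormalize the searchable text of each root entity and its children into a flat index at write time, so a single-string query never needs to load the object graph at all. In Python:

        from collections import defaultdict

        search_index = defaultdict(set)   # token -> ids of root entities

        def index_entity(entity_id, texts):
            """Flatten every searchable string of an entity (and its children)
            into the index at write time, so reads never touch the graph."""
            for text in texts:
                for token in text.lower().split():
                    search_index[token].add(entity_id)

        def search(query):
            """Single-string search: ids matching every token in the query."""
            tokens = query.lower().split()
            if not tokens:
                return set()
            hits = set(search_index[tokens[0]])
            for token in tokens[1:]:
                hits &= search_index[token]
            return hits

        index_entity(42, ["Acme Corp", "invoice 2012", "widget order"])
        print(search("acme invoice"))   # {42}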

    Read the article

  • Essbase BSO Data Fragmentation

    - by Ann Donahue
    Data fragmentation naturally occurs in Essbase Block Storage (BSO) databases where there are a lot of end-user data updates, incremental data loads, many lock-and-send operations, and/or many calculations executed. If an Essbase database starts to experience performance slow-downs, this is an indication that there may be too much fragmentation. See Chapter 54, Improving Essbase Performance, in the Essbase DBA Guide for more details on measuring and eliminating fragmentation: http://docs.oracle.com/cd/E17236_01/epm.1112/esb_dbag/daprcset.html

    Fragmentation is likely to occur in the following situations:

    Read/write databases where users are constantly updating data
    Databases that execute calculations around the clock
    Databases that frequently update and recalculate dense members
    Data loads that are poorly designed
    Databases that contain a significant number of Dynamic Calc and Store members
    Databases that use an isolation level of uncommitted access with commit block set to zero

    There are two types of data block fragmentation:

    Free space tracking, which is measured using the Average Fragmentation Quotient statistic.
    Block order on disk, which is measured using the Average Cluster Ratio statistic.

    Average Fragmentation Quotient

    The Average Fragmentation Quotient ratio measures free space in a given database. As you update and calculate data, empty spaces occur when a block can no longer fit in its original space and will either be appended at the end of the file or fit into another empty space that is large enough. These empty spaces take up space in the .PAG files. The higher the number, the more empty spaces you have; therefore, the bigger the .PAG file, and the longer it takes to traverse through the .PAG file to get to a particular record. An Average Fragmentation Quotient value of 3.174765 means the database is 3% fragmented with free space.

    Average Cluster Ratio

    The Average Cluster Ratio describes the order in which the blocks actually exist in the database. An Average Cluster Ratio of 1 means all the blocks are ordered in the correct sequence, matching the order of the Outline. As you load and calculate data blocks, the sequence can start to fall out of order, because when you write to a block it may not be placed back in the exact same spot in the database where it existed before. The lower this number, the more out of order the blocks become and the more performance is affected. An Average Cluster Ratio value of 1 means no fragmentation; any value lower than 1, e.g. 0.01032828, means the data blocks are getting further out of order relative to the outline order.

    Eliminating Data Block Fragmentation

    Both types of data block fragmentation can be removed by doing a dense restructure or an export/clear/import of the data. There are two types of dense restructure:

    1. Implicit Restructures. An implicit dense restructure happens when outline changes are made using the EAS Outline Editor or a Dimension Build. Essbase restructures create new .PAG files, restructuring the data blocks in them. When Essbase restructures the data blocks, it regenerates the index automatically so that index entries point to the new data blocks. Empty blocks are NOT removed with implicit restructures.

    2. Explicit Restructures. An explicit dense restructure happens when a database restructure is initiated manually. An explicit dense restructure is a full restructure, which comprises a dense restructure as outlined above plus the removal of empty blocks.

    Empty Blocks vs. Fragmentation

    The existence of empty blocks is not considered fragmentation. Empty blocks can be created through calc scripts or formulas. An empty block adds to the existing database block count and is included in the block counts of the database properties. There are no statistics for empty blocks. The only way to determine whether empty blocks exist in an Essbase database is to record your current block count, export the entire database, clear the database, then import the exported data. If the block count decreased, the difference is the number of empty blocks that had existed in the database.

    Read the article

  • Servlet : Usage of Constants.java class vs context param

    - by Pongsakorn Semsuwan
    I'm just wondering whether to keep some of my variables in a Constants class or in web.xml. Say I want to keep a variable for the Facebook Graph API prefix, or api_key, or client_id. From my understanding, the difference between Constants.java and web.xml is that web.xml is easier to rewrite at build time using Ant, so you can replace your variables in web.xml according to the environment you are building your app for (client_id varies between development and production environments, for example). If I understand it right, then the Facebook Graph API prefix should be kept in Constants.java (because it is always "https://graph.facebook.com/"), and api_key and client_id should be kept in web.xml? What's the proper way to use them?

    Read the article

  • Facebook Insights numbers don't add up

    - by ChristopherJ
    I'm having trouble understanding the Facebook stats for a Fan page under Reach, as they don't seem to add up. Under the Reach tab, if I expand the countries demographics at the top, the countries listed add up to around 16k people. But scrolling down under Reach, the graph reaches as high as 260k in a particular week. From what they say, it seems that both the graph and the demographics are talking about unique users, so where are the other 244k unique users who are in the graph but seemingly not from any country on Earth?

    Read the article

  • How to compare page views/visitors to site totals in Google Analytics?

    - by frntk
    I want to measure the effects on total traffic (views or visitors) of a single URL across a time frame. For example, let's say I published a blog post in July 2012. I want to measure what chunk of the site's total traffic is coming to this particular post, on a month-by-month graph from July 2012 to date. Is there a way to do this? Edit: I should clarify that what I'd like to do is generate a month-to-month graph, similar to the one you get when comparing a metric over a period of time against the previous equivalent period (this year vs. last year, for example), but comparing a particular URL's traffic to the site's total traffic. In other words, I can get to the part where I can see the URL's traffic for the timeframe I selected, and it shows how much of the total traffic corresponds to the current URL. But I'd like to add a second line to the graph, representing the site's total pageviews, similar to this (but this one is showing just a different metric for the same URL). Thanks!

    Read the article

  • Creating objects from a Trivial Graph Format text file (Java, Dijkstra's algorithm)

    - by user560084
    I want to create objects, Vertex and Edge, from a Trivial Graph Format text file. One of the programmers here suggested that I use the Trivial Graph Format to store data for Dijkstra's algorithm. The problem is that at the moment all the information, e.g., weights and links, is in the source code. I want to have a separate text file for that and read it into the program. I thought about using a Scanner to read through the text file, but I am not quite sure how to create different objects from the same file. Could I have some help please? The file is:

        v0 Harrisburg
        v1 Baltimore
        v2 Washington
        v3 Philadelphia
        v4 Binghamton
        v5 Allentown
        v6 New York
        #
        v0 v1 79.83
        v0 v5 81.15
        v1 v0 79.75
        v1 v2 39.42
        v1 v3 103.00
        v2 v1 38.65
        v3 v1 102.53
        v3 v5 61.44
        v3 v6 96.79
        v4 v5 133.04
        v5 v0 81.77
        v5 v3 62.05
        v5 v4 134.47
        v5 v6 91.63
        v6 v3 97.24
        v6 v5 87.94

    and the Dijkstra algorithm code is:

        /* Downloaded from:
           http://en.literateprograms.org/Special:Downloadcode/Dijkstra%27s_algorithm_%28Java%29 */
        import java.util.PriorityQueue;
        import java.util.List;
        import java.util.ArrayList;
        import java.util.Collections;

        class Vertex implements Comparable<Vertex> {
            public final String name;
            public Edge[] adjacencies;
            public double minDistance = Double.POSITIVE_INFINITY;
            public Vertex previous;
            public Vertex(String argName) { name = argName; }
            public String toString() { return name; }
            public int compareTo(Vertex other) {
                return Double.compare(minDistance, other.minDistance);
            }
        }

        class Edge {
            public final Vertex target;
            public final double weight;
            public Edge(Vertex argTarget, double argWeight) {
                target = argTarget;
                weight = argWeight;
            }
        }

        public class Dijkstra {
            public static void computePaths(Vertex source) {
                source.minDistance = 0.;
                PriorityQueue<Vertex> vertexQueue = new PriorityQueue<Vertex>();
                vertexQueue.add(source);
                while (!vertexQueue.isEmpty()) {
                    Vertex u = vertexQueue.poll();
                    // Visit each edge exiting u
                    for (Edge e : u.adjacencies) {
                        Vertex v = e.target;
                        double weight = e.weight;
                        double distanceThroughU = u.minDistance + weight;
                        if (distanceThroughU < v.minDistance) {
                            vertexQueue.remove(v);
                            v.minDistance = distanceThroughU;
                            v.previous = u;
                            vertexQueue.add(v);
                        }
                    }
                }
            }

            public static List<Vertex> getShortestPathTo(Vertex target) {
                List<Vertex> path = new ArrayList<Vertex>();
                for (Vertex vertex = target; vertex != null; vertex = vertex.previous)
                    path.add(vertex);
                Collections.reverse(path);
                return path;
            }

            public static void main(String[] args) {
                Vertex v0 = new Vertex("Nottinghill_Gate");
                Vertex v1 = new Vertex("High_Street_kensignton");
                Vertex v2 = new Vertex("Glouchester_Road");
                Vertex v3 = new Vertex("South_Kensignton");
                Vertex v4 = new Vertex("Sloane_Square");
                Vertex v5 = new Vertex("Victoria");
                Vertex v6 = new Vertex("Westminster");
                v0.adjacencies = new Edge[]{ new Edge(v1, 79.83), new Edge(v6, 97.24) };
                v1.adjacencies = new Edge[]{ new Edge(v2, 39.42), new Edge(v0, 79.83) };
                v2.adjacencies = new Edge[]{ new Edge(v3, 38.65), new Edge(v1, 39.42) };
                v3.adjacencies = new Edge[]{ new Edge(v4, 102.53), new Edge(v2, 38.65) };
                v4.adjacencies = new Edge[]{ new Edge(v5, 133.04), new Edge(v3, 102.53) };
                v5.adjacencies = new Edge[]{ new Edge(v6, 81.77), new Edge(v4, 133.04) };
                v6.adjacencies = new Edge[]{ new Edge(v0, 97.24), new Edge(v5, 81.77) };
                Vertex[] vertices = { v0, v1, v2, v3, v4, v5, v6 };
                computePaths(v0);
                for (Vertex v : vertices) {
                    System.out.println("Distance to " + v + ": " + v.minDistance);
                    List<Vertex> path = getShortestPathTo(v);
                    System.out.println("Path: " + path);
                }
            }
        }

    and the code for scanning the file is:

        import java.util.Scanner;
        import java.io.File;
        import java.io.FileNotFoundException;

        public class DataScanner1 {

            //private int total = 0;
            //private int distance = 0;
            private String vector;
            private String stations;
            private double [] Edge = new double [];

            /*
            public int getTotal(){
                return total;
            }
            */

            /*
            public void getMenuInput(){
                KeyboardInput in = new KeyboardInput;
                System.out.println("Enter the destination? ");
                String val = in.readString();
                return val;
            }
            */

            public void readFile(String fileName) {
                try {
                    Scanner scanner = new Scanner(new File(fileName));
                    scanner.useDelimiter(System.getProperty("line.separator"));
                    while (scanner.hasNext()) {
                        parseLine(scanner.next());
                    }
                    scanner.close();
                } catch (FileNotFoundException e) {
                    e.printStackTrace();
                }
            }

            public void parseLine(String line) {
                Scanner lineScanner = new Scanner(line);
                lineScanner.useDelimiter("\\s*,\\s*");
                vector = lineScanner.next();
                stations = lineScanner.next();
                System.out.println("The current station is " + vector
                        + " and the destination to the next station is " + stations + ".");
                //total += distance;
                //System.out.println("The total distance is " + total);
            }

            public static void main(String[] args) {
                /*
                if (args.length != 1) {
                    System.err.println("usage: java TextScanner2" + "file location");
                    System.exit(0);
                }
                */
                DataScanner1 scanner = new DataScanner1();
                scanner.readFile(args[0]);
                //int total =+ distance;
                //System.out.println("");
                //System.out.println("The total distance is " + scanner.getTotal());
            }
        }
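
    The parsing logic itself is small: read node lines until the "#" separator, then read "src dst weight" edge lines. A sketch in Python for brevity (the function name load_tgf is invented; the same two-phase structure ports directly to the Java Scanner code above):

        def load_tgf(path):
            """Parse a Trivial Graph Format file into node and edge lists."""
            vertices, edges = {}, []
            with open(path) as f:
                lines = (line.strip() for line in f)
                for line in lines:          # phase 1: nodes until the '#' line
                    if line == "#":
                        break
                    node_id, name = line.split(None, 1)   # e.g. "v0 Harrisburg"
                    vertices[node_id] = name
                for line in lines:          # phase 2: weighted edges
                    if not line:
                        continue
                    src, dst, weight = line.split()       # e.g. "v0 v1 79.83"
                    edges.append((src, dst, float(weight)))
            return vertices, edges

    From vertices and edges it is then straightforward to construct one Vertex object per node and group the edges by source to fill each adjacencies array.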

    Read the article

  • SQL SERVER – Quiz and Video – Introduction to SQL Server Security

    - by pinaldave
    This blog post is inspired by Beginning SQL Joes 2 Pros: The SQL Hands-On Guide for Beginners – SQL Exam Prep Series 70-433 – Volume 1. [Amazon] | [Flipkart] | [Kindle] | [IndiaPlaza] This is a follow-up to my earlier blog post on the same subject: SQL SERVER – Introduction to SQL Server Security – A Primer. In that article we discussed the basic terminology of security. The article further covers the following important concepts of security:

    Granting Permissions
    Denying Permissions
    Revoking Permissions

    These three are the most important concepts related to security and SQL Server. There are many more things one has to learn, but without the beginner's fundamentals one can't learn the advanced concepts. Let us have a small quiz and check how many of you get the fundamentals right.

    Quiz

    1) If you granted Phil control of the server, but denied his ability to create databases, what would his effective permissions be?
       Phil can do everything.
       Phil can do nothing.
       Phil can do everything except create databases.

    2) If you granted Phil control of the server and revoked his ability to create databases, what would his effective permissions be?
       Phil can do everything.
       Phil can do nothing.
       Phil can do everything except create databases.

    3) You have a login named James who has Control Server permission. You want to eliminate his ability to create databases without affecting any other permissions. What SQL statement would you use?
       ALTER LOGIN James DISABLE
       DROP LOGIN James
       DENY CREATE DATABASE To James
       REVOKE CREATE DATABASE To James
       GRANT CREATE DATABASE To James

    Now make sure that you write down all the answers on a piece of paper. Watch the following video and read the earlier article over here. If you want to change an answer, you still have a chance.

    Solution

    1) 3
    2) 1
    3) 3

    Now let us check the answers: compare your answers to the ones above. I am very confident you got them correct.

    Available at USA: Amazon | India: Flipkart | IndiaPlaza | Volumes: 1, 2, 3, 4, 5

    Please leave your feedback in the comment area for the quiz and video. Did you know all the answers of the quiz? Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Joes 2 Pros, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • Hadoop, NOSQL, and the Relational Model

    - by Phil Factor
    (Guest Editorial for the IT Pro/SysAdmin Newsletter)

    Whereas relational databases fit the world of commerce like a glove, it is useless to pretend that they are a perfect fit for all human endeavours. Although, with SQL Server, we’ve made great strides in indexing text, in processing spatial data, and in processing markup, there is still a problem in dealing efficiently with large volumes of ephemeral semi-structured data. Key-value stores such as Cassandra, Project Voldemort, and Riak are of great value for ephemeral data, and seem of equal value as a data feed that provides aggregations to an RDBMS. However, the document databases such as MongoDB and CouchDB are ideal for semi-structured data for which no fixed schema exists; analytics and logging are obvious examples.

    NoSQL products, such as MongoDB, tackle the semi-structured data problem with panache. MongoDB is designed with a simple document-oriented data model that scales horizontally across multiple servers. It doesn’t impose a schema, and relies on the application to enforce the data structure. This is another take on the old ‘EAV’ problem (where you don’t know in advance all the attributes of a particular entity). It uses a clever replica-set design that allows automatic failover, and uses journaling for data durability. It allows indexing and ad-hoc querying.

    However, for SQL Server users, the obvious choice for handling semi-structured data is Apache Hadoop. There will soon be an ODBC Driver for Apache Hive and an Add-in for Excel. Additionally, there are now two Hadoop-based connectors for SQL Server: the Apache Hadoop connector for SQL Server 2008 R2, and the SQL Server Parallel Data Warehouse (PDW) connector. We can connect to Hadoop, process the semi-structured data, and then store it in SQL Server.

    For one steeped in the culture of relational SQL databases, I might be expected to throw up my hands in the air in a gesture of contempt for a technology that was, judging by the overblown journalism on the subject, about to make my own profession as archaic as the saggar maker's bottom knocker (a potter’s assistant who helped the saggar maker to make the bottom of the saggar by placing clay in a metal hoop and bashing it). However, on the contrary, I find that I'm delighted with the advances made by the NoSQL databases in the past few years. Having the flow of ideas from the NoSQL providers will knock any trace of complacency out of the providers of relational databases and inspire them to back-fit some features, such as horizontal scaling with sharding and automatic failover, into SQL-based RDBMSs. It will do the breed a power of good to benefit from all this lateral thinking.

    Read the article
