Search Results

Search found 6110 results on 245 pages for 'graph databases'.


  • Post to wall as Facebook App (not as a user)?

    - by Sebastian
    I need to obtain an access_token as an app, not as an admin or user, so that I can post/comment/like in the style of "[app name] has commented on your post". The problem is that when I attempt to get an access token (which I do successfully), I'm getting one that is for me (the admin), because I'm logged in when I call:

        https://graph.facebook.com/oauth/authorize?client_id=[app id]&redirect_uri=[url]&scope=publish_stream,offline_access&type=user_agent&display=popup

    What is the process for getting a non-expiring access token as an app, rather than as an admin? Thanks in advance
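    A hedged sketch of what may be relevant here (not from the original post): the Graph API can mint an app access token from the app id and secret via the client_credentials grant, with no user login involved. APP_ID and APP_SECRET below are placeholders for the app's credentials.

        import java.io.BufferedReader;
        import java.io.InputStreamReader;
        import java.net.URL;

        public class AppTokenFetcher {
            public static void main(String[] args) throws Exception {
                // client_credentials requests a token tied to the app itself,
                // not to whichever admin happens to be logged in.
                String endpoint = "https://graph.facebook.com/oauth/access_token"
                        + "?client_id=APP_ID"
                        + "&client_secret=APP_SECRET"
                        + "&grant_type=client_credentials";
                try (BufferedReader in = new BufferedReader(
                        new InputStreamReader(new URL(endpoint).openStream()))) {
                    System.out.println(in.readLine()); // response body carries the app access token
                }
            }
        }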

    Read the article

  • Global variables in Hadoop

    - by Deepak Konidena
    Hi, my program follows an iterative map/reduce approach, and it needs to stop if certain conditions are met. Is there any way I can set a global variable that is distributed across all map/reduce tasks, and check whether that variable has reached the completion condition? Something like this:

        while (condition != true) {
            Configuration conf = getConf();
            Job job = new Job(conf, "Dijkstra Graph Search");
            job.setJarByClass(GraphSearch.class);
            job.setMapperClass(DijkstraMap.class);
            job.setReducerClass(DijkstraReduce.class);
            job.setOutputKeyClass(IntWritable.class);
            job.setOutputValueClass(Text.class);
        }

    where condition is a global variable that is modified during/after each map/reduce execution.
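    A pattern that may fit here (a hedged sketch, not from the original post): Hadoop has no true global variables, but Counters are aggregated from every task back to the driver, so the driver can read one after each job finishes and decide whether to iterate again. The enum name is invented, input/output path setup is omitted, and GraphSearch, DijkstraMap and DijkstraReduce are the classes from the question.

        import org.apache.hadoop.conf.Configuration;
        import org.apache.hadoop.conf.Configured;
        import org.apache.hadoop.io.IntWritable;
        import org.apache.hadoop.io.Text;
        import org.apache.hadoop.mapreduce.Job;
        import org.apache.hadoop.util.Tool;

        public class GraphSearchDriver extends Configured implements Tool {
            // Tasks call context.getCounter(Iteration.PENDING).increment(1)
            // whenever the stopping condition is not yet met.
            public enum Iteration { PENDING }

            public int run(String[] args) throws Exception {
                long pending = 1;
                while (pending > 0) {
                    Configuration conf = getConf();
                    Job job = new Job(conf, "Dijkstra Graph Search");
                    job.setJarByClass(GraphSearch.class);
                    job.setMapperClass(DijkstraMap.class);
                    job.setReducerClass(DijkstraReduce.class);
                    job.setOutputKeyClass(IntWritable.class);
                    job.setOutputValueClass(Text.class);
                    job.waitForCompletion(true);
                    // The counter plays the role of the "global variable": the framework
                    // sums every task's increments, visible here once the job finishes.
                    pending = job.getCounters().findCounter(Iteration.PENDING).getValue();
                }
                return 0;
            }
        }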

    Read the article

  • What processes would make the selling of a hard drive that previously held sensitive data justifiable? [closed]

    - by user12583188
    Possible Duplicate: Securely erasing all data from a hard drive

    My personal collection includes an increasing number of relatively new drives, put on the shelf only due to upgrades. In the past I have never sold hard drives with used machines, for fear of having the encrypted password databases stored on them compromised, but as their numbers increase I find myself more tempted to do so (due to the $$$ I know they're worth on the used market). What tools exist, then, to make the recovery of data from said drives difficult enough that selling them could be justified? Another way of saying this: what tools/methods exist for making attempts at recovering any data previously stored on a given drive impractical? I assume that it is always possible to recover data from a drive that is in working order. I also assume there are some methods of preventing data recovery, thanks to a program called DBAN and a particular feature in Mac OS X that deals with permanently deleting data from a disk.

    Read the article

  • How can I simulate production servers in my home with Linux VMs [closed]

    - by user31
    I am thinking of making a small simulation, in my home environment, of how the big companies run their systems, to get a feel for it. I have a server with 8 GB RAM and a quad-core processor. I am thinking of the following setup, if that's possible; I have not worked at bigger companies, so I want to know how I can do this. I am thinking of creating 5 virtual machines:

        VM1 will be the database server, with MySQL, PostgreSQL, SQLite, MongoDB and Oracle
        VM2 will be the web server, with Apache and Tomcat installed
        VM3 will be the file server, holding all the website files
        VM4 I am thinking of as the main box, where I can install Python, PHP and Java/J2EE sites, but I'm not sure
        VM5 will have Windows Server 2008 for C# .NET applications

    My main idea is to be able to host sites in PHP, Python, and Java/J2EE with Spring. Is my setup OK, or am I missing a few things? Please guide me to the correct setup so that I can learn this stuff.

    Read the article

  • Experience with Intel X25-M 160GB and Oracle

    - by derobert
    We're considering building an Oracle database with 12 Intel X25-M G2 160GB drives in software RAID10, running Linux. The database gets some very heavy write activity during the early-morning data load; other than that it is mostly read-only (and the read load is fairly minimal). We're currently running on eleven 150GB VelociRaptors (also Linux software RAID10), and are hoping the X25-Ms will speed up the data load. We currently have redo on different disks from the rest of the data. I'm wondering a few things: Any experience with using X25-M drives for databases? (The X25-E are unfortunately beyond our budget.) Would it hurt to separate redo off to some magnetic (non-SSD) drives, say 2 (RAID1) or 4 (RAID10) Seagate Constellations?

    Read the article

  • Adding a UIView to a UITableViewCell via cell.contentView

    - by Robert Eisinger
    I have a class called GraphView that extends UIView and basically draws a small little line chart. I need one of these graphs at the top of each section in my UITableView. So I tried creating a separate cell at the top of one of my sections, and on that cell I did:

        [cell.contentView addSubview:graphView];
        [graphView release];

    But when I scroll, the graph glitches and shows up in random spots along the UITableView. Anyone have ideas or insight? Is there a better way to incorporate another UIView into the top of each section in my UITableView?

    Read the article

  • Obtain all keys of a Neo4j index

    - by MattiSG
    I have a Neo4j database whose content is generated dynamically from a big dataset. All “entry point” nodes are indexed in a named index (IndexManager.forNodes(…)), so I can look up a particular “entry point” node. However, I would now like to enumerate all those specific nodes, but I can't know which keys they were indexed under. Is there any way to enumerate all keys of a Neo4j index? If not, what would be the best way to store those keys, a list being an eminently non-graph-oriented data type? UPDATE (thanks for asking for details :) ): the list would contain more than 2 million entries. The main use case would be to never update it after an initialization step, but other use cases might need it, so it has to be somewhat scalable. Also, I would really prefer to avoid killing my current resilience, so storing all keys at once, as opposed to adding them incrementally, would be a last-resort solution.
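    A hedged sketch of one workaround (as far as I know, the legacy IndexManager API cannot enumerate an index's keys): record each key in a companion index at insertion time, so the key set is maintained incrementally rather than rebuilt in bulk. The index names and the "key" property below are invented for illustration.

        import org.neo4j.graphdb.GraphDatabaseService;
        import org.neo4j.graphdb.Node;
        import org.neo4j.graphdb.Transaction;
        import org.neo4j.graphdb.index.Index;

        public class EntryPointIndexer {
            private static final String ENTRY_POINTS = "entry-points";
            private static final String KEY_REGISTRY = "entry-point-keys";

            // Index an entry-point node and, in the same transaction, record its
            // key in a companion index: one registry node per distinct key.
            public static void indexEntryPoint(GraphDatabaseService graphDb,
                                               Node node, String key, Object value) {
                Transaction tx = graphDb.beginTx();
                try {
                    Index<Node> entryPoints = graphDb.index().forNodes(ENTRY_POINTS);
                    Index<Node> keyRegistry = graphDb.index().forNodes(KEY_REGISTRY);
                    entryPoints.add(node, key, value);
                    if (keyRegistry.get("key", key).getSingle() == null) {
                        Node keyNode = graphDb.createNode();
                        keyNode.setProperty("key", key);
                        keyRegistry.add(keyNode, "key", key);
                    }
                    tx.success();
                } finally {
                    tx.finish();
                }
            }
            // Enumerating all keys later is then a scan of the registry index,
            // e.g. keyRegistry.query("key", "*").
        }

    Because the registry is written alongside each insertion, it stays consistent with the data even if initialization is interrupted, which seems to match the resilience preference above.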

    Read the article

  • Endless saving of CoreData Context

    - by Robert
    Sometimes I notice that a 'save:' operation on a ManagedObjectContext never returns and consumes 100% CPU. I'm using an SQL store in a garbage-collected environment (Mac OS X 10.6.3). Disk activity shows about 700 KB/s of writing. Watching the folder that contains the SQLite database file, the "-journal" file appears and disappears, appears and disappears, ... This is part of the call graph from the process analysis:

        2203 -[NSManagedObjectContext save:]
          1899 -[NSPersistentStoreCoordinator(_NSInternalMethods) executeRequest:withContext:]
            1836 -[NSSQLCore executeRequest:withContext:]
              1836 -[NSSQLCore saveChanges:]
                1479 -[NSSQLCore performChanges] ...
                 335 -[NSSQLCore recordChangesInContext:] ...
                  20 -[NSSQLCore rollbackChanges] ...
                   2 -[NSSQLCore prepareForSave:] ...
              62 -[NSPersistentStoreCoordinator(_NSInternalMethods) _checkRequestForStore:originalRequest:andOptimisticLocking:] ...
               1 -[NSPersistentStore(_NSInternalMethods) _preflightCrossCheck] ...
          184 -[NSMergePolicy resolveConflicts:] ...
          120 -[NSManagedObjectContext(_NSInternalChangeProcessing) _prepareForPushChanges:] ...

    Everything is happening on the main GUI thread. Any ideas what I can do to resolve the problem?

    Read the article

  • Mimicking Google's Persistent Disks: is this a logical FreeBSD disaster recovery strategy?

    - by Casey Jordan
    I am looking into FreeBSD to provide a more comprehensive backup and disaster recovery strategy for database servers. Ideally I want to mimic what Google is doing with "Persistent Disks" (https://developers.google.com/compute/docs/disks#snapshots). I am hoping someone who knows more about FreeBSD can validate these ideas/questions:

    I have read that FreeBSD can take instant disk snapshots, so if our databases trigger a consistent state (block all writes, and flush buffers to disk), I assume I could take snapshots every hour without a service interruption of more than a few seconds. Is this true?
    Is there a way to take snapshots and back them up offsite easily?
    Can this be done incrementally, so as to save on how much disk space is actually used?
    If a rollback needed to be done, how long does it typically take? Is a rollback also instantaneous?

    Thanks!

    Read the article

  • JavaScript: how to use data but hide it so it cannot be reused

    - by loukote
    Hi all. I have some data that I'd like to publish on just one website; it should not be reused on other websites. The data is a set of numbers that change every day; our journalists work hard to gather it. Is there any way to hide, encrypt, etc. the data so that it cannot be reused by others, but still show it in a graph at the same time? I found an ASCII-to-HEX tool that could be used for this (http://utenti.multimania.it/ascii2hex/). I wonder if you can suggest other ways. (Even if I have to completely change the strategy.) Many thanks!

    Read the article

  • Searching for online database software/CMS

    - by ButterdBread
    I am searching for software or a CMS that manages and displays large online databases, as a kind of frontend to MySQL or any other database. It should be accessible through the browser and be as secure as possible (offering a login). The data I'd like to store is personal information such as name, address and birthday; I'd also need to be able to add custom fields. Forms, and the possibility to download the data as an Excel(?) table, would be great. phpMyAdmin is not an option; it should be similar to a CRM, but more closely adapted to managing database tables, searching for entries and filtering data. It should be possible to have many user accounts with different rights, with each of them able to access certain parts of the data and enter their own data. Is there something out there that comes close to what I imagine? I appreciate any help!

    Read the article

  • Performance Drop Lingers after Load [closed]

    - by Charles
    Possible Duplicate: How do you do Load Testing and Capacity Planning for Databases

    I'm noticing a drop in performance after successive load tests. Although our CPU and RAM numbers look fine, performance seems to degrade over time as sustained load is applied to the system. If we allow more time between the load tests, performance gets back to about 1,000 ms, but if we apply load every 3 minutes or so, it degrades to the point where requests take 12,000 ms. None of the application servers show lingering Apache processes, and the number of database connections cools down to about 3 (from a sustained 20). Is there anything else I should be looking out for here?

    Read the article

  • Is directly executing SQL bad app design?

    - by Michael Lowman
    I'm developing an iOS application that's a manager/viewer for another project. The idea is that the app will be able to process the data stored in a database into a number of visualizations, with an overall effect similar to Cacti. I'm making the visualizations fully user-configurable: the user defines what she wants to see and adds restrictions. She might specify, for instance, graphing a metric over the last three weeks for user accounts that are currently active and aren't based in the United States. My problem is that the only design I can think of is more or less passing direct SQL from the iOS app to the backend server, to be executed against the database. I know it's bad practice, and that everything should be written in terms of stored procedures. But how else do I maintain enough flexibility to keep fully user-defined queries? While the application does compose the SQL, direct SQL is never visible or injectable by the user; that's all abstracted away in UIDateTimeChoosers, UIPickerViews, and the like.
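    A hedged sketch of one middle ground (not the poster's design): have the app send a structured filter description rather than SQL, and have the server whitelist field names and bind values, so user-defined flexibility is kept while no raw SQL crosses the wire. The table and field names below are invented.

        import java.sql.Connection;
        import java.sql.PreparedStatement;
        import java.sql.SQLException;
        import java.util.ArrayList;
        import java.util.List;
        import java.util.Map;
        import java.util.Set;

        public class QueryBuilder {
            // Only these fields may appear in a filter; everything else is rejected.
            private static final Set<String> ALLOWED_FIELDS =
                    Set.of("created_at", "country", "active");

            // Builds "SELECT day, value FROM metrics WHERE ... AND field = ?" with
            // bound values, so no client-supplied text is spliced into the SQL.
            public PreparedStatement build(Connection conn, Map<String, Object> filters)
                    throws SQLException {
                StringBuilder sql = new StringBuilder("SELECT day, value FROM metrics WHERE 1=1");
                List<Object> binds = new ArrayList<>();
                for (Map.Entry<String, Object> f : filters.entrySet()) {
                    if (!ALLOWED_FIELDS.contains(f.getKey())) {
                        throw new IllegalArgumentException("unknown filter field: " + f.getKey());
                    }
                    sql.append(" AND ").append(f.getKey()).append(" = ?"); // key whitelisted, value bound
                    binds.add(f.getValue());
                }
                PreparedStatement ps = conn.prepareStatement(sql.toString());
                for (int i = 0; i < binds.size(); i++) {
                    ps.setObject(i + 1, binds.get(i));
                }
                return ps;
            }
        }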

    Read the article

  • Which modules can be disabled in Apache 2.4 on Windows

    - by j0h
    I have an Apache 2.4 web server running on Windows. I am looking into system hardening and the config file httpd.conf. There are numerous LoadModule lines, and I am wondering which modules I can safely disable for performance and/or security improvements. An example of something I would think I can disable:

        LoadModule cgi_module

    Others, like these, I am not so sure can be disabled:

        LoadModule rewrite_module
        LoadModule version_module
        LoadModule proxy_module
        LoadModule setenvif_module

    I am running PHP 5 as a scripting engine, with no databases, and that is it. My loaded modules are:

        core mod_win32 mpm_winnt http_core mod_so mod_access_compat mod_actions
        mod_alias mod_allowmethods mod_asis mod_auth_basic mod_authn_core
        mod_authn_file mod_authz_core mod_authz_groupfile mod_authz_host
        mod_authz_user mod_autoindex mod_dav_lock mod_dir mod_env mod_headers
        mod_include mod_info mod_isapi mod_log_config mod_cache_disk mod_mime
        mod_negotiation mod_proxy mod_proxy_ajp mod_rewrite mod_setenvif
        mod_socache_shmcb mod_ssl mod_status mod_version mod_php5

    Read the article

  • Facebook "Like" on iPhone/iPad

    - by half_brick
    Has anyone implemented the Facebook "Like" button on iPhone/iPad? I've done a general Facebook Connect implementation before, but it appears they're phasing that out in favour of OAuth and the Graph API. We're trying to give users the ability to "Like" items of content in the app. Each item of content has a corresponding URL for its representation on the website. Will it be possible to implement this kind of functionality (without implementing anything on the server side)? And is there a library that will let us do this easily? Thanks

    Read the article

  • Store system passwords with easy and secure access

    - by CodeShining
    I have to handle several VPSes/services, and I always set passwords to be different and random. What kind of storage do you suggest for keeping these passwords safe while letting me access them easily? These passwords are used for services like databases, web server users and so on that run customers' services, so it's really important to keep them in a safe place, and to keep them strong. I'm currently storing them in a Google Drive spreadsheet describing user, password, role and service. Do you know of better solutions? I'd like to keep them on a remote service, to make sure I don't have to keep backup copies (in case my HDD were to fail somehow). I work on *nix platforms (so Windows-specific solutions are not an option here).

    Read the article

  • Separating a merged array of arithmetic and geometric series

    - by user1814037
    My friend asked me an interesting question: given an array of positive integers in increasing order, separate them into two series, an arithmetic sequence and a geometric sequence. The given array is such that a solution does exist, and the union of the numbers of the two sequences must be the given array. The two series can have common elements, i.e. they need not be disjoint. The ratio of the geometric series can be fractional.

    Example:
    Given series: 2, 4, 6, 8, 10, 12, 25
    AP: 2, 4, 6, 8, 10, 12
    GP: 4, 10, 25

    I tried a few examples but could not find a general method. I even tried a graph implementation, introducing edges between elements that follow a particular sequence, but could not reach a solution.
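    A hedged building block rather than a full answer: verifiers that a brute-force search over seed elements could call. The cross-multiplication trick handles the fractional ratio exactly, without floating point.

        import java.util.List;

        public class SeriesCheck {
            // A candidate subsequence is an arithmetic progression if consecutive
            // differences are all equal.
            public static boolean isArithmetic(List<Integer> xs) {
                if (xs.size() < 3) return true;
                int d = xs.get(1) - xs.get(0);
                for (int i = 2; i < xs.size(); i++) {
                    if (xs.get(i) - xs.get(i - 1) != d) return false;
                }
                return true;
            }

            // The ratio may be fractional (4, 10, 25 has ratio 5/2), so compare with
            // cross-multiplication instead of division: b/a == c/b  <=>  b*b == a*c.
            public static boolean isGeometric(List<Integer> xs) {
                for (int i = 2; i < xs.size(); i++) {
                    long mid = xs.get(i - 1);
                    if (mid * mid != (long) xs.get(i - 2) * xs.get(i)) return false;
                }
                return true;
            }

            public static void main(String[] args) {
                System.out.println(isArithmetic(List.of(2, 4, 6, 8, 10, 12))); // true
                System.out.println(isGeometric(List.of(4, 10, 25)));           // true
            }
        }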

    Read the article

  • Why is this global variable not being changed?

    - by user398314
    Why does this turn up null instead of being set to the data returned from the Ajax call? It must be something simple I am overlooking.

        var message;

        $(document).ready(function() {
            fbFetchMessage();
            alert(message);
        });

        function fbFetchMessage() {
            var url = "http://graph.facebook.com/companyname/feed?callback=?";
            $.getJSON(url, function(json) {
                message = json.data[0].message;
            });
        }

    Read the article

  • VPC on Windows 7 very slow network

    - by Shigg
    I have a Windows 2003 virtual machine which I use for website testing. I've just installed Windows 7 and am using the new version of VPC (not XP Mode). When I try to copy a file (I need to copy some big databases across), I get a copy speed of about 20 KB per second. Copying from one PC to another on the real network transfers files at 13 MB per second. Any ideas what may be causing this? I've turned off differential network compression in Windows 7. The virtual HD is on a separate physical drive from the OS. I'm running Windows 7 64-bit on a dual Xeon with 16 GB RAM and 10,000 RPM drives. I tried installing VPC 2007, but Windows blocks it from running, saying it's not compatible. Many thanks for any ideas.

    Read the article

  • High Availability Clustering and Virtualization

    - by tmcallaghan
    I'm trying to understand how the various virtualization vendors (specifically Amazon EC2, but also VMware and Xen) enable software vendors to provide a real HA solution in an environment where the servers are virtualized. Specifically, if I'm running any HA application (Exchange, databases, etc.) I need to ensure that my redundant virtual "servers" aren't located on the same physical server. Using in-house virtualization solutions (VMware, Xen, etc.) I can provision accordingly, as well as check the virtual-to-physical arrangement. I could, however, accidentally vMotion onto the same physical hardware. With EC2, I don't even have the ability at provision time to select different physical servers. Since their Cluster Compute Instances are one virtual server per physical server, they seem to be the only way to guarantee I don't have a false sense of redundancy. Any ideas or thoughts would be helpful. What are others doing about this problem? If the vendors provided an API where I could get something as simple as a unique physical-system identifier, I could at least know whether I'm going to have an issue. -Tim

    Read the article

  • MySQL server simple insert/update/delete queries are taking a long time to execute

    - by ElGabbu
    We have a VPS hosting server with a MySQL server running on it. We host several databases for clients' websites. Recently we have noticed that insert/update and delete queries are taking a long time to execute, sometimes up to 30 seconds. I use the following command to watch these queries being executed:

        watch -n1 mysqladmin proc stat

    We have still not been able to track down the root of this problem. I would appreciate any pointers as to what we can check or improve to resolve the issue. Thanks

    Read the article

  • How do you revert a file to a revision within an integration in Perforce?

    - by tenpn
    I have two branches, let's call them mainline and dev1. I regularly integrate a file from mainline to dev1. The last-but-one time I integrated the file, it was at revision 3 in mainline. The last time, it was at revision 5. Now, for mysterious reasons lost to the sands of time, I want to work in dev1 with revision 4 of the file from mainline. Is that possible? I can't integrate it across, as P4V complains that all revisions have already been integrated. I've tried right-click, "Get this revision" on the revision graph, but that only updates which version of the file I have in mainline, not in dev1.

    Read the article

  • Oracle SQL: Multiple Subqueries UNIONed Without Running the Original Query Multiple Times

    - by Bob
    So I've got a very large database, and I need to work on a subset (~1% of the data) to dump into an Excel spreadsheet to make a graph. Ideally, I could select out the subset of data and then run multiple select queries on that subset, which are then UNIONed together. Is this even possible? I can't seem to find anyone else trying to do this, and it would improve the performance of my current query quite a bit. Right now I have something like this:

        SELECT (
            SELECT (
                SELECT ( long list of requirements )
                UNION
                SELECT ( slightly different long list of requirements )
            )
        )

    and it would be nice if I could group the commonalities of the two long requirement lists and have only the simple differences between the two SELECT statements being UNIONed.
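    A hedged sketch of one possibility: Oracle's WITH clause (subquery factoring) names the shared subset once, and each branch of the UNION then selects from it. The table and column names below are invented, shown inside a minimal JDBC harness for dumping CSV.

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.ResultSet;
        import java.sql.Statement;

        public class SubsetUnionDump {
            public static void main(String[] args) throws Exception {
                // The "long list of requirements" is written once, in the factored subset.
                String sql =
                    "WITH subset AS ( " +
                    "  SELECT id, category, value FROM big_table " +
                    "  WHERE /* long list of requirements */ 1 = 1 " +
                    ") " +
                    "SELECT category, value FROM subset WHERE value > 10 " +
                    "UNION " +
                    "SELECT category, value FROM subset WHERE category = 'X'";
                try (Connection conn = DriverManager.getConnection(args[0], args[1], args[2]); // url, user, password
                     Statement st = conn.createStatement();
                     ResultSet rs = st.executeQuery(sql)) {
                    while (rs.next()) {
                        // CSV rows, ready for the spreadsheet
                        System.out.println(rs.getString("category") + "," + rs.getBigDecimal("value"));
                    }
                }
            }
        }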

    Read the article

  • Keeping track of threads when creating them recursively

    - by 66replica
    I'm currently working on some code for my Programming Languages course. I can't post the code, but I'm permitted to talk about some high-level concepts that I'm struggling with and receive input on them. Basically, the code is a recursive DFS on an undirected graph that I'm supposed to convert into a concurrent program. My professor has already specified that I should create my threads in the recursive DFS method and then join them in another method. Basically, I'm having trouble thinking of how I should keep track of the threads I'm creating so that I can join all of them in the other method. I'm thinking of an array of Threads, but I'm unsure how to add each new thread to the array, or even whether that's the right direction.
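    A hedged sketch of one way to do the bookkeeping (the names are illustrative, not the course code): push every spawned thread onto a concurrent queue at creation time, then have the other method drain the queue and join. A plain array is awkward here because the number of threads isn't known up front and several threads append at once.

        import java.util.Queue;
        import java.util.concurrent.ConcurrentLinkedQueue;

        public class ConcurrentDfs {
            // Hypothetical graph node; the real code will have its own type, and
            // its visited-marking must itself be thread-safe.
            public interface Node {
                Iterable<Node> unvisitedNeighbors();
            }

            private final Queue<Thread> threads = new ConcurrentLinkedQueue<>();

            // Recursive DFS: one new thread per neighbor, each recorded before it starts.
            public void dfs(Node node) {
                for (Node next : node.unvisitedNeighbors()) {
                    Thread t = new Thread(() -> dfs(next));
                    threads.add(t); // record first, so joinAll() can never miss a running thread
                    t.start();
                }
            }

            // The "other method": drain the queue and join everything. A joined thread
            // has finished, so any threads it spawned were already queued before it died,
            // and the loop picks them up too.
            public void joinAll() throws InterruptedException {
                Thread t;
                while ((t = threads.poll()) != null) {
                    t.join();
                }
            }
        }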

    Read the article

  • Graphing special functions in Matlab (2D Bessel)

    - by favala
    I'm trying to get a plot where I can see clear ripples at the base, but which otherwise looks like a Gaussian. What I have is kind of unsatisfactory: the ripples aren't very noticeable, it has a gritty quality that obscures the image a bit, and if you rotate the graph so it's viewed in 2D (so it looks like a circle) I'm not even sure it's quite how it should be (the concentric circles seem to be more evenly spaced in the real thing). So, is there a better way to do this?

        a = 2*pi;
        [X Y] = meshgrid(-1:0.01:1, -1:0.01:1);
        R = sqrt(X.^2 + Y.^2);
        f = (2*besselj(1, a*R(:))./R(:)).^2;
        mesh(X, Y, reshape(f, size(X)));
        axis vis3d;
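    A hedged side note on the function itself: this has the form of the Airy diffraction pattern. With the code above, the plotted surface is

        f(r) = \left( \frac{2\, J_1(a r)}{r} \right)^{2}, \qquad r = \sqrt{x^2 + y^2},

    whereas the classical Airy pattern normalizes by the full Bessel argument, $\left( 2\, J_1(a r) / (a r) \right)^2$, which peaks at 1 at $r = 0$; the two differ only by the constant factor $a^2$.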

    Read the article
