Search Results

Search found 59975 results on 2399 pages for 'data comparison'.


  • JQuery Ajax Get passing parameters

    - by George
    Hi, I am working on my first MVC application and am running into a bit of a problem. I have a data table, and when a row is clicked I want to return the detail for that row. I have a function set up as:

        function rowClick(item) {
            $("#detailInfo").data("width.dialog", 800);
            $.ajax({
                type: "GET",
                contentType: "application/json; charset=utf-8",
                url: "<%= Url.Action("GetDetails", "WarningRecognition")%>",
                data: "",
                dataType: "json",
                success: function(data) {
                    // do some stuff...and show results
                }
            });
        }

    The problem I am running into is the passing of the "item". It calls the controller action, which looks like this:

        public JsonResult GetDetails(string sDetail)
        {
            Debug.WriteLine(Request.QueryString["sDetail"]);
            Debug.WriteLine("sDetail: " + sDetail);
            var myDetailsDao = new WarnRecogDetailsDao();
            return new JsonResult { Data = myDetailsDao.SelectDetailedInfo(Convert.ToInt32(sDetail)) };
        }

    But it never shows anything as the "sDetail". It does hit the action, but nothing is passed to it. I have read that you pass the parameter via the data option, but I have tried every combination I can think of and it never shows up. Tried: data: {"item"}, data: {sDetail[item]}, data: {sDetail[" + item + "]}. Any help is greatly appreciated. Thanks in advance. Geo...
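
    A likely fix, sketched here as an assumption rather than quoted from the thread: pass the value as an object literal whose key matches the controller parameter name, and let jQuery serialize it into the query string:

        // hypothetical fix: the key must match the action's parameter name
        data: { sDetail: item },

    With that in place, ASP.NET MVC model binding maps the sDetail query-string value onto the string sDetail parameter of GetDetails.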


  • Somewhat lost with jquery + php + json

    - by Luis Armando
    I am starting to use the jQuery $.ajax() but I can't get back what I want to... I send this:

        $(function(){
            $.ajax({
                url: "graph_data.php",
                type: "POST",
                data: "casi=56&nada=48&nuevo=98&perfecto=100&vales=50&apenas=70&yeah=60",
                dataType: "json",
                error: function (xhr, desc, exceptionobj) {
                    document.writeln("El error de XMLHTTPRequest dice: " + xhr.responseText);
                },
                success: function (json) {
                    if (json.error) {
                        alert(json.error);
                        return;
                    }
                    var output = "";
                    for (p in json) {
                        output += p + " : " + json[p] + "\n";
                    }
                    document.writeln("Results: \n\n" + output);
                }
            });
        });

    and my php is:

        <?php
        $data = $_POST['data'];
        function array2json($data){
            $json = $data;
            return json_encode($json);
        }
        ?>

    and when I execute this I come out with:

        Results:

    just like that. I used to have an echo array2json statement in the php, but it just gave back gibberish... I really don't know what I am doing wrong, and I've googled for about 3 hours just getting basically the same stuff. Also I don't know how to pass parameters to the "data:" option in the $.ajax function in another way, like getting info from the web page. Can anyone please help me?

    Edit: I did what you suggested and it prints the data now, thank you very much =) However, I was wondering how I can send the data to the "data:" part in jQuery so it takes it from, let's say, user input. Also, I was checking the php documentation and it says I'm allowed to write something like:

        json_encode($a, JSON_HEX_TAG | JSON_HEX_APOS | JSON_HEX_QUOT | JSON_HEX_AMP)

    However, if I do that I get an error saying that json_encode accepts 1 parameter and I'm giving 2... any idea why? I'm using php 5.2
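
    Two observations, offered as a sketch rather than the accepted answer: the fields in the request string arrive as individual POST keys (casi, nada, ...), so $_POST['data'] does not exist; and json_encode() only gained its second $options parameter in PHP 5.3, which is why passing the JSON_HEX_* flags fails on 5.2. A minimal handler that echoes the posted fields back as JSON:

        <?php
        // Each form field is its own key in $_POST; encode the whole array.
        header('Content-Type: application/json');
        echo json_encode($_POST);
        ?>

    On the jQuery side, data can also be built from user input as an object, e.g. data: { casi: $('#casi').val() }, which jQuery serializes for you.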


  • How to get values of xml elements?

    - by user187580
    Hi, I have some xml data and I am trying to access some elements. The structure of the data is as below (using print_r($data)). I can get $data->{'parent'}->title, it works, but if I try to get the value of href using $data->{'parent'}->link[0]->{'@attributes'}->href it doesn't work... any ideas? Thanks

        SimpleXMLElement Object
        (
            [@attributes] => Array ( [children] => 29 [modules] => 0 )
            [title] => Test title
            [link] => Array
            (
                [0] => SimpleXMLElement Object
                    ( [@attributes] => Array ( [href] => data.php?id=2322 [rel] => self [type] => application/xml ) )
                [1] => SimpleXMLElement Object
                    ( [@attributes] => Array ( [href] => data.php?id=2342 [rel] => alternate [type] => text/html ) )
            )
            [parent] => SimpleXMLElement Object
            (
                [@attributes] => Array ( [children] => 6 [modules] => 0 )
                [title] => Top
                [link] => Array
                (
                    [0] => SimpleXMLElement Object
                        ( [@attributes] => Array ( [href] => /data.php?id=5763 [rel] => self [type] => application/xml ) )
                    [1] => SimpleXMLElement Object
                        ( [@attributes] => Array ( [href] => /data.php?id=2342 [rel] => alternate [type] => text/html ) )
                )
            )
        )
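
    For context, a sketch of the usual SimpleXML idiom (not taken from the original thread): @attributes is only how print_r displays attributes; they are not reachable under that key. Attributes are read with array syntax or the attributes() method:

        <?php
        // Both lines are equivalent; the cast to string unwraps the SimpleXMLElement.
        $href = (string) $data->parent->link[0]['href'];
        $href = (string) $data->parent->link[0]->attributes()->href;
        ?>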


  • Querying a huge database table takes too much time in MySQL

    - by Vijay
    Hi all, I am running sql queries on a mysql db table that has 110M+ unique records for a whole day.

    Problem: Whenever I run any query with a "where" clause it takes at least 30-40 mins. Since I want to generate most of the data on the next day, I need access to the whole db table. Could you please guide me to optimize / restructure the deployment model?

    Site description:

        mysql Ver 14.12 Distrib 5.0.24, for pc-linux-gnu (i686) using readline 5.0
        4 GB RAM, Dual Core dual CPU 3GHz
        RHEL 3

    my.cnf contents:

        [root@reports root]# cat /etc/my.cnf
        [mysqld]
        datadir=/data/mysql/data/
        socket=/tmp/mysql.sock
        sort_buffer_size = 2000000
        table_cache = 1024
        key_buffer = 128M
        myisam_sort_buffer_size = 64M
        # Default to using old password format for compatibility with mysql 3.x
        # clients (those using the mysqlclient10 compatibility package).
        old_passwords=1
        [mysql.server]
        user=mysql
        basedir=/data/mysql/data/
        [mysqld_safe]
        err-log=/data/mysql/data/mysqld.log
        pid-file=/data/mysql/data/mysqld.pid

    DB table details:

        CREATE TABLE `RAW_LOG_20100504` (
          `DT` date default NULL,
          `GATEWAY` varchar(15) default NULL,
          `USER` bigint(12) default NULL,
          `CACHE` varchar(12) default NULL,
          `TIMESTAMP` varchar(30) default NULL,
          `URL` varchar(60) default NULL,
          `VERSION` varchar(6) default NULL,
          `PROTOCOL` varchar(6) default NULL,
          `WEB_STATUS` int(5) default NULL,
          `BYTES_RETURNED` int(10) default NULL,
          `RTT` int(5) default NULL,
          `UA` varchar(100) default NULL,
          `REQ_SIZE` int(6) default NULL,
          `CONTENT_TYPE` varchar(50) default NULL,
          `CUST_TYPE` int(1) default NULL,
          `DEL_STATUS_DEVICE` int(1) default NULL,
          `IP` varchar(16) default NULL,
          `CP_FLAG` int(1) default NULL,
          `USER_LOCATE` bigint(15) default NULL
        ) ENGINE=MyISAM DEFAULT CHARSET=latin1 MAX_ROWS=200000000;

    Thanks in advance! Regards,
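
    One detail worth noting (an observation on the schema shown, not advice from the thread): the table defines no indexes at all, so every WHERE clause forces a full scan of 110M+ rows. A sketch of a first step, assuming queries filter on DT and USER:

        -- hypothetical index choices; pick the columns your WHERE clauses actually use
        ALTER TABLE RAW_LOG_20100504
            ADD INDEX idx_dt (DT),
            ADD INDEX idx_user (USER);

    Building the index once is expensive on MyISAM, but afterwards selective queries no longer need to read the whole table.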


  • Java/Hibernate using interfaces over the entities.

    - by Dennetik
    I am using annotated Hibernate, and I'm wondering whether the following is possible. I have to set up a series of interfaces representing the objects that can be persisted, and an interface for the main database class containing several operations for persisting these objects (... an API for the database). Below that, I have to implement these interfaces and persist them with Hibernate. So I'll have, for example:

        public interface Data {
            public String getSomeString();
            public void setSomeString(String someString);
        }

        @Entity
        public class HbnData implements Data, Serializable {
            @Column(name = "some_string")
            private String someString;

            public String getSomeString() { return this.someString; }
            public void setSomeString(String someString) { this.someString = someString; }
        }

    Now, this works fine, sort of. The trouble comes when I want nested entities. The interface of what I'd want is easy enough:

        public interface HasData {
            public Data getSomeData();
            public void setSomeData(Data someData);
        }

    But when I implement the class, following the interface as below, I get an error from Hibernate saying it doesn't know the class "Data":

        @Entity
        public class HbnHasData implements HasData, Serializable {
            @OneToOne(cascade = CascadeType.ALL)
            private Data someData;

            public Data getSomeData() { return this.someData; }
            public void setSomeData(Data someData) { this.someData = someData; }
        }

    The simple change would be to change the type from "Data" to "HbnData", but that would obviously break the interface implementation, and thus make the abstraction impossible. Can anyone explain to me how to implement this in a way that it will work with Hibernate?
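
    A sketch of one common way out, assuming plain JPA annotations (this is not taken from an accepted answer): keep the field typed as the interface but tell Hibernate the concrete entity via targetEntity:

        @Entity
        public class HbnHasData implements HasData, Serializable {
            // targetEntity names the mapped class; the field keeps the interface type
            @OneToOne(targetEntity = HbnData.class, cascade = CascadeType.ALL)
            private Data someData;
        }

    The getters and setters stay exactly as declared in HasData, so the abstraction survives.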


  • How to emulate thread-local storage in user space in C++?

    - by vprajan
    I am working on a mobile platform over Nucleus RTOS. It uses the Nucleus threading system, but it doesn't have support for explicit thread local storage, i.e. the TlsAlloc, TlsSetValue, TlsGetValue, TlsFree APIs. The platform doesn't have user space pthreads either. I found that the __thread storage modifier is present in most C++ compilers, but I don't know how to make it work for my kind of usage. How can the __thread keyword be mapped to explicit thread local storage? I read many articles but nothing clearly answers the following basic questions:

    Is a __thread variable different for each thread? How do I write to it and read from it? Does each thread have exactly one copy of the variable?

    The following is the pthread based implementation:

        pthread_key_t m_key;

        struct Data : Noncopyable {
            Data(T* value, void* owner) : value(value), owner(owner) {}
            int* value;
        };

        inline ThreadSpecific() {
            int error = pthread_key_create(&m_key, destroy);
            if (error)
                CRASH();
        }

        inline ~ThreadSpecific() {
            pthread_key_delete(m_key); // Does not invoke destructor functions.
        }

        inline T* get() {
            Data* data = static_cast<Data*>(pthread_getspecific(m_key));
            return data ? data->value : 0;
        }

        inline void set(T* ptr) {
            ASSERT(!get());
            pthread_setspecific(m_key, new Data(ptr, this));
        }

    How can the above code use the __thread way to set and get a specific value? Where/when does the create and delete happen? If this is not possible, how do I write custom pthread_setspecific / pthread_getspecific style APIs? I tried using a C++ global map indexed uniquely for each thread and retrieving data from it, but it didn't work well.
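
    To answer the basic semantics with a sketch (general __thread behavior, assuming the toolchain supports it; Nucleus-specific details are not covered): yes, each thread gets its own zero-initialized copy, and reads and writes use plain variable syntax:

        // Each thread sees a private copy; no key management is needed.
        static __thread Data* tls_data = 0;

        inline T* get() { return tls_data ? tls_data->value : 0; }

        inline void set(T* ptr) {
            ASSERT(!get());
            tls_data = new Data(ptr, this);  // leaks unless freed before thread exit
        }

    The catch is that __thread only allows trivially-initialized types and provides no destructor hook, so the cleanup that pthread_key_create's destroy callback gives you has to be arranged some other way (e.g. an explicit call before the thread exits).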


  • JSON data not loaded into content page

    - by Chelseawillrecover
    I am trying to append my JSON data to the content page, but the data is not loaded. When I use console.log I can see the data appearing.

    JS:

        $(document).on('pagebeforeshow', '#blogposts', function() {
            //$.mobile.showPageLoadingMsg();
            $.ajax({
                url: "http://howtodeployit.com/category/daily-devotion/?json=recentstories&callback=",
                dataType: "json",
                jsonpCallback: 'successCallback',
                async: true,
                beforeSend: function() { $.mobile.showPageLoadingMsg(true); },
                complete: function() { $.mobile.hidePageLoadingMsg(); },
                success: function(data) {
                    $.each(data.posts, function(i, val) {
                        console.log(val.title);
                        $('<li/>')
                            .append([$("<h3>", {html: val.title}), $("<p>", {html: val.excerpt})])
                            .wrapInner('<a href="#devotionpost" onclick="showPost(' + val.id + ')"></a>')
                            .appendTo('#postList');
                        return (i !== 4);
                        console.log('#postlist');
                    });
                },
                error: function(data) { alert("Data not found"); }
            });
        });

    HTML:

        <!-- Page: Blog Posts -->
        <div id="blogposts" data-role="page">
            <div data-role="header" data-position="fixed">
                <h2>My Blog Posts</h2>
            </div><!-- header -->
            <div data-role="content">
                <ul id="postlist">
                </ul><!-- content -->
            </div>
            <div class="load-more">Load More Posts...</div>
        </div><!-- page -->
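
    One mismatch is visible in the snippets themselves (an observation, not a quoted answer): the success handler appends to #postList while the markup declares id="postlist". ID lookups are case-sensitive, so nothing is ever appended. A one-line sketch of the fix:

        .appendTo('#postlist');  // must match <ul id="postlist"> exactly

    The console.log('#postlist') after the return statement is also unreachable, which is why it never prints.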


  • Why use shorter VARCHAR(n) fields?

    - by chryss
    It is frequently advised to choose database field sizes to be as narrow as possible. I am wondering to what degree this applies to SQL Server 2005 VARCHAR columns: storing 10-letter English words in a VARCHAR(255) field will not take up more storage than in a VARCHAR(10) field. Are there other reasons to restrict the size of VARCHAR fields to stick as closely as possible to the size of the data? I'm thinking of:

    Performance: Is there an advantage to using a smaller n when selecting, filtering and sorting on the data? Memory, including on the application side (C++)? Style/validation: How important do you consider restricting column size to force nonsensical data imports to fail (such as 200-character surnames)? Anything else?

    Background: I help data integrators with the design of data flows into a database-backed system. They have to use an API that restricts their choice of data types. For character data, only VARCHAR(n) with n <= 255 is available; CHAR, NCHAR, NVARCHAR and TEXT are not. We're trying to lay down some "good practices" rules, and the question has come up whether there is a real detriment to using VARCHAR(255) even for data where real maximum sizes will never exceed 30 bytes or so. Typical data volumes for one table are 1-10 million records with up to 150 attributes. Query performance (SELECT, with frequently extensive WHERE clauses) and application-side retrieval performance are paramount.


  • Pickled my dictionary from ZODB but got a smaller one back?

    - by Someone Someoneelse
    I use ZODB and I want to copy my 'database_1.fs' file to another 'database_2.fs', so I opened the root dictionary of 'database_1.fs' and pickle.dump'ed it to a text file. Then I pickle.load'ed it into a dictionary variable, and in the end I updated the root dictionary of the other 'database_2.fs' with the dictionary variable. It works, but I wonder why the size of 'database_1.fs' is not equal to the size of the other 'database_2.fs'. They are still copies of each other.

        from ZODB.FileStorage import FileStorage  # imports added; the original snippet omitted them
        from ZODB.DB import DB
        import transaction

        def openstorage(store):  # opens the database
            data = {}
            data['file'] = FileStorage(store)
            data['db'] = DB(data['file'])
            data['conn'] = data['db'].open()
            data['root'] = data['conn'].root()
            return data

        def getroot(dicty):
            return dicty['root']

        def closestorage(dicty):  # close the database after saving
            transaction.commit()
            dicty['file'].close()
            dicty['db'].close()
            dicty['conn'].close()
            transaction.get().abort()

    Then that's what I do:

        import pickle

        loc1 = 'G:\\database_1.fs'
        op1 = openstorage(loc1)
        root1 = getroot(op1)

        loc2 = 'G:\\database_2.fs'
        op2 = openstorage(loc2)
        root2 = getroot(op2)

        >>> len(root1)
        215
        >>> len(root2)
        0

        pickle.dump(root1, open("save.txt", "wb"))
        item = pickle.load(open("save.txt", "rb"))  # now item is a dictionary
        root2.update(item)

        closestorage(op1)
        closestorage(op2)

        # after I open both of the databases
        # I get the same keys in both databases
        # but database_2.fs is smaller than database_1.fs in size, I mean.
        >>> len(root2) == len(root1) == 215  # they have the same keys
        True

    Note: (1) there are persistent dictionaries and lists in the original database_1.fs; (2) both of them have the same length and the same indexes.
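
    A likely explanation, offered as an assumption based on how ZODB's FileStorage works rather than from the thread: a .fs file is append-only and keeps every old revision of every object until it is packed, so a long-lived database_1.fs carries history that the freshly written database_2.fs does not. Packing the original should bring the sizes close together:

        # Remove old object revisions; only the current state remains afterwards.
        op1['db'].pack()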


  • Memory increases with Java UDP Server

    - by Trevor
    I have a simple UDP server that creates a new thread for processing incoming data. While testing it by sending about 100 packets/second, I notice that its memory usage continues to increase. Is there any leak evident from my code below?

    Here is the code for the server:

        public class UDPServer {
            public static void main(String[] args) {
                UDPServer server = new UDPServer(15001);
                server.start();
            }

            private int port;

            public UDPServer(int port) {
                this.port = port;
            }

            public void start() {
                try {
                    DatagramSocket ss = new DatagramSocket(this.port);
                    while (true) {
                        byte[] data = new byte[1412];
                        DatagramPacket receivePacket = new DatagramPacket(data, data.length);
                        ss.receive(receivePacket);
                        new DataHandler(receivePacket.getData()).start();
                    }
                } catch (IOException e) {
                    e.printStackTrace();
                }
            }
        }

    Here is the code for the new thread that processes the data. For now, the run() method doesn't do anything:

        public class DataHandler extends Thread {
            private byte[] data;

            public DataHandler(byte[] data) {
                this.data = data;
            }

            @Override
            public void run() {
                System.out.println("run");
            }
        }
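
    A sketch of a less allocation-heavy variant (an assumption about the cause, not a diagnosis from the thread): at 100 packets/second the loop creates 100 threads and 100 fresh 1412-byte buffers per second, which keeps the heap growing until GC catches up. Reusing a small pool and copying only the received bytes bounds both:

        import java.util.Arrays;
        import java.util.concurrent.ExecutorService;
        import java.util.concurrent.Executors;

        // Inside start(): a fixed pool instead of one thread per packet.
        ExecutorService pool = Executors.newFixedThreadPool(4);
        while (true) {
            byte[] data = new byte[1412];
            DatagramPacket receivePacket = new DatagramPacket(data, data.length);
            ss.receive(receivePacket);
            // Copy only the bytes actually received before handing off.
            byte[] payload = Arrays.copyOf(receivePacket.getData(), receivePacket.getLength());
            pool.execute(new DataHandler(payload));
        }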


  • Nginx Rails app can't deploy

    - by user3596718
    I have an issue with my rails application running with passenger and nginx, hosted on Ubuntu 12.04. In the nginx.conf file below, my "example.com" (regular HTML) and "redmine.example.com" (Rails app) are working perfectly, but my "crete.example.com" (another Rails app) is showing "502 Bad Gateway". Both apps are hosted in /var/data with the same permissions and ownerships; I also tried different ports. I can't think of anything else to try.

        worker_processes 1;
        events { worker_connections 1024; }

        http {
            passenger_root /usr/lib/ruby/vendor_ruby/phusion_passenger/locations.ini;
            include mime.types;
            default_type application/octet-stream;
            sendfile on;
            keepalive_timeout 65;

            server {
                listen 80;
                server_name example.com;
                root /opt/nginx/html;
            }

            server {
                server_name redmine.example.com;
                root /var/data/redmine/public;
                passenger_enabled on;
                location ~ ^/<SUBURI>(/.*|$) {
                    alias /var/data/redmine/public$1;
                    passenger_base_uri /redmine;
                    passenger_app_root /var/data/redmine;
                    passenger_document_root /var/data/redmine/public;
                    passenger_enabled on;
                }
            }

            server {
                server_name crete.example.com;
                root /var/data/crete/public;
                passenger_enabled on;
                location ~ ^/<SUBURI>(/.*|$) {
                    alias /var/data/crete/public$1;
                    passenger_base_uri /crete;
                    passenger_app_root /var/data/crete;
                    passenger_document_root /var/data/crete/public;
                    passenger_enabled on;
                }
            }
        }

    These are my Ruby and Rails versions:

        ruby 2.0.0p451 (2014-02-24 revision 45167) [x86_64-linux]
        Rails 4.1.0

    My nginx error.log:

        2014/05/02 12:29:50 [error] 3343#0: *4 upstream prematurely closed connection while reading
        response header from upstream, client: xxx.xx.xx.xx, server: crete.example.com,
        request: "GET / HTTP/1.1", upstream: "passenger:/tmp/passenger.1.0.3323/generation-0/request:",
        host: "crete.example.com"

    Any other conf file you might need to solve this, don't hesitate to ask.
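
    A diagnostic sketch (an assumption, not from the thread): "upstream prematurely closed connection" from Passenger usually means the app process died while booting, and the real error only shows up when the app is started by hand:

        # Run from the app root; boot errors print to the console instead of a bare 502.
        cd /var/data/crete
        RAILS_ENV=production bundle exec rails server

    Missing gems, an unset secret_key_base (new in Rails 4.1), or database configuration are common culprits to check first.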


  • Making a Delete and Reply button in jQuery

    - by Branko Ostojic
    This is my second post on the website. Of all the other sites I tried, this one gave the most accurate and useful information! I'm in a bit of trouble with buttons: I have a task to make an inbox and to add a "reply" and "delete" button to every instance of the message. I was wondering if there is a better way to do that than forcing the HTML code into the script, because every message is dynamically generated. Any help and/or suggestions would be very appreciated! (The objects are called from a JSON file.)

        $(document).ready(function(){
            $.getJSON('public/js/data.json', function(json){
                $.each(json.data, function(i, data){
                    var output = '';
                    if(data.from.id != '234' && data.from.name != 'Alan Ford'){
                        $("#inbox").append(
                            output += '<div class="post">'+
                                // the name of the person who sent the message, and the subject
                                '<div class="h1">'+data.from.name+' - '+data.subject+'</div>'+
                                // the content of the message
                                '<div class="content">'+data.message_formatted+'</div>'+
                                // buttons should be squeezed left of the date
                                // the date the message was sent
                                '<div class="time">'+data.date_sent_formatted.formatted+'</div>'+
                            '</div>'
                        );
                    }
                });
            });
        });

        var date_sent = convertToDateTime();

        function delete_message(id){
            console.log('Delete message with id: '+id);
        }

        function reply_message(id, sender){
            console.log('Message id: '+id);
            console.log('Reply to: '+sender);
        }

    The complete code is in the JSFiddle, just copy/pasted!
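
    A sketch of one common pattern (field names like data.id are assumptions about the JSON): render the buttons with data-* attributes and use delegated event handlers, so buttons on dynamically appended messages work without inlined onclick handlers:

        // Inside the string being built for each message:
        output += '<button class="reply" data-id="' + data.id + '" data-sender="' + data.from.name + '">Reply</button>' +
                  '<button class="delete" data-id="' + data.id + '">Delete</button>';

        // One delegated handler each, attached once to the static #inbox container:
        $('#inbox').on('click', '.delete', function () {
            delete_message($(this).data('id'));
        });
        $('#inbox').on('click', '.reply', function () {
            reply_message($(this).data('id'), $(this).data('sender'));
        });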


  • Heapsort not working in Python for list of strings using heapq module

    - by VSN
    I was reading the Python 2.7 documentation when I came across the heapq module. I was interested in the heapify() and heappop() methods, so I decided to write a simple heapsort program for integers:

        from heapq import heapify, heappop

        user_input = raw_input("Enter numbers to be sorted: ")
        data = map(int, user_input.split(","))
        new_data = []
        for i in range(len(data)):
            heapify(data)
            new_data.append(heappop(data))
        print new_data

    This worked like a charm. To make it more interesting, I thought I would take away the integer conversion and leave the items as strings. Logically, it should make no difference and the code should work as it did for integers:

        from heapq import heapify, heappop

        user_input = raw_input("Enter numbers to be sorted: ")
        data = user_input.split(",")
        new_data = []
        for i in range(len(data)):
            heapify(data)
            print data
            new_data.append(heappop(data))
        print new_data

    Note: I added a print statement in the for loop to see the heapified list. Here's the output when I ran the script:

        $ python heapsort.py
        Enter numbers to be sorted: 4, 3, 1, 9, 6, 2
        [' 1', ' 3', ' 2', ' 9', ' 6', '4']
        [' 2', ' 3', '4', ' 9', ' 6']
        [' 3', ' 6', '4', ' 9']
        [' 6', ' 9', '4']
        [' 9', '4']
        ['4']
        [' 1', ' 2', ' 3', ' 6', ' 9', '4']

    The reasoning I applied was that since the strings are being compared, the tree should be the same as if they were numbers. As is evident, the heapify didn't work correctly after the third iteration. Could someone help me figure out if I am missing something here? I'm running Python 2.4.5 on RedHat 3.4.6-9. Thanks, VSN
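
    The output itself points at the cause (an observation, not a quoted answer): splitting "4, 3, 1, 9, 6, 2" on "," leaves leading spaces, so the items are ' 3', ' 1', and so on, except '4', which has none. String comparison is character by character, and ' ' sorts before every digit, so '4' compares greater than ' 9'. The heap is behaving correctly on the strings it was given. A sketch of the fix:

        # Strip whitespace so the strings compare on their digits.
        data = [s.strip() for s in user_input.split(",")]

    (Even then, string order is lexicographic: '10' < '9'; converting to int is the only fully correct comparison.)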


  • "render as JSON" is display JSON as text instead of returning it to AJAX call as expected

    - by typoknig
    I'm navigating to the index action of MyController. Some of the JavaScript on the index page makes an AJAX call back to myAction in MyController. I expect the myAction action to return some data as JSON to my AJAX call so I can do something with the data client side, but instead of returning the data as JSON like I want, the data is being displayed as text.

    Example of my Grails controller:

        class MyController {
            def index() {
                render( view: "myView" )
            }

            def myAction() {
                def mapOfStuff = [ "foo": "foo", "bar": "bar" ]
                render mapOfStuff as JSON
            }
        }

    Example of my JavaScript:

        $( function() {
            function callMyAction() {
                $.ajax({
                    dataType: 'json',
                    url: base_url + '/myController/myAction',
                    success: function( data ) {
                        $(function() {
                            if( data.foo ) { alert( data.foo ); }
                            if( data.bar ) { alert( data.bar ); }
                        });
                    }
                });
            }
        });

    What I expect is that my page will render, then my JavaScript will be called, then two alerts will display. Instead, the JSON array is displayed as text in my browser window:

        {"foo":"foo","bar":"bar"}

    At this point the last segment of the URL in my address bar is myAction and not index. Now if I manually enter the URL of the index page and press refresh, all works as expected. I have half a dozen AJAX calls I do the exact same way and none of them are having problems. What is the deal here?

    UPDATE: I have noticed something. When I set a break point in the index action of MyController and another one in the myAction action, the break point in myAction gets hit BEFORE the break point in index, even though I am navigating to the index. This is obviously closer to the root cause of my problem, but why is it happening?
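
    One reading of the symptoms, offered as an assumption rather than a confirmed diagnosis: the address bar ending in /myAction means the browser performed a full navigation to that action, not an XHR, so the JSON is rendered as the page itself. That happens when the trigger's default behavior isn't suppressed, e.g. a link or submit button wired to the call. A sketch:

        // hypothetical trigger; the selector is not from the original post
        $('#showData').on('click', function (e) {
            e.preventDefault();  // stop the browser navigating to /myController/myAction
            callMyAction();
        });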


  • PHP - Use isset inside function not working..?

    - by pnichols
    I have a PHP script that, when loaded, first checks whether it was loaded via a POST, and if not, whether GET['id'] is a number. Now I know I could do this like this:

        if(isset($_GET['id']) AND isNum($_GET['id'])) {
            ...
        }

        function isNum($data) {
            $data = sanitize($data);
            if ( ctype_digit($data) ) {
                return true;
            } else {
                return false;
            }
        }

    But I would like to do it this way:

        if(isNum($_GET['id'])) {
            ...
        }

        function isNum($data) {
            if ( isset($data) ) {
                $data = sanitize($data);
                if ( ctype_digit($data) ) {
                    return true;
                } else {
                    return false;
                }
            } else {
                return false;
            }
        }

    When I try it this way, if $_GET['id'] isn't set, I get a warning of undefined index: id... It's like as soon as I put my $_GET['id'] within my function call, it sends a warning, even though my function will check whether that var is set or not. Is there another way to do what I want to do, or am I forced to always check isset and then add my other requirements?
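
    The reason, with a small sketch (standard PHP semantics, not a quote from the thread): arguments are evaluated before the function runs, so $_GET['id'] is read, and the notice raised, before isNum() ever gets a chance to call isset(). Checking before the call, or passing a default, sidesteps the lookup:

        <?php
        function isNum($data = null) {
            // The missing-index notice is avoided because the caller
            // resolves $_GET['id'] safely and passes null when absent.
            return $data !== null && ctype_digit((string) $data);
        }

        $id = isset($_GET['id']) ? $_GET['id'] : null;
        if (isNum($id)) {
            // ...
        }
        ?>

    (The sanitize() call from the original is omitted here since its definition isn't shown.)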


  • null reference problems with c#

    - by alex
    Hi: In one of my window forms, I created an instance of a class to do some work in the background. I want to capture the debug messages in that class and display them in the textbox on the window form. Here is what I did:

        class A // window form class
        {
            public void startBackGroundTask()
            {
                B backGroundTask = new B(this);
            }

            public void updateTextBox(string data)
            {
                if (data != null)
                {
                    if (this.Textbox.InvokeRequired)
                    {
                        appendUIDelegate updateDelegate = new appendUIDelegate(updateUI);
                        try
                        {
                            this.Invoke(updateDelegate, data);
                        }
                        catch (Exception e)
                        {
                            Console.WriteLine(e.Message);
                        }
                    }
                    else
                    {
                        updateUI(data);
                    }
                }
            }

            private void updateUI(string data)
            {
                if (this.Textbox.InvokeRequired)
                {
                    this.Textbox.Invoke(new appendUIDelegate(this.updateUI), data);
                }
                else
                {
                    // update the text box
                    this.Textbox.AppendText(data);
                    this.Textbox.AppendText(Environment.NewLine);
                }
            }

            private delegate void appendUIDelegate(string data);
        }

        class B // background task
        {
            A curUI;

            public B(A UI)
            {
                curUI = UI;
            }

            private void test()
            {
                // do some work here, then log the debug message to the UI.
                curUI.updateTextBox("message");
            }
        }

    I keep getting a null reference exception after this.Invoke(updateDelegate, data) is called. I know passing "this" as a parameter is strange, but I want to send the debug message to my window form. Please help. Thanks
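
    One plausible cause, sketched as an assumption (the stack trace isn't shown in the post): Control.Invoke fails if the form's window handle hasn't been created yet, e.g. when the background task starts logging before the form is shown. Guarding on IsHandleCreated narrows it down:

        // Marshal to the UI thread only once the handle exists.
        if (this.IsHandleCreated)
        {
            this.Invoke(updateDelegate, data);
        }

    If the exception persists, checking whether Textbox itself is null (e.g. updateTextBox called before InitializeComponent) is the other usual suspect.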


  • Changing only the div content in jQuery Mobile

    - by user3659748
    I have this:

        <div data-role="page" id="Home">
            <div data-role="header">
                <h2 class="header">My app</h2>
            </div>
            <div data-role="content">
            </div>
            <div data-role="footer" data-position="fixed">
                <div data-role="navbar">
                    <ul>
                        <li><a href="partials/home.html" data-icon="home" data-transition="slide">Home</a></li>
                        <li><a href="partials/about.html" data-icon="info">About</a></li>
                    </ul>
                </div>
            </div>
        </div>
        </body>

    I want that when a user clicks one of the links, the content of that page (for example home.html, which contains just "home") slides into the content div. Is that possible or not? Thanks :)
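
    A sketch of one way to do it (the selectors are assumptions based on the markup above): intercept the navbar links and load the partial into the page's content div instead of letting jQuery Mobile navigate away:

        // Delegated handler so it also works after jQM enhances the navbar.
        $(document).on('click', '[data-role="navbar"] a', function (e) {
            e.preventDefault();
            $('#Home [data-role="content"]').load($(this).attr('href'));
        });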


  • Tighter code - JavaScript object array

    - by Scott Silvi
    Inside the callback of a $.getJSON call, I have the code outlined below. The first for block aggregates 'total' and assigns values to sov[i]. The map function calculates the percentage of total. I then instantiate a variable called sovData. With the jQuery Flot graph, any objects that are empty aren't added to the pie chart, so this works for up to 7 different slices/datasets. What I'd like to do is only initialize the ones I need (e.g. sovData would have up to 'howMany - 1' (kws.length - 1) objects inside of it), likely via something similar to dashboards[i] & sov[i]. How would I do this?

    Code:

        var sov = [],
            howMany = kws.length,
            total = 0,
            i = 0;

        for ( i; i < howMany; i++) {
            total += sov[ i ] = +parseInt(data.sov['sov' + ( i+1 ) ], 10) || 0;
        }

        var dashboards = data.dashboards;

        sov = $.map( sov, function(v) {
            var s = Math.round( ( (v / total) * 10e3 ) / 100);
            return s < 1 ? 1 : s;
        });

        var sovData = [
            { label : dashboards[0], data : sov[0] },
            { label : dashboards[1], data : sov[1] },
            { label : dashboards[2], data : sov[2] },
            { label : dashboards[3], data : sov[3] },
            { label : dashboards[4], data : sov[4] },
            { label : dashboards[5], data : sov[5] },
            { label : dashboards[6], data : sov[6] }
        ];
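
    A sketch of the tightened version (assuming dashboards and sov line up index for index, as the hard-coded literal implies): build the array with $.map so only as many slices exist as there are values:

        // One slice per computed share; empty trailing entries never get created.
        var sovData = $.map(sov, function (v, i) {
            return { label: dashboards[i], data: v };
        });

    $.map flattens returned arrays but passes objects through, so each element becomes one {label, data} slice.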


  • Introducing the Earthquake Locator – A Bing Maps Silverlight Application, part 1

    - by Bobby Diaz
    Update: Live demo and source code now available!  The recent wave of earthquakes (no pun intended) being reported in the news got me wondering about the frequency and severity of earthquakes around the world. Since I’ve been doing a lot of Silverlight development lately, I decided to scratch my curiosity with a nice little Bing Maps application that will show the location and relative strength of recent seismic activity. Here is a list of technologies this application will utilize, so be sure to have everything downloaded and installed if you plan on following along. Silverlight 3 WCF RIA Services Bing Maps Silverlight Control * Managed Extensibility Framework (optional) MVVM Light Toolkit (optional) log4net (optional) * If you are new to Bing Maps or have not signed up for a Developer Account, you will need to visit www.bingmapsportal.com to request a Bing Maps key for your application. Getting Started We start out by creating a new Silverlight Application called EarthquakeLocator and specify that we want to automatically create the Web Application Project with RIA Services enabled. I cleaned up the web app by removing the Default.aspx and EarthquakeLocatorTestPage.html. Then I renamed the EarthquakeLocatorTestPage.aspx to Default.aspx and set it as my start page. I also set the development server to use a specific port, as shown below. RIA Services Next, I created a Services folder in the EarthquakeLocator.Web project and added a new Domain Service Class called EarthquakeService.cs. This is the RIA Services Domain Service that will provide earthquake data for our client application. I am not using LINQ to SQL or Entity Framework, so I will use the <empty domain service class> option. We will be pulling data from an external Atom feed, but this example could just as easily pull data from a database or another web service. This is an important distinction to point out because each scenario I just mentioned could potentially use a different Domain Service base class (i.e. LinqToSqlDomainService<TDataContext>). Now we can start adding Query methods to our EarthquakeService that pull data from the USGS web site. Here is the complete code for our service class: using System; using System.Collections.Generic; using System.IO; using System.Linq; using System.ServiceModel.Syndication; using System.Web.DomainServices; using System.Web.Ria; using System.Xml; using log4net; using EarthquakeLocator.Web.Model;   namespace EarthquakeLocator.Web.Services {     /// <summary>     /// Provides earthquake data to client applications.     /// </summary>     [EnableClientAccess()]     public class EarthquakeService : DomainService     {         private static readonly ILog log = LogManager.GetLogger(typeof(EarthquakeService));           // USGS Data Feeds: http://earthquake.usgs.gov/earthquakes/catalogs/         private const string FeedForPreviousDay =             "http://earthquake.usgs.gov/earthquakes/catalogs/1day-M2.5.xml";         private const string FeedForPreviousWeek =             "http://earthquake.usgs.gov/earthquakes/catalogs/7day-M2.5.xml";           /// <summary>         /// Gets the earthquake data for the previous week.         
/// </summary>         /// <returns>A queryable collection of <see cref="Earthquake"/> objects.</returns>         public IQueryable<Earthquake> GetEarthquakes()         {             var feed = GetFeed(FeedForPreviousWeek);             var list = new List<Earthquake>();               if ( feed != null )             {                 foreach ( var entry in feed.Items )                 {                     var quake = CreateEarthquake(entry);                     if ( quake != null )                     {                         list.Add(quake);                     }                 }             }               return list.AsQueryable();         }           /// <summary>         /// Creates an <see cref="Earthquake"/> object for each entry in the Atom feed.         /// </summary>         /// <param name="entry">The Atom entry.</param>         /// <returns></returns>         private Earthquake CreateEarthquake(SyndicationItem entry)         {             Earthquake quake = null;             string title = entry.Title.Text;             string summary = entry.Summary.Text;             string point = GetElementValue<String>(entry, "point");             string depth = GetElementValue<String>(entry, "elev");             string utcTime = null;             string localTime = null;             string depthDesc = null;             double? magnitude = null;             double? latitude = null;             double? longitude = null;             double? depthKm = null;               if ( !String.IsNullOrEmpty(title) && title.StartsWith("M") )             {                 title = title.Substring(2, title.IndexOf(',')-3).Trim();                 magnitude = TryParse(title);             }             if ( !String.IsNullOrEmpty(point) )             {                 var values = point.Split(' ');                 if ( values.Length == 2 )                 {                     latitude = TryParse(values[0]);                     longitude = TryParse(values[1]);                 }             }             if ( !String.IsNullOrEmpty(depth) )             {                 depthKm = TryParse(depth);                 if ( depthKm != null )                 {                     depthKm = Math.Round((-1 * depthKm.Value) / 100, 2);                 }             }             if ( !String.IsNullOrEmpty(summary) )             {                 summary = summary.Replace("</p>", "");                 var values = summary.Split(                     new string[] { "<p>" },                     StringSplitOptions.RemoveEmptyEntries);                   if ( values.Length == 3 )                 {                     var times = values[1].Split(                         new string[] { "<br>" },                         StringSplitOptions.RemoveEmptyEntries);                       if ( times.Length > 0 )                     {                         utcTime = times[0];                     }                     if ( times.Length > 1 )                     {                         localTime = times[1];                     }                       depthDesc = values[2];                     depthDesc = "Depth: " + depthDesc.Substring(depthDesc.IndexOf(":") + 2);                 }             }               if ( latitude != null && longitude != null )             {                 quake = new Earthquake()                 {                     Id = entry.Id,                     Title = entry.Title.Text,                     Summary = entry.Summary.Text,                     Date = entry.LastUpdatedTime.DateTime,                     Url = 
entry.Links.Select(l => Path.Combine(l.BaseUri.OriginalString,                         l.Uri.OriginalString)).FirstOrDefault(),                     Age = entry.Categories.Where(c => c.Label == "Age")                         .Select(c => c.Name).FirstOrDefault(),                     Magnitude = magnitude.GetValueOrDefault(),                     Latitude = latitude.GetValueOrDefault(),                     Longitude = longitude.GetValueOrDefault(),                     DepthInKm = depthKm.GetValueOrDefault(),                     DepthDesc = depthDesc,                     UtcTime = utcTime,                     LocalTime = localTime                 };             }               return quake;         }           private T GetElementValue<T>(SyndicationItem entry, String name)         {             var el = entry.ElementExtensions.Where(e => e.OuterName == name).FirstOrDefault();             T value = default(T);               if ( el != null )             {                 value = el.GetObject<T>();             }               return value;         }           private double? TryParse(String value)         {             double d;             if ( Double.TryParse(value, out d) )             {                 return d;             }             return null;         }           /// <summary>         /// Gets the feed at the specified URL.         /// </summary>         /// <param name="url">The URL.</param>         /// <returns>A <see cref="SyndicationFeed"/> object.</returns>         public static SyndicationFeed GetFeed(String url)         {             SyndicationFeed feed = null;               try             {                 log.Debug("Loading RSS feed: " + url);                   using ( var reader = XmlReader.Create(url) )                 {                     feed = SyndicationFeed.Load(reader);                 }             }             catch ( Exception ex )             {                 log.Error("Error occurred while loading RSS feed: " + url, ex);             }               return feed;         }     } }   The only method that will be generated in the client side proxy class, EarthquakeContext, will be the GetEarthquakes() method. The reason being that it is the only public instance method and it returns an IQueryable<Earthquake> collection that can be consumed by the client application. GetEarthquakes() calls the static GetFeed(String) method, which utilizes the built in SyndicationFeed API to load the external data feed. You will need to add a reference to the System.ServiceModel.Web library in order to take advantage of the RSS/Atom reader. The API will also allow you to create your own feeds to serve up in your applications. Model I have also created a Model folder and added a new class, Earthquake.cs. The Earthquake object will hold the various properties returned from the Atom feed. Here is a sample of the code for that class. Notice the [Key] attribute on the Id property, which is required by RIA Services to uniquely identify the entity. using System; using System.Collections.Generic; using System.Linq; using System.Runtime.Serialization; using System.ComponentModel.DataAnnotations;   namespace EarthquakeLocator.Web.Model {     /// <summary>     /// Represents an earthquake occurrence and related information.     /// </summary>     [DataContract]     public class Earthquake     {         /// <summary>         /// Gets or sets the id.         
/// </summary>         /// <value>The id.</value>         [Key]         [DataMember]         public string Id { get; set; }           /// <summary>         /// Gets or sets the title.         /// </summary>         /// <value>The title.</value>         [DataMember]         public string Title { get; set; }           /// <summary>         /// Gets or sets the summary.         /// </summary>         /// <value>The summary.</value>         [DataMember]         public string Summary { get; set; }           // additional properties omitted     } }   View Model The recent trend to use the MVVM pattern for WPF and Silverlight provides a great way to separate the data and behavior logic out of the user interface layer of your client applications. I have chosen to use the MVVM Light Toolkit for the Earthquake Locator, but there are other options out there if you prefer another library. That said, I went ahead and created a ViewModel folder in the Silverlight project and added a EarthquakeViewModel class that derives from ViewModelBase. Here is the code: using System; using System.Collections.ObjectModel; using System.ComponentModel.Composition; using System.ComponentModel.Composition.Hosting; using Microsoft.Maps.MapControl; using GalaSoft.MvvmLight; using EarthquakeLocator.Web.Model; using EarthquakeLocator.Web.Services;   namespace EarthquakeLocator.ViewModel {     /// <summary>     /// Provides data for views displaying earthquake information.     /// </summary>     public class EarthquakeViewModel : ViewModelBase     {         [Import]         public EarthquakeContext Context;           /// <summary>         /// Initializes a new instance of the <see cref="EarthquakeViewModel"/> class.         /// </summary>         public EarthquakeViewModel()         {             var catalog = new AssemblyCatalog(GetType().Assembly);             var container = new CompositionContainer(catalog);             container.ComposeParts(this);             Initialize();         }           /// <summary>         /// Initializes a new instance of the <see cref="EarthquakeViewModel"/> class.         /// </summary>         /// <param name="context">The context.</param>         public EarthquakeViewModel(EarthquakeContext context)         {             Context = context;             Initialize();         }           private void Initialize()         {             MapCenter = new Location(20, -170);             ZoomLevel = 2;         }           #region Private Methods           private void OnAutoLoadDataChanged()         {             LoadEarthquakes();         }           private void LoadEarthquakes()         {             var query = Context.GetEarthquakesQuery();             Context.Earthquakes.Clear();               Context.Load(query, (op) =>             {                 if ( !op.HasError )                 {                     foreach ( var item in op.Entities )                     {                         Earthquakes.Add(item);                     }                 }             }, null);         }           #endregion Private Methods           #region Properties           private bool autoLoadData;         /// <summary>         /// Gets or sets a value indicating whether to auto load data.         
/// </summary>         /// <value><c>true</c> if auto loading data; otherwise, <c>false</c>.</value>         public bool AutoLoadData         {             get { return autoLoadData; }             set             {                 if ( autoLoadData != value )                 {                     autoLoadData = value;                     RaisePropertyChanged("AutoLoadData");                     OnAutoLoadDataChanged();                 }             }         }           private ObservableCollection<Earthquake> earthquakes;         /// <summary>         /// Gets the collection of earthquakes to display.         /// </summary>         /// <value>The collection of earthquakes.</value>         public ObservableCollection<Earthquake> Earthquakes         {             get             {                 if ( earthquakes == null )                 {                     earthquakes = new ObservableCollection<Earthquake>();                 }                   return earthquakes;             }         }           private Location mapCenter;         /// <summary>         /// Gets or sets the map center.         /// </summary>         /// <value>The map center.</value>         public Location MapCenter         {             get { return mapCenter; }             set             {                 if ( mapCenter != value )                 {                     mapCenter = value;                     RaisePropertyChanged("MapCenter");                 }             }         }           private double zoomLevel;         /// <summary>         /// Gets or sets the zoom level.         /// </summary>         /// <value>The zoom level.</value>         public double ZoomLevel         {             get { return zoomLevel; }             set             {                 if ( zoomLevel != value )                 {                     zoomLevel = value;                     RaisePropertyChanged("ZoomLevel");                 }             }         }           #endregion Properties     } }   The EarthquakeViewModel class contains all of the properties that will be bound to by the various controls in our views. Be sure to read through the LoadEarthquakes() method, which handles calling the GetEarthquakes() method in our EarthquakeService via the EarthquakeContext proxy, and also transfers the loaded entities into the view model’s Earthquakes collection. Another thing to notice is what’s going on in the default constructor. I chose to use the Managed Extensibility Framework (MEF) for my composition needs, but you can use any dependency injection library or none at all. To allow the EarthquakeContext class to be discoverable by MEF, I added the following partial class so that I could supply the appropriate [Export] attribute: using System; using System.ComponentModel.Composition;   namespace EarthquakeLocator.Web.Services {     /// <summary>     /// The client side proxy for the EarthquakeService class.     /// </summary>     [Export]     public partial class EarthquakeContext     {     } }   One last piece I wanted to point out before moving on to the user interface, I added a client side partial class for the Earthquake entity that contains helper properties that we will bind to later: using System;   namespace EarthquakeLocator.Web.Model {     /// <summary>     /// Represents an earthquake occurrence and related information.     /// </summary>     public partial class Earthquake     {         /// <summary>         /// Gets the location based on the current Latitude/Longitude.         
/// </summary>         /// <value>The location.</value>         public string Location         {             get { return String.Format("{0},{1}", Latitude, Longitude); }         }           /// <summary>         /// Gets the size based on the Magnitude.         /// </summary>         /// <value>The size.</value>         public double Size         {             get { return (Magnitude * 3); }         }     } }   View Now the fun part! Usually, I would create a Views folder to place all of my View controls in, but I took the easy way out and added the following XAML code to the default MainPage.xaml file. Be sure to add the bing prefix associating the Microsoft.Maps.MapControl namespace after adding the assembly reference to your project. The MVVM Light Toolkit project templates come with a ViewModelLocator class that you can use via a static resource, but I am instantiating the EarthquakeViewModel directly in my user control. I am setting the AutoLoadData property to true as a way to trigger the LoadEarthquakes() method call. The MapItemsControl found within the <bing:Map> control binds its ItemsSource property to the Earthquakes collection of the view model, and since it is an ObservableCollection<T>, we get the automatic two way data binding via the INotifyCollectionChanged interface. <UserControl x:Class="EarthquakeLocator.MainPage"     xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"     xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"     xmlns:d="http://schemas.microsoft.com/expression/blend/2008"     xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"     xmlns:bing="clr-namespace:Microsoft.Maps.MapControl;assembly=Microsoft.Maps.MapControl"     xmlns:vm="clr-namespace:EarthquakeLocator.ViewModel"     mc:Ignorable="d" d:DesignWidth="640" d:DesignHeight="480" >     <UserControl.Resources>         <DataTemplate x:Key="EarthquakeTemplate">             <Ellipse Fill="Red" Stroke="Black" StrokeThickness="1"                      Width="{Binding Size}" Height="{Binding Size}"                      bing:MapLayer.Position="{Binding Location}"                      bing:MapLayer.PositionOrigin="Center">                 <ToolTipService.ToolTip>                     <StackPanel>                         <TextBlock Text="{Binding Title}" FontSize="14" FontWeight="Bold" />                         <TextBlock Text="{Binding UtcTime}" />                         <TextBlock Text="{Binding LocalTime}" />                         <TextBlock Text="{Binding DepthDesc}" />                     </StackPanel>                 </ToolTipService.ToolTip>             </Ellipse>         </DataTemplate>     </UserControl.Resources>       <UserControl.DataContext>         <vm:EarthquakeViewModel AutoLoadData="True" />     </UserControl.DataContext>       <Grid x:Name="LayoutRoot">           <bing:Map x:Name="map" CredentialsProvider="--Your-Bing-Maps-Key--"                   Center="{Binding MapCenter, Mode=TwoWay}"                   ZoomLevel="{Binding ZoomLevel, Mode=TwoWay}">             <bing:MapItemsControl ItemsSource="{Binding Earthquakes}"                                   ItemTemplate="{StaticResource EarthquakeTemplate}" />         </bing:Map>       </Grid> </UserControl>   The EarthquakeTemplate defines the Ellipse that will represent each earthquake, the Width and Height that are determined by the Magnitude, the Position on the map, and also the tooltip that will appear when we mouse over each data point. 
    Running the application gives us the following result (screenshot omitted: the Bing map with earthquake markers, shown with a tooltip example). That concludes this portion of our show, but I plan on implementing additional functionality in later blog posts. Be sure to come back soon to see the next installments in this series. Enjoy!

    Additional Resources: USGS Earthquake Data Feeds; Brad Abrams shows how RIA Services and MVVM can work together


  • How I use schemas.

    - by Alexander Kuznetsov
    I use schemas to simplify granting permissions. For tables and views, I have three schemas:

      • Data: the actual data my customers need. Can only be modified via sprocs.
      • Staging: only visible to data loaders and devs. Full INSERT/UPDATE/DELETE privileges for those who see it.
      • Config: the configuration data used in loads, only visible to data loaders and devs. Can only be modified via sprocs.

    For sprocs/UDFs I have the following schemas: Readers, Writers, ETL, ConfigReaders, ConfigWriters. Also I have dbo... (read more)
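
    A sketch of what this buys (the role names here are assumptions; the schema names are from the post): one grant per schema replaces per-object grants, and new objects inherit the right permissions automatically:

        -- Customers read Data only through the sproc schema:
        GRANT EXECUTE ON SCHEMA::Readers TO CustomerRole;
        -- Loaders get full DML on Staging and execute rights on ETL sprocs:
        GRANT SELECT, INSERT, UPDATE, DELETE ON SCHEMA::Staging TO LoaderRole;
        GRANT EXECUTE ON SCHEMA::ETL TO LoaderRole;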


  • Algorithmia Source Code released on CodePlex

    - by FransBouma
    Following the release of our BCL Extensions Library on CodePlex, we have now released the source-code of Algorithmia on CodePlex! Algorithmia is an algorithm and data-structures library for .NET 3.5 or higher and is one of the pillars LLBLGen Pro v3's designer is built on. The library contains many data-structures and algorithms, and the source-code is well documented and commented, often with links to official descriptions and papers of the algorithms and data-structures implemented. The source-code is shared using Mercurial on CodePlex and is licensed under the friendly BSD2 license. User documentation is not available at the moment but will be added soon.

    One of the main design goals of Algorithmia was to create a library which contains implementations of well-known algorithms which weren't already implemented in .NET itself. This way, more developers out there can enjoy the results of many years of what the field of Computer Science research has delivered. Some algorithms and datastructures are known in .NET but are re-implemented because the implementation in .NET isn't efficient for many situations or lacks features. An example is the linked list in .NET: it doesn't have an O(1) concat operation, as every node refers to the containing LinkedList object it's stored in. This is bad for algorithms which rely on O(1) concat operations, like the Fibonacci heap implementation in Algorithmia. Algorithmia therefore contains a linked list with an O(1) concat feature.

    The following functionality is available in Algorithmia:

      • Command, Command management. This system is usable to build a fully undo/redo aware system by building your object graph using command-aware classes. The Command pattern is implemented using a system which allows transparent undo-redo and command grouping, so you can use it to make a class undo/redo aware, set properties and use its contents without using commands at all. The Commands namespace is the namespace to start. Classes you'd want to look at are CommandifiedMember, CommandifiedList and KeyedCommandifiedList. See the CommandQueueTests in the test project for examples.

      • Graphs, Graph algorithms. Algorithmia contains a sophisticated graph class hierarchy and algorithms implemented onto them: non-directed and directed graphs, as well as a subgraph view class, which can be used to create a view onto an existing graph class which can be self-maintaining. Algorithms include transitive closure, topological sorting and others. A feature-rich depth-first search (DFS) crawler is available so DFS based algorithms can be implemented quickly. All graph classes are undo/redo aware, as they can be set to be 'commandified'. When a graph is 'commandified' it will do its housekeeping through commands, which makes it fully undo-redo aware, so you can remove, add and manipulate the graph and undo/redo the activity automatically without any extra code. If you define the properties of the class you set as the vertex type using CommandifiedMember, you can manipulate the properties of vertices and the graph contents with full undo/redo functionality without any extra code.

      • Heaps. Heaps are data-structures which always have the largest or smallest item stored in them as the 'root'. Extracting the root from the heap makes the heap determine the next in line to be the 'maximum' or 'minimum' (max-heap vs. min-heap; all heaps in Algorithmia can do both). Algorithmia contains various heaps, among them an implementation of the Fibonacci heap, one of the most efficient heap datastructures known today, especially when you want to merge different instances into one.

      • Priority queues. Priority queues are specializations of heaps. Algorithmia contains a couple of them.

      • Sorting. What's an algorithm library without sort algorithms? Algorithmia implements a couple of sort algorithms which sort the data in-place. This aspect is important in situations where you want to sort the elements in a buffer/list/ICollection in-place, so all data stays in the data-structure it is already stored in.

      • PropertyBag. It re-implements Tony Allowatt's original idea in .NET 3.5 specific syntax, which is to have a generic property bag and to be able to build an object in code at runtime which can be bound to a property grid for editing. This is handy for when you have data / settings stored in XML or another format, and want to create an editable form of it without creating many editors.

      • IEditableObject/IDataErrorInfo implementations. It contains default implementations for IEditableObject and IDataErrorInfo (EditableObjectDataContainer for IEditableObject and ErrorContainer for IDataErrorInfo), which make it very easy to implement these interfaces (just a few lines of code) without having to worry about bookkeeping during databinding. They work seamlessly with CommandifiedMember as well, so your undo/redo aware code can use them out of the box.

      • EventThrottler. It contains an event throttler, which can be used to filter out duplicate events in an event stream coming into an observer from an event. This can greatly enhance performance in your UI without needing to do anything other than hooking it up so it's placed between the event source and your real handler. If your UI is flooded with events from data-structures observed by your UI or a middle tier, you can use this class to filter out duplicates to avoid redundant updates to UI elements or to avoid having observers choke on many redundant events.

      • Small, handy stuff. A MultiValueDictionary, which can store multiple unique values per key instead of one as with the default Dictionary, and is also merge-aware so you can merge two into one. A Pair class, to quickly group two elements together. Multiple interfaces for helping with building a de-coupled, observer based system, and some utility extension methods for the defined data-structures.

    We regularly update the library with new code. If you have ideas for new algorithms or want to share your contribution, feel free to discuss it on the project's Discussions page or send us a pull request. Enjoy!


  • Personal search – the future of search

    - by jamiet
    [Four months ago I wrote a meandering blog post on another blogging site entitled Personal search – the future of search. The points I made therein are becoming more relevant to what I'm reading about and hoping to get involved in in the future, so I'm re-posting here to a wider audience to hopefully get some more feedback and gauge reaction to it. This has been prompted by the book Pull by David Siegel that is forming my current holiday reading (recommended to me by a commenter on my previous post Interesting things – Twitter annotations and your phone as a web server) and in particular by Siegel's notion of us all in the future having a personal online data vault.]

    My one-time colleague Paul Dawson recently wrote an article called The Future of Search and in it he proposed some interesting ideas. Some choice quotes:

      • The growth of Chinese search giant Baidu is an indicator that fully localised and tailored content and offerings have great traction with local audiences
      • This trend is already driving an increase in the use of specialist searches … Look at how Farecast is now integrated into Bing for example, or how Flightstats is now integrated into Google.
      • Search does not necessarily have to begin with a keyword, but could start instead with a click or a touch. Take a look at Retrievr. Start drawing a picture in the box and see what happens. This is certainly search without the need for typing in keywords
      • search technology has advanced greatly in recent years. The recent launch of Microsoft Live Labs' Pivot has given us a taste of what we can expect to see in the future

    This really got me thinking about where search might go in the future, and as my mind wandered I realised that as the amount of data that we collect about ourselves increases, so too will the need and the desire to search it. The amount of electronic data that exists about each and every person is increasing, and in the near future I fully expect that we are going to be able to store personal data such as:

      • A history of our location (in fact Google Latitude already offers this facility)
      • Recordings of all our phone conversations
      • Health information history (weight, blood pressure etc…)
      • Energy usage
      • Spending history
      • What films we watch, what radio stations we listen to
      • Voting history

    Of course, most of this stuff is already stored somewhere, but crucially we don't have easy access to it. My utilities supplier knows how much electricity I'm using, but if I want to know for myself I have to go and dig through my statements (assuming I have kept them). Similarly my doctor probably has ready access to all of my health records, my bank knows exactly what I have spent my money on, my cable supplier knows what I watch on TV and my mobile phone supplier probably knows exactly where I am and where I've been for the past few years. Strange then that none of this electronic information is available to me in a way that I can really make use of it; after all, it's MY information. It's MY data. I created it.

    That is set to change. As technologies mature and customers become more technically cognizant they will demand more access to the data that companies hold about them. The companies themselves will realise the benefit that they derive from giving users what they want and will embrace ways of providing it.
As a result the amount of data that we store about ourselves is going to increase exponentially and the desire to search and derive value from that data is going to grow with it; we are about to enter the era of the “personal datastore” and we will want, and need, to search through it in order to make sense of it all. Its interesting then that today when we think of search we think of search engines and yet in these personal datastores we’re referring to data that search engines can’t touch because WE own it and we (hopefully) choose to keep it private. Someone, I know not who, is going to lead in this space by making it easy for us to search our data and retrieve information that we have either forgotten or maybe didn’t even know in the first place. We will learn new things about ourselves and about our habits; we will share these findings with whomever we choose; we will compare what we discover with others; we will collaborate for mutual benefit and, most of all, we will educate ourselves as to how to live our lives better. Search will be the means to that end, it will enable us to make sense of the wealth of information that we will collect day in day out. The future of search is personal, why would we be interested in anything else? @Jamiet Share this post: email it! | bookmark it! | digg it! | reddit! | kick it! | live it!

    Read the article

  • Opinion on LastPass's security for the Average Joe [closed]

    - by Rook
    This is borderline objective/subjective, but I'm posting it here since I'm more interested in objective facts, without going into too much technical detail, than I am in user reviews of LastPass. I've always used offline methods for storing passwords and other sensitive data, but lately I keep hearing good things about LastPass. Indeed, it is more practical to have your data accessible from every computer you use, without syncing and its related problems, but the security aspect still troubles me. How (in a nutshell, for dummies) does LastPass keep your data secure, and can their employees see your data? What is your opinion of storing more sensitive data than usual this way (bank PIN codes, some financial / business-related material and so on - you know, the things that would really hurt if lost or phished)? Do you trust LastPass with such data? Any bad experiences? If someone is sniffing your wifi network, for example, would such data be easier than usual to sniff out?

    Read the article

< Previous Page | 556 557 558 559 560 561 562 563 564 565 566 567  | Next Page >