Search Results

Search found 65101 results on 2605 pages for 'big data'.

  • MSSQL 2005 FOR XML

    - by Lima
    Hi, I want to export data from a table to a specifically formatted XML file. I am fairly new to XML files, so what I am after may be quite obvious, but I just can't find what I am looking for on the net. The format of the XML results I need is:

        <data>
          <event start="May 28 2006 09:00:00 GMT"
                 end="Jun 15 2006 09:00:00 GMT"
                 isDuration="true"
                 title="Writing Timeline documentation"
                 image="http://simile.mit.edu/images/csail-logo.gif">
            A few days to write some documentation
          </event>
        </data>

    My table structure is:

        name VARCHAR(50),
        description VARCHAR(255),
        startDate DATETIME,
        endDate DATETIME

    (I am not too interested in the XML fields image or isDuration at this point in time.) I have tried:

        SELECT [name]
              ,[description]
              ,[startDate]
              ,[endTime]
        FROM [testing].[dbo].[time_timeline]
        FOR XML RAW('event'), ROOT('data'), TYPE

    which gives me:

        <data>
          <event name="Test1" description="Test 1 Description...." startDate="1900-01-01T00:00:00" endTime="1900-01-01T00:00:00" />
          <event name="Test2" description="Test 2 Description...." startDate="1900-01-01T00:00:00" endTime="1900-01-01T00:00:00" />
        </data>

    What I am missing is that the description needs to be outside of the event attributes, inside the <event> tag as its text content. Is anyone able to point me in the correct direction, or point me to a tutorial or similar on how to accomplish this? Thanks, Matt
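
    A minimal sketch of one way to get that layout with FOR XML PATH (available in SQL Server 2005), reusing the table and column names from the question; attributes are mapped with '@' aliases and the description becomes the element's text content:

        SELECT [name]        AS '@title',
               [startDate]   AS '@start',
               [endTime]     AS '@end',
               [description] AS 'text()'
        FROM   [testing].[dbo].[time_timeline]
        FOR XML PATH('event'), ROOT('data'), TYPE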

  • C++: How to read a string of text and create an object of its class

    - by user1777711
    After reading a text file, the following information is available:

        Point2D, [3, 2]
        Line3D, [7, 12, 3], [-9, 13, 68]
        Point3D, [1, 3, 8]
        Line2D, [5, 7], [3, 8]
        Point2D, [3, 2]
        Line3D, [7, -12, 3], [9, 13, 68]
        Point3D, [6, 9, 5]
        Point2D, [3, 2]
        Line3D, [70, -120, -3], [-29, 1, 268]
        Line3D, [25, -69, -33], [-2, -41, 58]
        Point3D, [6, 9, -50]

    The first field, separated by the comma delimiter, is the class name: one of the four classes Point2D, Line3D, Point3D and Line2D. How do I store each line as an object of the relevant class? For example, when the first line Point2D, [3, 2] is read, it should be stored as a Point2D object with the data [3, 2]. The issue is which container I should pick: vector, set, map or list. I was thinking of building such a data set, but I can't just use new Point2D(), since Point2D is the parent of Point3D, Line2D is the parent of Line3D, and there is no common parent of Point2D and Line2D. How can I create objects of these classes in a container, e.g. a vector, so that vector[0] is of the Point2D class with data [3, 2], and vector[1] is of the Line3D class with data [7, 12, 3], [-9, 13, 68]? Thanks for helping!

  • Calculating distance from latitude, longitude and height using a geocentric co-ordinate system

    - by Sarge
    I've implemented this method in JavaScript and I'm roughly 2.5% out, and I'd like to understand why. My input data is an array of points represented as latitude, longitude and the height above the WGS84 ellipsoid. These points are taken from data collected from a wrist-mounted GPS device during a marathon race.

    My algorithm was to convert each point to Cartesian geocentric co-ordinates and then compute the Euclidean distance (cf. Pythagoras). Cartesian geocentric is also known as Earth Centred Earth Fixed, i.e. it's an X, Y, Z co-ordinate system which rotates with the earth.

    My test data was the data from a marathon, so the distance should be very close to 42.26 km. However, the distance comes to about 43.4 km. I've tried various approaches and nothing changes the result by more than a metre, e.g. I replaced the height data with data from the NASA SRTM mission, I've set the height to zero, etc. Using Google, I found two points in the literature where lat, lon, height had been transformed, and my transformation algorithm matches them.

    What could explain this? Am I expecting too much from JavaScript's double representation? (The X, Y, Z numbers are very big but the differences between two points are very small.) My alternative is to move to computing the geodesic across the WGS84 ellipsoid using Vincenty's algorithm (or similar) and then calculating the Euclidean distance with the two heights, but this seems inaccurate. Thanks in advance for your help!
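
    For reference, a sketch of the standard WGS84 geodetic-to-ECEF conversion in JavaScript (constants are the WGS84 ellipsoid parameters; latitude and longitude in radians, height in metres), useful for checking the transformation step in isolation:

        var A  = 6378137.0;            // WGS84 semi-major axis (m)
        var E2 = 6.69437999014e-3;     // WGS84 first eccentricity squared

        function geodeticToEcef(lat, lon, h) {
            var sinLat = Math.sin(lat), cosLat = Math.cos(lat);
            var n = A / Math.sqrt(1 - E2 * sinLat * sinLat);  // prime vertical radius
            return {
                x: (n + h) * cosLat * Math.cos(lon),
                y: (n + h) * cosLat * Math.sin(lon),
                z: (n * (1 - E2) + h) * sinLat
            };
        }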

  • ASP.NET MVC 2 user input to SQL 2008 Database problems

    - by Rob
    After my publish in VS2010, the entire website loads and pulls data from the database perfectly. I can even create new users through the site with the correct key code, given out to whoever needs access. I have two connection strings in my web.config file. The first:

        <add xdt:Transform="SetAttributes" xdt:Locator="Match(name)" name="EveModelContainer"
             connectionString="metadata=res://*/Models.EdmModel.EveModel.csdl|res://*/Models.EdmModel.EveModel.ssdl|res://*/Models.EdmModel.EveModel.msl;provider=System.Data.SqlClient;provider connection string=&quot;Data Source=localhost;Initial Catalog=fleet;Persist Security Info=True;User ID=fleet;Password=****&quot;"
             providerName="System.Data.EntityClient" />

    The second:

        <add xdt:Transform="SetAttributes" xdt:Locator="Match(name)" name="ApplicationServices"
             connectionString="Data Source=localhost;Initial Catalog=fleet;Persist Security Info=True;User ID=fleet;Password=****;MultipleActiveResultSets=True" />

    The first one is the one that is needed to post data with the main application, EveModelContainer. Everything else is pulled using the standard ApplicationServices connection. Do you see anything wrong with my connection string? I'm at a complete loss here. The site works perfectly on my friend's server and not on mine... Could it be a provider issue? If I go to IIS 7's manager console and click .NET Users, I get a pop-up message saying the custom provider isn't a trusted provider and asking whether I want to allow it to run at a higher trust level. I'm at the point where I think it's either my connection string or this trusted-provider error... but I have no clue how to add to the trusted provider list... Thank you in advance!!!

  • Inorder tree traversal in binary tree in C

    - by srk
    In the code below, I'm creating a binary tree using the insert function and trying to display the inserted elements using the inorder function, which follows the logic of in-order traversal. When I run it, numbers are getting inserted, but when I try the inorder function (input 3), the program continues to the next input without displaying anything. I guess there might be a logical error. Please help me clear it. Thanks in advance...

        #include <stdio.h>
        #include <stdlib.h>

        int i;

        typedef struct ll {
            int data;
            struct ll *left;
            struct ll *right;
        } node;

        node *root1 = NULL;  // the root node

        void insert(node *root, int n)
        {
            if (root == NULL) {  // for the first (root) node
                root = (node *)malloc(sizeof(node));
                root->data = n;
                root->right = NULL;
                root->left = NULL;
            } else {
                if (n < (root->data)) {
                    root->left = (node *)malloc(sizeof(node));
                    insert(root->left, n);
                } else if (n > (root->data)) {
                    root->right = (node *)malloc(sizeof(node));
                    insert(root->right, n);
                } else {
                    root->data = n;
                }
            }
        }

        void inorder(node *root)
        {
            if (root != NULL) {
                inorder(root->left);
                printf("%d ", root->data);
                inorder(root->right);
            }
        }

        main()
        {
            int n, choice = 1;
            while (choice != 0) {
                printf("Enter choice--- 1 for insert, 3 for inorder and 0 for exit\n");
                scanf("%d", &choice);
                switch (choice) {
                case 1:
                    printf("Enter number to be inserted\n");
                    scanf("%d", &n);
                    insert(root1, n);
                    break;
                case 3:
                    inorder(root1);
                    break;
                default:
                    break;
                }
            }
        }
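
    The likely reason nothing prints: insert receives root by value, so the assignment root = malloc(...) never changes root1 in main; root1 stays NULL and inorder(root1) returns immediately. The recursive branches have a related problem, since they malloc an uninitialized child and then recurse into it. A minimal sketch of one fix is to have insert return the subtree root and assign it back:

        node *insert(node *root, int n)
        {
            if (root == NULL) {
                root = (node *)malloc(sizeof(node));
                root->data = n;
                root->left = root->right = NULL;
            } else if (n < root->data) {
                root->left = insert(root->left, n);
            } else if (n > root->data) {
                root->right = insert(root->right, n);
            }
            return root;
        }

        /* in main: */
        root1 = insert(root1, n);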

  • Native Endians and Auto Conversion

    - by KnickerKicker
    So the following converts big endians to little ones:

        uint32_t ntoh32(uint32_t v)
        {
            return (v << 24)
                 | ((v & 0x0000ff00) << 8)
                 | ((v & 0x00ff0000) >> 8)
                 | (v >> 24);
        }

    It works like a charm. I read 4 bytes from a big-endian file into char v[4] and pass it into the above function as

        ntoh32(*reinterpret_cast<uint32_t *>(v))

    That doesn't work, because my compiler (VS 2005) automatically converts the big-endian char[4] into a little-endian uint32_t when I do the cast. AFAIK this automatic conversion will not be portable, so I use

        uint32_t ntoh_4b(char v[])
        {
            uint32_t a = 0;
            a |= (unsigned char)v[0]; a <<= 8;
            a |= (unsigned char)v[1]; a <<= 8;
            a |= (unsigned char)v[2]; a <<= 8;
            a |= (unsigned char)v[3];
            return a;
        }

    Yes, the (unsigned char) cast is necessary. Yes, it is dog slow. There must be a better way. Anyone?
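
    A sketch of a common alternative, assuming a little-endian host as in the question: memcpy the four bytes into a uint32_t (which avoids the alignment and strict-aliasing problems of the reinterpret_cast) and then byte-swap once with the ntoh32 above. In practice, modern compilers recognise both the shift pattern and this one and reduce them to a single bswap instruction, so the shift-based version is usually not as slow as it looks:

        #include <cstring>

        uint32_t load_be32(const char *p)
        {
            uint32_t v;
            std::memcpy(&v, p, sizeof v);  // reinterpret the 4 bytes without UB
            return ntoh32(v);              // swap from big-endian file order
        }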

  • Titanium TableViewRow classname with custom rows

    - by pancake
    I would like to know in what way the 'className' property of a Ti.UI.TableViewRow helps when creating custom rows. For example, I populate a table view with custom rows in the following way:

        function populateTableView(tableView, data) {
            var rows = [];
            var row;
            var title, image;
            var i;

            for (i = 0; i < data.length; i++) {
                title = Ti.UI.createLabel({
                    text: data[i].title,
                    width: 100, height: 30, top: 5, left: 25
                });
                image = Ti.UI.createImage({
                    image: 'some_image.png',
                    width: 30, height: 30, top: 5, left: 5
                });
                /* and, like, 5+ more views or whatever */
                row = Ti.UI.createTableViewRow();
                row.add(title);
                row.add(image);
                rows.push(row);
            }
            tableView.setData(rows);
        }

    Of course, this example of a "custom" row is easily created using the standard title and image properties of the TableViewRow, but that isn't the point. How is the allocation of new labels, image views and other child views of a table view prevented in favour of their reuse? I know that in iOS this is achieved by using the method -[UITableView dequeueReusableCellWithIdentifier:] to fetch a row object from a 'reservoir' (so 'className' is 'identifier' here) that isn't currently being used for displaying data, but already has the needed child views laid out correctly in it, thus only requiring an update of the data contained within (text, image data, etc.). As this system is so unbelievably simple, I have a lot of trouble believing that the method employed by the Titanium API does not support this. After reading through the API and searching the web, I do however suspect this is the case. The 'className' property is recommended as an easy way to make table views more efficient in Titanium, but its relation to custom table view rows is not explained in any way. If anyone could clarify this matter for me, I would be very grateful.

  • Using an element against an entire list in Haskell

    - by Snick
    I have an assignment and am currently caught in one section of what I'm trying to do. Without going into specific detail, here is the basic layout. I'm given a data element, F, that holds four different types inside (each with their own purpose):

        data F = F Float Int, Int a

    and a function:

        func :: F -> F -> Q

    which takes two data elements and (by simple calculations) returns a type that is an updated version of one of the types in the first F. I now have an entire list of these elements and need to run the given function using one data element and return the type's value (not the data element). My first analysis was to use a fold:

        myfunc :: F -> [F] -> Q
        myfunc y []     = func y y  -- func deals with the same data element calls
        myfunc y (x:xs) = foldl func y (x:xs)

    However I keep getting the same error:

        Couldn't match expected type 'F' against inferred type 'Q'.
        In the first argument of 'foldl', namely 'myfunc'
        In the expression: foldl func y (x:xs)

    I apologise for such an abstract analysis of my problem, but could anyone give me an idea as to what I should do? Should I even use a fold function, or is there recursion I'm not thinking about?
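
    The error follows directly from foldl's type: the accumulator has to be the type the step function both takes first and returns, and func returns a Q while taking Fs. A short illustration of the mismatch, plus one hypothetical shape a fix could take (the combine function below is an assumption, not part of the assignment):

        foldl :: (b -> a -> b) -> b -> [a] -> b
        -- With func :: F -> F -> Q the accumulator would become a Q after one
        -- step, but func expects an F again, hence "Couldn't match F against Q".

        -- If there were a combining step of type F -> F -> F, the fold would
        -- typecheck and func could be applied once at the end:
        myfunc :: F -> [F] -> Q
        myfunc y []       = func y y
        myfunc y (x : xs) = func y (foldl combine y (x : xs))
          where
            combine :: F -> F -> F
            combine = undefined  -- placeholder for the real merging logic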

  • libcurl (c api) READFUNCTION for http PUT blocking forever

    - by Duane
    I am using libcurl for a RESTful library. I am having two problems with a PUT message; I am just trying to send some small content like "hello" via PUT.

    My READFUNCTION for PUTs blocks for a very large amount of time (minutes) when I follow the manual at curl.haxx.se and return 0 indicating that I have finished the content (on OS X). When I return something > 0 this succeeds much faster (< 1 sec). When I run this on my Linux machine (Ubuntu 10.04) the blocking appears to NEVER return when I return 0; if I change the behaviour to return the size written, libcurl appends all the data in the HTTP body, sending way more data, and it fails with a "too much data" message from the server.

    My read function is below; any help would be greatly appreciated. I am using libcurl 7.20.1.

        typedef struct {
            void *data;
            int body_size;
            int bytes_remaining;
            int bytes_written;
        } postdata;

        size_t readfunc(void *ptr, size_t size, size_t nmemb, void *stream)
        {
            if (stream) {
                postdata *ud = (postdata *)stream;
                if (ud->bytes_remaining) {
                    if (ud->body_size > size * nmemb) {
                        memcpy(ptr, ud->data + ud->bytes_written, size * nmemb);
                        ud->bytes_written += size + nmemb;
                        ud->bytes_remaining = ud->body_size - size * nmemb;
                        return size * nmemb;
                    } else {
                        memcpy(ptr, ud->data + ud->bytes_written, ud->bytes_remaining);
                        ud->bytes_remaining = 0;
                        return 0;
                    }
                }
            }
        }
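
    For comparison, a sketch of the pattern libcurl's upload examples use: tell libcurl the total size up front, copy at most size*nmemb bytes per call, return the number of bytes actually copied, and return 0 only once everything has been handed over. The struct follows the question; curl and ud stand for the already-created easy handle and the filled-in postdata instance:

        static size_t read_cb(void *ptr, size_t size, size_t nmemb, void *userp)
        {
            postdata *ud = (postdata *)userp;
            size_t room = size * nmemb;
            size_t len  = (size_t)ud->bytes_remaining < room
                              ? (size_t)ud->bytes_remaining : room;
            if (len == 0)
                return 0;                              /* done: end of request body */
            memcpy(ptr, (char *)ud->data + ud->bytes_written, len);
            ud->bytes_written   += (int)len;
            ud->bytes_remaining -= (int)len;
            return len;                                /* bytes copied this call */
        }

        /* setup (sketch): */
        curl_easy_setopt(curl, CURLOPT_UPLOAD, 1L);
        curl_easy_setopt(curl, CURLOPT_READFUNCTION, read_cb);
        curl_easy_setopt(curl, CURLOPT_READDATA, &ud);
        curl_easy_setopt(curl, CURLOPT_INFILESIZE_LARGE, (curl_off_t)ud.body_size);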

  • CodeIgniter: problem using foreach in view

    - by krike
    I have a model and controller which get some data from my database and return the following array:

        Array
        (
            [2010] => Array
                (
                    [year] => 2010
                    [months] => Array
                        (
                            [0] => stdClass Object ( [sales] => 2 [month] => Apr )
                            [1] => stdClass Object ( [sales] => 1 [month] => Nov )
                        )
                )
            [2011] => Array
                (
                    [year] => 2011
                    [months] => Array
                        (
                            [0] => stdClass Object ( [sales] => 1 [month] => Nov )
                        )
                )
        )

    It shows exactly what it should show, but the keys have different names, so I have no idea how to loop through the years using foreach in my view. Arrays are something I'm not that good at yet. :( This is the controller, if you need to know:

        function analytics()
        {
            $this->load->model('admin_model');
            $analytics = $this->admin_model->Analytics();

            foreach ($analytics as $a):
                $data[$a->year]['year']   = $a->year;
                $data[$a->year]['months'] = $this->admin_model->AnalyticsMonth($a->year);
            endforeach;

            echo "<pre style='text-align:left;'>";
            print_r($data);
            echo "</pre>";

            $data['main_content'] = 'analytics';
            $this->load->view('template_admin', $data);
        } //end of function categories()
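
    A sketch of one way to consume this: CodeIgniter turns each key of the array passed to the view into its own variable, so nesting the year-keyed array under a single key (the name analytics_data below is an assumption) keeps the numeric year keys intact, and the view can then use two nested foreach loops:

        // controller (sketch): collect the years under one key instead of
        // mixing them into $data directly
        foreach ($analytics as $a):
            $by_year[$a->year]['year']   = $a->year;
            $by_year[$a->year]['months'] = $this->admin_model->AnalyticsMonth($a->year);
        endforeach;

        $data['analytics_data'] = $by_year;
        $data['main_content']   = 'analytics';
        $this->load->view('template_admin', $data);

        // view (sketch):
        <?php foreach ($analytics_data as $year => $info): ?>
            <h3><?php echo $info['year']; ?></h3>
            <?php foreach ($info['months'] as $m): ?>
                <p><?php echo $m->month; ?>: <?php echo $m->sales; ?> sale(s)</p>
            <?php endforeach; ?>
        <?php endforeach; ?>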

  • jQuery: bind generated elements

    - by superUntitled
    Hello, thank you for taking time to look at this. I am trying to code my very first jQuery plugin and have run into my first problem. The plugin is called like this:

        <div id="kneel"></div>
        <script type="text/javascript">
            $("#kneel").zod(1, { });
        </script>

    It takes the first option (an integer) and returns HTML content that is dynamically generated by PHP. The generated content needs to be bound by a variety of functions (such as disabling form buttons, and click events that return AJAX data). The plugin looks like this (I have included the whole plugin in case that matters):

        (function( $ ){
            $.fn.zod = function(id, options) {
                var settings = {
                    'next_button_text':   'Next',
                    'submit_button_text': 'Submit'
                };

                return this.each(function() {
                    if (options) {
                        $.extend(settings, options);
                    }

                    // variables
                    var obj = $(this);

                    /* these functions contain html elements that are
                       generated by the get() function below */

                    // disable some buttons
                    $('div.bario').children('.button').attr('disabled', true);

                    // once an option is selected, enable the button
                    $('input[type="radio"]').live('click', function(e) {
                        $('div.bario').children('.button').attr('disabled', false);
                    });

                    // when a button is clicked, return some data
                    $('.button').bind('click', function(e) {
                        e.preventDefault();
                        $.getJSON('/returnSomeData.php', function(data) {
                            $('.text').html('<p>Hello: ' + data + '</p>');
                        });
                    });

                    // generate content, this is the content that needs binding...
                    $.get('http://example.com/script.php?id=' + id, function(data) {
                        $(obj).html(data);
                    });
                });
            };
        })( jQuery );

    The problem I am having is that the functions created to run on generated content are not binding to the generated content. How do I bind the content created by the get() function?
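
    A sketch of the two usual fixes for this (jQuery 1.4-era API, matching the .live() call already in the plugin): either attach the handlers inside the $.get success callback, after the markup exists, or use event delegation so the handler does not care when the elements appear. The handler name onButtonClick below is a placeholder:

        // option 1: bind after the generated markup has been inserted
        $.get('http://example.com/script.php?id=' + id, function(data) {
            $(obj).html(data);
            $(obj).find('.button').attr('disabled', true)
                  .bind('click', onButtonClick);
        });

        // option 2: delegate from the plugin's container, which exists up front
        $(obj).delegate('.button', 'click', onButtonClick);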

  • Performance improvement of client server system

    - by Tanuj
    I have a legacy client-server system where the server maintains a record of some data stored in a SQLite database. The data is related to monitoring access patterns of files stored on the server. The client application is basically a remote viewer of the data. When the client is launched, it connects to the server and gets the data from the server to display in a grid view. The data gets updated in real time on the server, and the view in the client automatically gets refreshed.

    There are two problems with the current implementation:

    1. When the database gets too big, it takes a lot of time to load the client. What are the best ways to deal with this? One option is to maintain a cache at the client side. How to best implement a cache?

    2. How can the server maintain a diff so that it only sends the diff during the refresh cycle? There can be multiple clients, and each client needs to display the latest data available on the server.

    The server is a Windows service daemon. Both the client and the server are implemented in C#.

  • Google App Engine datastore encoding?

    - by sernaferna
    I'm using the GAE datastore for a Java application and storing some text that will be in numerous languages. In my servlet, I'm first checking to see if there's any data in the datastore and, if not, I'm creating some, similar to the following:

        ArrayList<Lang> list = new ArrayList<Lang>();
        list.add(new Lang("EN", "English", 1));
        list.add(new Lang("ES", "Español", 0));
        // more languages here...

        PersistenceManager pm = PMF.get().getPersistenceManager();
        for (Lang l : list) {
            pm.makePersistent(l);
        }

    Since this is using JDO, I guess I should include the relevant parts of the Lang class too:

        @PersistenceCapable
        public class Lang {
            @PrimaryKey
            private String code;

            @Persistent
            private String name;

            @Persistent
            private int popularity;

            // getters & setters & constructors...
        }

    However, the non-ASCII characters are giving me grief. I've set my Eclipse project to use the UTF-8 encoding instead of the default Cp1252, so I think I'm okay from that perspective, but when I use the App Engine Data Viewer to look at my data, that Español entry becomes Español, and when I click on it to view it, I get a 500 Server Error. (There are some other entries with right-to-left text that don't even show up in the Data Viewer at all, but one problem at a time...) Is there anything special I can do in my code to set the character encoding, or specify to GAE that the data I'm storing is UTF-8? Or is the problem on the Eclipse side, and is there something I should be doing with my Java code?
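
    The datastore stores Java Strings as Unicode, so corruption like this usually happens before makePersistent runs or when the text is rendered. A sketch of the two settings that commonly matter in this situation (general servlet/javac settings, not a GAE-specific API, and the build snippet is an assumption about the project's setup):

        @Override
        protected void doGet(HttpServletRequest req, HttpServletResponse resp)
                throws ServletException, IOException {
            // declare the encoding before reading any parameters or writing output
            req.setCharacterEncoding("UTF-8");
            resp.setContentType("text/html; charset=UTF-8");
            // ... existing datastore code ...
        }

        // and make sure the compiler, not just the Eclipse editor, treats the
        // source as UTF-8, e.g. for an Ant build:
        //   <javac srcdir="src" destdir="war/WEB-INF/classes" encoding="UTF-8" />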

  • How to map a test onto a list of numbers

    - by Arthur Ulfeldt
    I have a function with a bug:

        user> (-> 42 int-to-bytes bytes-to-int)
        42
        user> (-> 128 int-to-bytes bytes-to-int)
        -128
        user>

    Looks like I need to handle overflow when converting back... Better write a test to make sure this never happens again. This project is using clojure.contrib.test-is, so I write:

        (deftest int-to-bytes-to-int
          (let [lots-of-big-numbers (big-test-numbers)]
            (map #(is (= (-> % int-to-bytes bytes-to-int) %))
                 lots-of-big-numbers)))

    This should be testing that converting to a seq of bytes and back again produces the original result, on a list of 10000 random numbers. Looks OK in theory? Except none of the tests ever run:

        Testing com.cryptovide.miscTest
        Ran 23 tests containing 34 assertions.
        0 failures, 0 errors.

    Why don't the tests run? What can I do to make them run?
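
    A sketch of the usual fix: map is lazy and nothing ever forces the sequence it returns, so the assertions are never evaluated; an eager iteration runs them:

        (deftest int-to-bytes-to-int
          (doseq [n (big-test-numbers)]              ; doseq is eager, map is lazy
            (is (= (-> n int-to-bytes bytes-to-int) n))))

        ;; or force the existing lazy seq explicitly:
        ;; (dorun (map #(is (= (-> % int-to-bytes bytes-to-int) %)) (big-test-numbers)))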

  • Why does using the Asynchronous Programming Model in .Net not lead to StackOverflow exceptions?

    - by uriDium
    For example, we call BeginReceive and have the callback method that BeginReceive executes when it has completed. If that callback method once again calls BeginReceive, in my mind it would be very similar to recursion. How is it that this does not cause a StackOverflowException? Example code from MSDN:

        private static void Receive(Socket client)
        {
            try
            {
                // Create the state object.
                StateObject state = new StateObject();
                state.workSocket = client;

                // Begin receiving the data from the remote device.
                client.BeginReceive(state.buffer, 0, StateObject.BufferSize, 0,
                    new AsyncCallback(ReceiveCallback), state);
            }
            catch (Exception e)
            {
                Console.WriteLine(e.ToString());
            }
        }

        private static void ReceiveCallback(IAsyncResult ar)
        {
            try
            {
                // Retrieve the state object and the client socket
                // from the asynchronous state object.
                StateObject state = (StateObject)ar.AsyncState;
                Socket client = state.workSocket;

                // Read data from the remote device.
                int bytesRead = client.EndReceive(ar);

                if (bytesRead > 0)
                {
                    // There might be more data, so store the data received so far.
                    state.sb.Append(Encoding.ASCII.GetString(state.buffer, 0, bytesRead));

                    // Get the rest of the data.
                    client.BeginReceive(state.buffer, 0, StateObject.BufferSize, 0,
                        new AsyncCallback(ReceiveCallback), state);
                }
                else
                {
                    // All the data has arrived; put it in response.
                    if (state.sb.Length > 1)
                    {
                        response = state.sb.ToString();
                    }
                    // Signal that all bytes have been received.
                    receiveDone.Set();
                }
            }
            catch (Exception e)
            {
                Console.WriteLine(e.ToString());
            }
        }

  • Java: fastest way to do random reads on huge disk file(s)

    - by cocotwo
    I've got a moderately big set of data, about 800 MB or so, that is basically a big precomputed table that I need in order to speed up some computation by several orders of magnitude (creating that file took several multi-core computers days to produce, using an optimized and multi-threaded algorithm... I do really need that file). Now that it has been computed once, that 800 MB of data is read only. I cannot hold it in memory. As of now it is one big huge 800 MB file, but splitting it into smaller files isn't a problem if that can help.

    I need to read about 32 bits of data here and there in that file a lot of the time. I don't know beforehand where I'll need to read these data: the reads are uniformly distributed.

    What would be the fastest way in Java to do my random reads in such a file or files? Ideally I should be doing these reads from several unrelated threads (but I could queue the reads in a single thread if needed). Is Java NIO the way to go?

    I'm not familiar with 'memory mapped file': I think I don't want to map the 800 MB in memory. All I want is the fastest random reads I can get to access these 800 MB of disk-based data. By the way, in case people wonder, this is not at all the same as the question I asked not long ago: http://stackoverflow.com/questions/2346722/java-fast-disk-based-hash-set
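
    A sketch of one standard option: FileChannel's positional read takes an explicit offset and does not move the channel's shared position, so several threads can read from the same channel concurrently without locking (memory-mapping is the other common choice, and it pages the file in lazily rather than loading all 800 MB into the heap). The big-endian layout here is just an example assumption:

        import java.io.IOException;
        import java.io.RandomAccessFile;
        import java.nio.ByteBuffer;
        import java.nio.channels.FileChannel;

        public class RandomReader {
            private final FileChannel channel;

            public RandomReader(String path) throws IOException {
                this.channel = new RandomAccessFile(path, "r").getChannel();
            }

            /** Reads the 4-byte big-endian int stored at the given byte offset. */
            public int readIntAt(long offset) throws IOException {
                ByteBuffer buf = ByteBuffer.allocate(4);
                while (buf.hasRemaining()) {
                    if (channel.read(buf, offset + buf.position()) < 0) {
                        throw new IOException("EOF at offset " + offset);
                    }
                }
                buf.flip();
                return buf.getInt();
            }
        }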

  • Should TcpClient be used for this scenario?

    - by Martín Marconcini
    I have to communicate with an iPhone. I have its IP address and the port (obtained via Bonjour). I need to send a header that is 0x50544833 (or similar; it's a hex number), then the size of the data (below), and then the data itself. The data is just a string that looks like this:

        <?xml version="1.0" encoding="utf-8"?>
        <!DOCTYPE plist SYSTEM "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
        <plist version="1.0">
        <dict>
            <key>clientName</key>
            <string>XXX</string>
            <key>clientService</key>
            <string>0be397e7-21f4-4d3c-89d0-cdf179a7e14d</string>
            <key>registerCode</key>
            <string>0000</string>
        </dict>
        </plist>

    The requirement also says that I must send the data in little-endian format (which I think is the default for Intel anyway). So it would be: hex_number + size_of_data + string_with_the_above_xml. I need to send that to the iPhone and read the response. What would be, according to your experience, the best way to send this data (and read the response)?
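
    A sketch of how this could look with TcpClient and NetworkStream (ipAddress, port and plistXml are placeholders, and whether the 4-byte header itself needs byte-swapping depends on how the protocol defines it):

        using (var client = new TcpClient())
        {
            client.Connect(ipAddress, port);
            using (NetworkStream stream = client.GetStream())
            {
                byte[] payload = Encoding.UTF8.GetBytes(plistXml);
                byte[] header  = BitConverter.GetBytes(0x50544833);
                byte[] length  = BitConverter.GetBytes(payload.Length);

                // BitConverter already emits little-endian on x86/x64;
                // reverse only when running on a big-endian platform.
                if (!BitConverter.IsLittleEndian)
                {
                    Array.Reverse(header);
                    Array.Reverse(length);
                }

                stream.Write(header, 0, header.Length);
                stream.Write(length, 0, length.Length);
                stream.Write(payload, 0, payload.Length);

                // read whatever the device sends back
                var buffer = new byte[4096];
                int read = stream.Read(buffer, 0, buffer.Length);
                string reply = Encoding.UTF8.GetString(buffer, 0, read);
            }
        }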

  • Google Bar Chart Time Label Interval

    - by Alex Angelini
    Hi, I am using Google Bar Chart through the Visualization API on my site: http://code.google.com/apis/visualization/documentation/gallery/imagebarchart.html. My x-axis is time, and every time interval is a time of day in the format (HH:MM). Here is my code for the graph:

        <script type="text/javascript" src="https://www.google.com/jsapi"></script>
        <script type="text/javascript">
            %s
            google.load("visualization", "1", {packages:["imagebarchart"]});
            google.setOnLoadCallback(drawChart);
            function drawChart() {
                var data = new google.visualization.DataTable();
                data.addColumn('string', 'Date');
                data.addColumn('number', 'Amount');
                data.addRows({{Rows}});
                %s
                var chart = new google.visualization.ImageBarChart(document.getElementById('chart_div'));
                chart.draw(data, {width: 900, height: 340, min: 0, isVertical: true, legend: 'none'});
            }
        </script>

    The rows of data are added later using a templating engine, but that is the gist of my chart. Because I have added my time variable as 'string', I cannot use the valueLabelsInterval option to show only every 4th label. Because I can't do that, the labels overlap and look ugly. Is there any other way to show only every other time label, or every 4th one? I read through the docs but could not see how to do it. Any help would be greatly appreciated, thanks.

  • User-defined copy ctor, and copy-ctors further down the chain - compiler bug? Programmer's brain bug?

    - by J.Colmsee
    Hi. I have a little problem, and I am not sure if it's a compiler bug or stupidity on my side. I have this struct:

        struct BulletFXData
        {
            int time_next_fx_counter;
            int next_fx_steps;
            Particle particles[2];  // this is the interesting one
            ParticleManager::ParticleId particle_id[2];
        };

    The member Particle particles[2] has a self-made kind of smart-ptr in it (a resource-counted texture class). This smart pointer has a default constructor that initializes the ptr to 0 (but that is not important). I also have another struct, containing the BulletFXData struct:

        struct BulletFX
        {
            BulletFXData data;
            BulletFXRenderFunPtr render_fun_ptr;
            BulletFXUpdateFunPtr update_fun_ptr;
            BulletFXExplosionFunPtr explode_fun_ptr;
            BulletFXLifetimeOverFunPtr lifetime_over_fun_ptr;

            BulletFX(
                BulletFXData data,
                BulletFXRenderFunPtr render_fun_ptr,
                BulletFXUpdateFunPtr update_fun_ptr,
                BulletFXExplosionFunPtr explode_fun_ptr,
                BulletFXLifetimeOverFunPtr lifetime_over_fun_ptr)
                : data(data),
                  render_fun_ptr(render_fun_ptr),
                  update_fun_ptr(update_fun_ptr),
                  explode_fun_ptr(explode_fun_ptr),
                  lifetime_over_fun_ptr(lifetime_over_fun_ptr)
            {
            }

            /*
            // USER DEFINED copy-ctor. if it's defined things go crazy
            BulletFX(const BulletFX& rhs)
                : data(data),  // this line of code seems to do a plain memory-copy without calling the right ctors
                  render_fun_ptr(render_fun_ptr),
                  update_fun_ptr(update_fun_ptr),
                  explode_fun_ptr(explode_fun_ptr),
                  lifetime_over_fun_ptr(lifetime_over_fun_ptr)
            {
            }
            */
        };

    If I use the user-defined copy-ctor, my smart-pointer class goes crazy, and it seems that the copy ctor / assignment operator aren't called as they should be. So, does this all make sense? It seems as if my own copy-ctor of struct BulletFX should do exactly what the compiler-generated one would, but it seems to forget to call the right constructors down the chain. Compiler bug? Me being stupid? Sorry about the big code; some small example could have illustrated it too, but often you guys ask for the real code, so well, here it is. :D
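
    A sketch of what the hand-written copy constructor would need to look like: in the commented-out version, each initializer names the member itself (there is no parameter called data, so data(data) "copies" the member from its own still-uninitialized self), which is why the Particle copy constructors never see rhs's data. Naming rhs explicitly restores the behaviour of the compiler-generated copy constructor:

        BulletFX(const BulletFX& rhs)
            : data(rhs.data),
              render_fun_ptr(rhs.render_fun_ptr),
              update_fun_ptr(rhs.update_fun_ptr),
              explode_fun_ptr(rhs.explode_fun_ptr),
              lifetime_over_fun_ptr(rhs.lifetime_over_fun_ptr)
        {
        }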

  • Problems with Backbone.Model callback and THIS

    - by Rev. Samuel
    I'm building a simple weather widget. The current weather conditions are read out of the National Weather Service XML file, and then I want to parse and store the relevant data in the model, but the callback for the $.ajax won't connect (the way I'm doing it).

        var Weather = Backbone.Model.extend({
            initialize: function() {
                _.bindAll(this, 'update', 'startLoop', 'stopLoop');
                this.startLoop();
            },

            startLoop: function() {
                this.update();
                this.interval = window.setInterval(_.bind(this.update, this), 1000 * 60 * 60);
            },

            stopLoop: function() {
                this.interval = window.clearInterval(this.interval);
            },

            store: function(data) {
                this.set({
                    icon: $(data).find('icon_url_name').text()
                });
            },

            update: function() {
                $.ajax({
                    type: 'GET',
                    url: 'xml/KROC.xml',
                    datatype: 'xml'
                })
                .done(function(data) {
                    var that = this;
                    that.store($(data).find('current_observation')[0]);
                });
            }
        });

        var weather = new Weather();

    The data is read correctly, but I can't get the done function of the callback to call the store function. (I would be happy if the "done" would just parse and then do "this.set".) Thanks in advance for your help.
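
    A sketch of the usual fix: inside the .done callback, this is no longer the model, so var that = this captures the wrong object; capture the model before the call instead (or use _.bind or $.ajax's context option). Note also that jQuery's option is spelled dataType, not datatype:

        update: function() {
            var that = this;  // capture the model *outside* the callback
            $.ajax({ type: 'GET', url: 'xml/KROC.xml', dataType: 'xml' })
                .done(function(data) {
                    that.store($(data).find('current_observation')[0]);
                });
        }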

  • what is the best way to optimize my json on an asp.net-mvc site

    - by ooo
    I am currently using jqGrid on an ASP.NET MVC site, and we have a pretty slow network (internal application), and it seems to be taking the grid a long time to load (the issue is both the network and the parsing/rendering). I am trying to determine how to minimize what I send over to the client to make it as fast as possible. Here is a simplified view of my controller action to load data into the grid:

        [AcceptVerbs(HttpVerbs.Get)]
        public ActionResult GridData1(GridData args)
        {
            var paginatedData = applications.GridPaginate(args.page ?? 1, args.rows ?? 10, i => new
            {
                i.Id,
                Name = "<div class='showDescription' id='" + i.id + "'>" + i.Name + "</div>",
                MyValue = GetImageUrl(_map, i.value, "star"),
                ExternalId = string.Format("<a href=\"{0}\" target=\"_blank\">{1}</a>",
                                           Url.Action("Link", "Order", new { id = i.id }), i.Id),
                i.Target,
                i.Owner,
                EndDate = i.EndDate,
                Updated = "<div class='showView' aitId='" + i.AitId + "'>" + GetImage(i.EndDateColumn, "star") + "</div>",
            });
            return Json(paginatedData);
        }

    So I am building up JSON data (I have about 200 records of the above) and sending it back to the GUI to put in the jqGrid. The one thing I can think of is repeated data: in some of the JSON fields I am appending HTML around the raw data, and it is the same HTML on every record. It seems like it would be more efficient if I could just send the data and "append" the HTML around it on the client side. Is this possible? Then I would just be sending the actual data over the wire and have the client side put together the rest of the HTML (the divs, etc.). Also, if there are any other suggestions on how I can minimize the size of my messages, that would be great. I guess at some point these solutions will increase the client-side load, but it may be worth it to cut down on network traffic.
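
    Yes, jqGrid can wrap raw values client-side with a custom formatter in colModel, so the action can return plain values only. A sketch (column and property names follow the question, and the link URL is a guess at the routed form of Url.Action; treat it as an outline rather than a drop-in):

        colModel: [
            { name: 'Name', formatter: function (cellvalue, options, rowObject) {
                return "<div class='showDescription' id='" + rowObject.Id + "'>" + cellvalue + "</div>";
            }},
            { name: 'ExternalId', formatter: function (cellvalue, options, rowObject) {
                return "<a href='/Order/Link/" + cellvalue + "' target='_blank'>" + cellvalue + "</a>";
            }}
            // ...other columns unchanged
        ]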

  • Laravel - Mail class Exception

    - by Christian Giupponi
    I need to send email within my app, and this is my code:

        if( $agent->save() )
        {
            // Prepare the email to send with the login details
            $data = [
                'nome'     => $input['nome'],
                'cognome'  => $input['cognome'],
                'email'    => $input['email'],
                'password' => $input['password']
            ];

            // WARNING
            // Remove this in production; it only pretends to send the email
            Mail::pretend();

            // Fetch the template and pass the data to the function
            Mail::send('emails.agents.registration', $data, function($message) use ($data) {
                $message->to( $data['email'], $data['nome'].' '.$data['cognome'] )->subject('Benvenuto!');
            });

            return Redirect::action('admin.agents.index')->with('positive_flash_message', 'Agente inserito correttamente.');
        }

    As you can see, I have used Mail::pretend to avoid actually sending the email in development. The problem is that I get this error every time I try to send an email:

        Undefined property: Illuminate\Mail\Message::$email
        (View: /var/www/progetti/app/views/emails/agents/registration.blade.php)

    And this is my blade view:

        Email: {{ $message->email }}
        Password: {{ $message->password }}

    What's wrong with $message?
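
    A sketch of the fix: Laravel always injects a $message variable (the Illuminate\Mail\Message instance used for headers, attachments, etc.) into mail views, so it cannot carry your own data; the keys of the $data array become ordinary view variables instead:

        {{-- app/views/emails/agents/registration.blade.php (sketch) --}}
        Email: {{ $email }}
        Password: {{ $password }}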

  • Should I be using Expression Blend to design really dynamic UIs?

    - by Robert Rossney
    My company's product is, at its core, a framework for developing metadata-driven UIs. I don't know how to characterize it less succinctly than that, and hope I won't need to for purposes of this question, but we'll see. I've been trying to come up to speed on WPF, and have been building UI prototypes here and there, and recently I decided to see if I could use Expression Blend to help with the design of these UIs. And I'm pretty mystified at this point.

    It appears to me as though Expression Blend is designed with the expectation that you already know all of the objects that are going to be present in the UI at design time. But our program generates these objects dynamically at runtime. For instance, a data row might be presented in a horizontal StackPanel containing alternating TextBlocks (for captions) and TextBoxes (for data fields). The number of these objects depends on metadata about the number of columns in the data row.

    I can, pretty readily, write code that runs through a metadata record and populates a StackPanel dynamically, setting up the binding of all of the controls to properties in either the data or metadata. (A TextBox's Width might be bound to metadata, while its Text is bound to data.) But I can't even begin to figure out how to do something like this in Expression Blend.

    I can manually create all these controls, so that I have a set of controls that I can apply styles to and work out the visual design of the app, but it's really a pain to do this. I can write code that goes through my data model and emits XAML for all these controls, I suppose, and then copy and paste it. But I'm going to feel really stupid if it turns out there's a way to do this sort of thing in Expression Blend and I've dropped back and punted because I'm too dim to figure out the right way to think of it. Is this enough information for someone to try formulating an answer?

  • Convert an image to a monochrome byte array

    - by Scott Chamberlain
    I am writing a library to interface C# with the EPL2 printer language. One feature I would like to implement is printing images. The specification doc says:

        p1   = Width of graphic.
               Width of graphic in bytes. Eight (8) dots = one (1) byte of data.
        p2   = Length of graphic.
               Length of graphic in dots (or print lines).
        Data = Raw binary data without graphic file formatting. Data must be in bytes.
               Multiply the width in bytes (p1) by the number of print lines (p2) for
               the total amount of graphic data. The printer automatically calculates
               the exact size of the data block based upon this formula.

    I plan on my source image being a 1-bit-per-pixel BMP file, already scaled to size. I just don't know how to get it from that format into a byte[] for me to send off to the printer. I tried ImageConverter.ConvertTo(Object, Type); it succeeds, but the array it outputs is not the correct size, and the documentation is very lacking on how the output is formatted. My current test code:

        Bitmap i = (Bitmap)Bitmap.FromFile("test.bmp");
        ImageConverter ic = new ImageConverter();
        byte[] b = (byte[])ic.ConvertTo(i, typeof(byte[]));

    Any help is greatly appreciated, even if it is in a totally different direction.
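
    A sketch of one approach: ImageConverter serializes the whole BMP (headers included), whereas the printer wants bare pixel rows. Bitmap.LockBits on a Format1bppIndexed bitmap exposes exactly those rows, padded to a stride, so copying (width + 7) / 8 bytes per row gives a p1 x p2 block. Depending on the palette and the printer's convention, the bits may still need inverting before sending:

        Bitmap bmp = (Bitmap)Image.FromFile("test.bmp");
        int bytesPerRow = (bmp.Width + 7) / 8;               // p1
        byte[] result = new byte[bytesPerRow * bmp.Height];  // p1 * p2 bytes total

        BitmapData bd = bmp.LockBits(
            new Rectangle(0, 0, bmp.Width, bmp.Height),
            ImageLockMode.ReadOnly,
            PixelFormat.Format1bppIndexed);
        try
        {
            for (int y = 0; y < bmp.Height; y++)
            {
                // row address = Scan0 + y * Stride (Stride may exceed bytesPerRow)
                IntPtr row = new IntPtr(bd.Scan0.ToInt64() + (long)y * bd.Stride);
                Marshal.Copy(row, result, y * bytesPerRow, bytesPerRow);
            }
        }
        finally
        {
            bmp.UnlockBits(bd);
        }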

  • How to refactor this duplicated LINQ code?

    - by benrick
    I am trying to figure out how to refactor this LINQ code nicely. This code, and other similar code, repeats within the same file as well as in other files. Sometimes the data being manipulated is identical and sometimes the data changes while the logic remains the same. Here is an example of duplicated logic operating on different fields of different objects:

        public IEnumerable<FooDataItem> GetDataItemsByColor(IEnumerable<BarDto> dtos)
        {
            double totalNumber = dtos.Where(x => x.Color != null).Sum(p => p.Number);
            return from stat in dtos
                   where stat.Color != null
                   group stat by stat.Color into gr
                   orderby gr.Sum(p => p.Number) descending
                   select new FooDataItem
                   {
                       Color = gr.Key,
                       NumberTotal = gr.Sum(p => p.Number),
                       NumberPercentage = gr.Sum(p => p.Number) / totalNumber
                   };
        }

        public IEnumerable<FooDataItem> GetDataItemsByName(IEnumerable<BarDto> dtos)
        {
            double totalData = dtos.Where(x => x.Name != null).Sum(v => v.Data);
            return from stat in dtos
                   where stat.Name != null
                   group stat by stat.Name into gr
                   orderby gr.Sum(v => v.Data) descending
                   select new FooDataItem
                   {
                       Name = gr.Key,
                       DataTotal = gr.Sum(v => v.Data),
                       DataPercentage = gr.Sum(v => v.Data) / totalData
                   };
        }

    Does anyone have a good way of refactoring this?
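
    A sketch of one way to factor out the shared shape: pass the grouping key, the numeric value and the result construction in as delegates, and keep the where/group/order/percentage logic in a single helper. The helper and parameter names are made up for illustration, and the numeric FooDataItem properties are assumed to be doubles:

        private static IEnumerable<FooDataItem> GetDataItems<TKey>(
            IEnumerable<BarDto> dtos,
            Func<BarDto, TKey> key,
            Func<BarDto, double> value,
            Func<TKey, double, double, FooDataItem> create)
        {
            var valid = dtos.Where(x => key(x) != null).ToList();
            double total = valid.Sum(value);
            return valid.GroupBy(key)
                        .OrderByDescending(g => g.Sum(value))
                        .Select(g => create(g.Key, g.Sum(value), g.Sum(value) / total));
        }

        public IEnumerable<FooDataItem> GetDataItemsByColor(IEnumerable<BarDto> dtos)
        {
            return GetDataItems(dtos, x => x.Color, p => p.Number,
                (key, sum, pct) => new FooDataItem
                {
                    Color = key,
                    NumberTotal = sum,
                    NumberPercentage = pct
                });
        }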
