Search Results

Search found 91220 results on 3649 pages for 'data type equivalent'.


  • jQuery.post not working when using data type json

    - by swift
    I have been trying to utilize JSON in this jQuery.post because I need to return two values from my executed PHP. The code was working when I was not using JSON. I need to check whether a promo code entered is valid for a particular broker. The two values I need back are the message telling the user whether or not the code is valid, and a value for a hidden field that will be used later when updating the database. The jQuery.post does not seem to be firing at all, but the code directly above it (the ajax-loader.gif) is working. I did rewrite the whole thing at one point using jQuery.ajax and had issues there too. Granted, I have probably been looking at this too long and have rewritten it too many times, but any help is greatly appreciated! Here's the jQuery.post:

        <!-- Below script checks the promo code against the database -->
        <script type="text/javascript">
        jQuery(document).ready(function() {
            jQuery("#promocode").keyup(function (e) {
                // removes spaces from the promo code
                jQuery(this).val(jQuery(this).val().replace(/\s/g, ''));
                var promocode = jQuery(this).val();
                var brokerdealerid = document.getElementById("BrokerDealerId").value;
                if (promocode.length > 0) {
                    jQuery("#promo-result").html('<img src="../imgs/ajax-loader.gif" />');
                    jQuery.post('../check_promocode.php',
                        {promocode: promocode, brokerdealerid: brokerdealerid},
                        function(data) {
                            $("#promo-result").html(data.promoresult);
                            $("#promo-result-valid").html(data.promovalid);
                        },
                        "json");
                }
            });
        });
        </script>
        <!-- End script for checking promo code against database -->

    And here's the relevant code from check_promocode.php:

        // sanitize incoming parameters
        if (isset($_POST['brokerdealerid']))
            $brokerdealerid = sanitizeMySQL($_POST['brokerdealerid']);
        $promocode = sanitizeMySQL($promocode);

        // check promocode in db
        $results = mysql_query("SELECT PromotionCodeIdentifier FROM PromotionCode
            WHERE PromotionCodeIdentifier='$promocode'
            AND BrokerDealerId='$brokerdealerid'
            AND PromotionCodStrtDte <= CURDATE()
            AND PromotionCodExpDte >= CURDATE()");

        // total matching records; if more than 0, the promocode is valid
        $PromoCode_exist = mysql_num_rows($results);

        if ($PromoCode_exist) {
            echo json_encode(array("promoresult"=>"Promotion Code Valid", "promovalid"=>"Y"));
            exit();
        } else {
            echo json_encode(array("promoresult"=>"Invalid Promotion Code", "promovalid"=>"N"));
            exit();
        }
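
    Two things worth ruling out (assumptions, since the rest of the page isn't shown): with a dataType of "json", jQuery silently discards responses that aren't valid JSON or that arrive with the wrong content type, and the success callback above uses $ while everything else uses jQuery, which fails quietly if $ was released via noConflict. A sketch that fixes both and surfaces errors:

        // client: jQuery.ajax exposes the error callback that jQuery.post hides
        jQuery.ajax({
            type: 'POST',
            url: '../check_promocode.php',
            data: {promocode: promocode, brokerdealerid: brokerdealerid},
            dataType: 'json',
            success: function (data) {
                jQuery("#promo-result").html(data.promoresult);
                jQuery("#promo-result-valid").html(data.promovalid);
            },
            error: function (xhr, status) { alert('request failed: ' + status); }
        });

    and in check_promocode.php, before either echo:

        header('Content-Type: application/json');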

    Read the article

  • Java SortedMap to Scala TreeMap

    - by Dave
    I'm having trouble converting a Java SortedMap into a Scala TreeMap. The SortedMap comes from deserialization and needs to be converted into a Scala structure before being used.

    Some background, for the curious: the serialized structure is written through XStream, and on deserializing I register a converter that says anything assignable to SortedMap[Comparable[_], _] should be given to me. So my convert method gets called with an Object that I can safely cast, because I know it's of type SortedMap[Comparable[_], _]. That's where it gets interesting. Here's some sample code that might help explain it:

        // a conversion from Comparable to Ordering
        scala> implicit def comparable2ordering[A <: Comparable[A]](x: A): Ordering[A] = new Ordering[A] {
             |   def compare(x: A, y: A) = x.compareTo(y)
             | }
        comparable2ordering: [A <: java.lang.Comparable[A]](x: A)Ordering[A]

        // jm is how I see the map in the converter -- just as an Object. I know
        // the key is of type Comparable[_]
        scala> val jm: Object = new java.util.TreeMap[Comparable[_], String]()
        jm: java.lang.Object = {}

        // It's safe to cast, as the converter only gets called for SortedMap[Comparable[_],_]
        scala> val b = jm.asInstanceOf[java.util.SortedMap[Comparable[_], _]]
        b: java.util.SortedMap[java.lang.Comparable[_], _] = {}

        // Now I want to convert this to a TreeMap
        scala> collection.immutable.TreeMap() ++ (for(k <- b.keySet) yield { (k, b.get(k)) })
        <console>:15: error: diverging implicit expansion for type Ordering[A]
        starting with method Tuple9 in object Ordering
               collection.immutable.TreeMap() ++ (for(k <- b.keySet) yield { (k, b.get(k)) })
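
    A sketch of one way around the diverging-implicit error (untested against the exact setup above; it pins the existential key type, which is the part the implicit search chokes on): supply the Ordering to TreeMap explicitly instead of leaving it to inference.

        // pin the key to Comparable[Any] and hand TreeMap its Ordering directly
        val ord = new Ordering[Comparable[Any]] {
          def compare(x: Comparable[Any], y: Comparable[Any]) = x.compareTo(y)
        }
        val m = jm.asInstanceOf[java.util.SortedMap[Comparable[Any], Any]]
        var tm = collection.immutable.TreeMap.empty[Comparable[Any], Any](ord)
        val it = m.entrySet.iterator
        while (it.hasNext) {
          val e = it.next()
          tm += (e.getKey -> e.getValue)
        }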

    Read the article

  • d:DesignData issue, Visual Studio 2010 can't build after adding sample design data with Expression Blend

    - by Valko
    Hi, the VS 2010 solution and Silverlight project build fine. Then:

    1. I open the MyView.xaml view in Expression Blend 4
    2. I add sample data from a class (I use my own class defined in the same project)

    After I add the new sample design data with Expression Blend 4, everything looks fine: you see the added sample data in EB 4, and you also see the data in the VS 2010 designer. Close EB 4, and the next VS 2010 build gives me these errors:

        Error 7   XAML Namespace http://schemas.microsoft.com/expression/blend/2008 is not resolved.  C:\Code\source\...myview.xaml

    and:

        Error 12  Object reference not set to an instance of an object. ... TestSampleData.xaml

    When I open TestSampleData.xaml, I see that the namespace for the class used to define the sample data is not recognized. However, this namespace and the class itself exist in the same project! If I remove the design data from MyView.xaml:

        d:DataContext="{d:DesignData /SampleData/TestSampleData.xaml}"

    it builds fine, and the namespace in TestSampleData.xaml is recognized this time?? And then if I add

        d:DataContext="{d:DesignData /SampleData/TestSampleData.xaml}"

    back, I again see the sample data in the VS 2010 designer, but the next build fails, and again the studio can't find the namespace in my TestSampleData.xaml containing the sample data. That cycle is driving me crazy. Am I missing something here? Is it not possible to have the class defining the sample design data in the same project as the MyView.xaml view?? cheers Valko

    Read the article

  • SQL Server Clustered Index: (Physical) Data Page Order

    - by scherand
    I am struggling to understand what a clustered index in SQL Server 2005 is. I read the MSDN article Clustered Index Structures (among other things) but I am still unsure whether I understand it correctly.

    The (main) question is: what happens if I insert a row (with a "low" key) into a table with a clustered index? The above-mentioned MSDN article states:

        The pages in the data chain and the rows in them are ordered on the value of the clustered index key.

    And "Using Clustered Indexes", for example, states:

        For example, if a record is added to the table that is close to the beginning of the sequentially ordered list, any records in the table after that record will need to shift to allow the record to be inserted.

    Does this mean that if I insert a row with a very "low" key into a table that already contains a gazillion rows, literally all rows are physically shifted on disk? I cannot believe that. This would take ages, no?

    Or is it rather (as I suspect) that there are two scenarios, depending on how "full" the first data page is:

    A) If the page has enough free space to accommodate the record, it is placed into the existing data page, and data might be (physically) reordered within that page.

    B) If the page does not have enough free space for the record, a new data page is created (anywhere on the disk!) and "linked" to the front of the leaf level of the B-tree.

    This would then mean the "physical order" of the data is restricted to the "page level" (i.e. within a data page) but not to pages residing on consecutive blocks of the physical hard drive. The data pages are then just linked together in the correct order. Or, formulated in an alternative way: if SQL Server needs to read the first N rows of a table that has a clustered index, it can read data pages sequentially (following the links), but these pages are not (necessarily) block-wise in sequence on disk (so the disk head has to move "randomly").

    How close am I? :)

    Read the article

  • storing/retrieving data for graph with long continuous stretches

    - by james
    I have a large 2-dimensional data set which I would like to graph. The graph is displayed in a browser, and the data is retrieved via Ajax. Long stretches of this graph will be continuous -- e.g., for x=0 through x=1000, y=9; then for x=1001 through x=1100, y=80; etc.

    The approach I'm considering is to send (from the server) and store (in the browser) only the points where the data changes. So for the example above, I would store data[0] = 9, then data[1001] = 80. Then given x=999, for example, retrieving data[999] would actually look up data[0].

    The problem that arises is finding a dictionary-like data structure which behaves like this. The approach I'm considering is to store the data in a traditional dictionary object, then also maintain a sorted array of keys for that object. When given x=999, it would look at the midpoint of this array, determine whether the nearest lower key is left or right of that midpoint, then repeat with the correct subsection, etc. Does anyone have thoughts on this problem/approach?
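
    A sketch of the lookup described above (plain JavaScript; data is the sparse dictionary and keys its sorted key array): binary-search for the greatest key that is <= x, then read the dictionary at that key.

        function lookup(keys, data, x) {
            var lo = 0, hi = keys.length - 1, best = null;
            while (lo <= hi) {
                var mid = (lo + hi) >> 1;       // midpoint of the current subsection
                if (keys[mid] <= x) { best = keys[mid]; lo = mid + 1; }
                else { hi = mid - 1; }
            }
            return best === null ? undefined : data[best];
        }

        // lookup([0, 1001], {0: 9, 1001: 80}, 999) returns 9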

    Read the article

  • Dojox Datagrid contains data, but shows up as empty

    - by Vivek
    I'd really appreciate any help on this. There is a Dojox DataGrid that I'm creating programmatically and supplying with JSON data. As of now, I'm creating this data within JavaScript itself. Please refer to the code sample below.

        var upgradeStageStructure = [{
            cells: [
                { field: "stage", name: "Stage", width: "50%", styles: 'text-align: left;' },
                { field: "status", name: "Status", width: "50%", styles: 'text-align: left;' }
            ]
        }];

        var upgradeStageData = [
            {id:1, stage: "Preparation", status: "Complete"},
            {id:2, stage: "Formatting", status: "Complete"},
            {id:3, stage: "OS Installation", status: "Complete"},
            {id:4, stage: "OS Post-Installation", status: "In Progress"},
            {id:5, stage: "Application Installation", status: "Not Started"},
            {id:6, stage: "Application Post-Installation", status: "Not Started"}
        ];

        var stagestore = new dojo.data.ItemFileReadStore({
            data: {identifier: "id", items: upgradeStageData}
        });

        var upgradeStatusGrid = new dojox.grid.DataGrid({
            autoHeight: true,
            style: "width:400px;padding:0em;margin:0em;",
            store: stagestore,
            clientSort: false,
            rowSelector: '20px',
            structure: upgradeStageStructure,
            columnReordering: false,
            selectable: false,
            singleClickEdit: false,
            selectionMode: 'none',
            loadingMessage: 'Loading Upgrade Stages',
            noDataMessage: 'There is no data',
            errorMessage: 'Failed to load Upgrade Status'
        });

        dojo.byId('progressIndicator').innerHTML = '';
        dojo.byId('progressIndicator').appendChild(upgradeStatusGrid.domNode);
        upgradeStatusGrid.startup();

    The problem is that I am not seeing anything within the grid upon display (no headers, no data). But I know for sure that the data in the grid does exist and the grid is properly initialized, because I called alert(grid.domNode.innerHTML); and the resulting HTML does show a table containing header rows and the above data. (The original post links to an image illustrating what I see when I display the page; I can't post images inline since my account is new here.) As you may notice there, the 6 rows for the 6 pieces of data I created are present, but the grid is a mess. Please help out if you think you know what could be going wrong. Thanks in advance, Viv
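
    One thing worth ruling out first (a guess, since the page's <head> isn't shown): a programmatically created DataGrid renders as an unstyled mess -- headers and rows collapse -- when the grid stylesheets and theme class are missing. The paths below assume a stock Dojo layout:

        <link rel="stylesheet" href="dojoroot/dojox/grid/resources/Grid.css" />
        <link rel="stylesheet" href="dojoroot/dojox/grid/resources/tundraGrid.css" />
        <!-- and the theme class on the body -->
        <body class="tundra">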

    Read the article

  • rpy2: Converting a data.frame to a numpy array

    - by Mike Dewar
    I have a data.frame in R. It contains a lot of data: gene expression levels from many (125) arrays. I'd like the data in Python, due mostly to my incompetence in R and the fact that this was supposed to be a 30-minute job. I would like the following code to work. To understand this code, know that the variable path contains the full path to my data set which, when loaded, gives me a variable called immgen. Know that immgen is an object (a Bioconductor ExpressionSet object) and that exprs(immgen) returns a data frame with 125 columns (experiments) and tens of thousands of rows (named genes).

        robjects.r("load('%s')" % path) # loads immgen
        e = robjects.r['data.frame']("exprs(immgen)")
        expression_data = np.array(e)

    This code runs, but expression_data is simply array([[1]]). I'm pretty sure that e doesn't represent the data frame generated by exprs(), due to things like:

        In [40]: e._get_ncol()
        Out[40]: 1

        In [41]: e._get_nrow()
        Out[41]: 1

    But then again, who knows? Even if e did represent my data.frame, that it doesn't convert straight to an array would be fair enough -- a data frame has more in it than an array (rownames and colnames), so maybe life shouldn't be this easy. However, I still can't work out how to perform the conversion. The documentation is a bit too terse for me, though my limited understanding of the headings in the docs implies that this should be possible. Anyone have any thoughts?
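
    A sketch of what may be going wrong (an interpretation, based on how robjects dispatches calls): robjects.r['data.frame']("exprs(immgen)") calls R's data.frame() with the *string* "exprs(immgen)" as its argument, producing a 1x1 data frame holding that string -- which matches the ncol/nrow of 1 seen above. Evaluating the expression in R instead should return the real matrix:

        import numpy as np
        import rpy2.robjects as robjects

        robjects.r("load('%s')" % path)   # loads immgen, as before
        e = robjects.r('exprs(immgen)')   # evaluated in R: the expression matrix
        expression_data = np.array(e)     # tens of thousands of rows x 125 columns

    Depending on the rpy2 version, the last line may need rpy2's numpy conversion layer (rpy2.robjects.numpy2ri) rather than a plain np.array call.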

    Read the article

  • Using Sub-Types And Return Types in Scala to Process a Generic Object Into a Specific One

    - by pr1001
    I think this is about covariance, but I'm weak on the topic... I have a generic Event class used for things like database persistence, let's say like this:

        class Event(
            subject: Long,
            verb: String,
            directobject: Option[Long],
            indirectobject: Option[Long],
            timestamp: Long) {
          def getSubject = subject
          def getVerb = verb
          def getDirectObject = directobject
          def getIndirectObject = indirectobject
          def getTimestamp = timestamp
        }

    However, I have lots of different event verbs, and I want to use pattern matching and such with these different event types, so I will create some corresponding case classes:

        trait EventCC
        case class Login(user: Long, timestamp: Long) extends EventCC
        case class Follow(
          follower: Long,
          followee: Long,
          timestamp: Long
        ) extends EventCC

    Now, the question is, how can I easily convert generic Events to the specific case classes? This is my first stab at it:

        def event2CC[T <: EventCC](event: Event): T = event.getVerb match {
          case "login" => Login(event.getSubject, event.getTimestamp)
          case "follow" => Follow(
            event.getSubject,
            event.getDirectObject.getOrElse(0),
            event.getTimestamp
          )
          // ...
        }

    Unfortunately, this is wrong:

        <console>:11: error: type mismatch;
         found   : Login
         required: T
               case "login" => Login(event.getSubject, event.getTimestamp)
                                    ^
        <console>:12: error: type mismatch;
         found   : Follow
         required: T
               case "follow" => Follow(event.getSubject, event.getDirectObject.getOrElse(0), event.getTimestamp)

    Could someone with greater type-fu than me explain 1) if what I want to do is possible (or reasonable, for that matter), and 2) if so, how to fix event2CC? Thanks!
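
    For what it's worth, a sketch of one way to reconcile the types (this swaps the generic signature for the trait type rather than preserving T): because the verb is only known at runtime, the compiler cannot prove that the Login it builds is the T the caller asked for. Returning EventCC and letting callers pattern match keeps everything statically checked:

        def event2CC(event: Event): EventCC = event.getVerb match {
          case "login"  => Login(event.getSubject, event.getTimestamp)
          case "follow" => Follow(
            event.getSubject,
            event.getDirectObject.getOrElse(0L),
            event.getTimestamp
          )
          // ...
        }

        // callers match on the result:
        // event2CC(e) match { case l: Login => ...; case f: Follow => ... }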

    Read the article

  • .Net xsd.exe tool doesn't generate all types

    - by Mrchief
    For some reason, the MS .NET (v3.5) tool xsd.exe doesn't generate types when they are not used inside any element. For example, in the XSD file below (I threw in the complex element to avoid the warning "Warning: cannot generate classes because no top-level elements with complex type were found."):

        <?xml version="1.0" encoding="utf-8"?>
        <xs:schema targetNamespace="http://tempuri.org/XMLSchema.xsd"
            elementFormDefault="qualified"
            xmlns="http://tempuri.org/XMLSchema.xsd"
            xmlns:mstns="http://tempuri.org/XMLSchema.xsd"
            xmlns:xs="http://www.w3.org/2001/XMLSchema">
          <xs:simpleType name="EnumTest">
            <xs:restriction base="xs:string">
              <xs:enumeration value="item1" />
              <xs:enumeration value="item2" />
              <xs:enumeration value="item3" />
            </xs:restriction>
          </xs:simpleType>
          <xs:complexType name="myComplexType">
            <xs:attribute name="Name" use="required" type="xs:string"/>
          </xs:complexType>
          <xs:element name="myElem" type="myComplexType"></xs:element>
        </xs:schema>

    When I run this through xsd.exe using

        xsd /c xsdfile.xsd

    I don't see EnumTest in the generated .cs file. Note: even though I don't use the enum here, in my actual project I have cases like this where we send the enum's string value as output. How can I force the xsd tool to include these? Or should I switch to some other tool? I work in Visual Studio 2008.
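
    One workaround worth trying (an assumption based on how xsd.exe walks the schema, not documented behavior): reference the otherwise-unused simple type from a throwaway top-level element so the tool has a reason to emit it. The element name here is illustrative:

        <!-- dummy element whose only purpose is to pull EnumTest into the output -->
        <xs:element name="enumTestDummy" type="EnumTest" />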

    Read the article

  • How to find the latest row for each group of data

    - by Jason
    Hi All, I have a tricky problem that I'm trying to find the most effective method to solve. Here's a simplified version of my view structure.

    Table: Audits

        AuditID | PublicationID | AuditEndDate | AuditStartDate
        1       | 3             | 13/05/2010   | 01/01/2010
        2       | 1             | 31/12/2009   | 01/10/2009
        3       | 3             | 31/03/2010   | 01/01/2010
        4       | 3             | 31/12/2009   | 01/10/2009
        5       | 2             | 31/03/2010   | 01/01/2010
        6       | 2             | 31/12/2009   | 01/10/2009
        7       | 1             | 30/09/2009   | 01/01/2009

    There are 3 queries that I need from this. One to get all the data. The next to get only the history data (that is, everything but the latest item by AuditEndDate). And the last to obtain only the latest item (by AuditEndDate). There's an added layer of complexity in that I have a date restriction (this is on a per user/group basis) where certain user groups can only see between certain dates. You'll notice this in the where clauses as AuditEndDate <= blah and AuditStartDate >= blah.

    For each publication, select all the data available:

        select * from Audits
        where auditEndDate <= '31/03/10' and AuditStartDate >= '06/06/2009';

    For each publication, select all the data but exclude the latest item (by AuditEndDate):

        select * from Audits
        left join (select AuditId as aid, publicationID as pid, max(auditEndDate) as pend
                   from Audit
                   where auditenddate <= '31/03/2009' /* user restrict */
                   group by pid) Ax on Ax.pid = Audit.pubid
        where pend != Audits.auditenddate
          and auditEndDate <= '31/03/10' and AuditStartDate >= '06/06/2009' /* user restrict */

    For each publication, select only the latest item (by AuditEndDate):

        select * from Audits
        left join (select AuditId as aid, publicationID as pid, max(auditEndDate) as pend
                   from Audit
                   where auditenddate <= '31/03/2009' /* user restrict */
                   group by pid) Ax on Ax.pid = Audit.pubid
        where pend = Audits.auditenddate
          and auditEndDate <= '31/03/10' and AuditStartDate >= '06/06/2009' /* user restrict */

    So at the moment, queries 1 and 3 work fine, but query 2 just returns all the data instead of the restricted set. Can anyone help me? Thanks, Jason
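
    One detail that stands out (offered as a guess, since the real view may differ): the subquery restricts on '31/03/2009' while the outer query restricts on '31/03/10', so pend is computed over a different date window than the rows it is compared against, which would make the inequality in query 2 match almost everything. A sketch of query 2 with the two restrictions aligned (untested; table and column spellings normalized):

        select a.*
        from Audits a
        join (select PublicationID as pid, max(AuditEndDate) as pend
              from Audits
              where AuditEndDate <= '31/03/2010'   -- same user restriction as the outer query
              group by PublicationID) ax on ax.pid = a.PublicationID
        where a.AuditEndDate <> ax.pend
          and a.AuditEndDate <= '31/03/2010'
          and a.AuditStartDate >= '06/06/2009';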

    Read the article

  • Multiple calls to data service from SL3?

    - by Chris
    I have an SL3 app that makes asynchronous calls to a data service. Basically, there is a treeview that is bound to a collection of objects. The idea is that as a user selects a specific treeview item, a call is made to the data service, with a parameter specific to the selected item being passed to the corresponding web method in the data service. The data service returns data to the SL3 client, and the client presents the data to the user. This works well.

    The problem is that when users navigate through the treeview using the arrow keys on their keyboard, they could press the down-arrow key, for example, 10 times, and 10 calls will be made to the data service; each of the 10 results will then be displayed to the user momentarily, finishing with the data for the most recently selected treeview item.

    So, on to the question: how can I put in some form of delay, so that when someone navigates quickly through the treeview and then stops at a certain item, only then is a call made to the data service? Thanks for any suggestions. Chris
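
    A sketch of one common pattern for this (names and the 400 ms interval are illustrative, not from the original post): restart a DispatcherTimer on every selection change, and only call the service when the timer actually fires, i.e. when the selection has been stable for the full interval.

        // C# sketch -- debounce SelectedItemChanged with a DispatcherTimer
        private DispatcherTimer _timer;
        private object _pendingItem;

        private void Tree_SelectedItemChanged(object sender,
            RoutedPropertyChangedEventArgs<object> e)
        {
            _pendingItem = e.NewValue;
            if (_timer == null)
            {
                _timer = new DispatcherTimer { Interval = TimeSpan.FromMilliseconds(400) };
                _timer.Tick += (s, args) =>
                {
                    _timer.Stop();
                    CallDataService(_pendingItem);   // hypothetical service call
                };
            }
            _timer.Stop();   // restart the delay on every keypress
            _timer.Start();
        }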

    Read the article

  • memcache is not storing data across requests

    - by morpheous
    I am new to using memcache, so I may be doing something wrong. I have written a wrapper class around memcache. The wrapper class has only static methods, so it is a quasi-singleton. The class looks something like this:

        class myCache
        {
            private static $memcache = null;
            private static $initialized = false;

            public static function init()
            {
                if (self::$initialized)
                    return;
                self::$memcache = new Memcache();
                if (self::configure()) // connects to daemon
                {
                    self::store('foo', 'bar');
                }
                else
                    throw ConnectionError('I barfed');
            }

            public static function store($key, $data, $flag=MEMCACHE_COMPRESSED, $timeout=86400)
            {
                if (self::$memcache->get($key) !== false)
                    return self::$memcache->replace($key, $data, $flag, $timeout);
                return self::$memcache->set($key, $data, $flag, $timeout);
            }

            public static function fetch($key)
            {
                return self::$memcache->get($key);
            }
        }

        // in my index.php file, I use the class like this
        require_once('myCache.php');
        myCache::init();
        echo 'Stored value is: ' . myCache::fetch('foo');

    The problem is that the myCache::init() method is being executed in full every time a page is requested. I then remembered that static variables do not maintain state across page requests. So I decided instead to store the flag that indicates whether the server contains the startup data (for our purposes, the variable 'foo' with value 'bar') in memcache itself. Once the status flag is stored in memcache itself, it solves the problem of the initialisation data being loaded for every page request (which, quite frankly, defeats the purpose of memcache). However, having solved that problem, when I come to fetch the data in memcache, it is empty.

    I don't understand what's going on. Can anyone clarify how I can store my data once and retrieve it across page requests? BTW (just to clarify), the get/set is working correctly, and if I allow memcache to load the initialisation data for each page request (which is silly), then the data is available in memcache.
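
    A sketch of the split that usually resolves this (assumptions: the daemon runs on localhost:11211, and configure() is where the connection happens). The cached data lives in the memcached daemon and does survive requests; only the connection object has to be rebuilt on every request. So init() should always connect, but only seed the data when a sentinel key is absent:

        public static function init()
        {
            self::$memcache = new Memcache();
            if (!self::$memcache->connect('127.0.0.1', 11211)) {
                throw new Exception('could not reach memcached');
            }
            // seed only once; the sentinel persists in the daemon across requests
            if (self::$memcache->get('myCache.seeded') === false) {
                self::store('foo', 'bar');
                self::$memcache->set('myCache.seeded', 1, 0, 0); // never expires
            }
        }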

    Read the article

  • Getting zeros between data while reading a binary file in C

    - by indiajoe
    I have binary data which I am reading into an array of long integers using a C program. A hexdump of the binary data shows that after the first few data points, it starts again at a location 0x20000 addresses away. The hexdump output is as shown below:

        0000000 0000 0000 0000 0000 0000 0000 0000 0000
        *
        0020000 0000 0000 0053 0000 0064 0000 006b 0000
        0020010 0066 0000 0068 0000 0066 0000 005d 0000
        0020020 0087 0000 0059 0000 0062 0000 0066 0000
        ........ and so on...

    But when I read it into an array 'data' of long integers with the typical fread call

        fread(data, sizeof(*data), filelength/sizeof(*data), fd);

    the array fills up with all zeros until it reaches the 0x20000 location. After that, it reads the data correctly. Why is it reading regions where my file's data is not? Or how can I make it read only my file's data, and not anything in between that is not in the file? I know it looks like a trivial problem, but I cannot figure it out even after googling for a night. Can anyone suggest where I am going wrong?

    Other info: I am working on a GNU/Linux machine (the slax-atma distro, to be specific). My C compiler is gcc.
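
    A note on what is probably happening (offered as an interpretation of the dump, not a diagnosis of the code): in hexdump output, a lone * means "the previous line repeats", so the file genuinely contains zero bytes from offset 0 up to 0x20000 -- fread is faithfully returning what is on disk. If the leading zero block should be skipped, a sketch:

        /* skip the zero run the hexdump shows before the real data starts;
           0x20000 is taken from the dump above, fd is the FILE* being read */
        if (fseek(fd, 0x20000L, SEEK_SET) != 0) {
            perror("fseek");
        }
        size_t nread = fread(data, sizeof *data, count, fd);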

    Read the article

  • Returning HTML in the JS portion of a respond_to block throws errors in IE

    - by Horace Loeb
    Here's a common pattern in my controller actions:

        respond_to do |format|
          format.html {}
          format.js { render :layout => false }
        end

    I.e., if the request is non-AJAX, I'll send the HTML content in a layout on a brand-new page. If the request is AJAX, I'll send down the same content, but without a layout (so that it can be inserted into the existing page or put into a lightbox or whatever). So I'm always returning HTML in the format.js portion, yet Rails sets the Content-Type response header to text/javascript. This causes IE to throw an error dialog (shown in a screenshot in the original post).

    Of course I could set the content type of the response every time I do this (or use an after_filter or whatever), but it seems like I'm trying to do something relatively standard and I don't want to add additional boilerplate code. How do I fix this problem? Alternatively, if the only way to fix the problem is to change the content type of the response, what's the best way to achieve the behavior I want (i.e., sending down content with a layout for non-AJAX and the same content without a layout for AJAX) without having to deal with these errors?

    Edit: This blog post has some more info.
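
    For reference, render accepts a :content_type option, so one low-boilerplate sketch (Rails 2.x-era syntax, matching the post) is to override the MIME type only in the AJAX branch:

        respond_to do |format|
          format.html {}
          format.js { render :layout => false, :content_type => 'text/html' }
        end

    The same line could live in a shared helper if many actions need it.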

    Read the article

  • AJAX Post Not Sending Data?

    - by Jascha
    I can't for the life of me figure out why this is happening. This is kind of a repost, so forgive me, but I have new data. I am running a JavaScript logout function called logOut() that makes a jQuery AJAX call to a PHP script...

        function logOut(){
            var data = new Object;
            data.log_out = true;
            $.ajax({
                type: 'POST',
                url: 'http://www.mydomain.com/functions.php',
                data: data,
                success: function() {
                    alert('done');
                }
            });
        }

    The PHP function it calls is here:

        if(isset($_POST['log_out'])){
            $query = "INSERT INTO `token_manager` (`ip_address`) VALUES('logOutSuccess')";
            $connection->runQuery($query); // <-- my own database class...
            // omitted code that clears session etc...
            die();
        }

    Now, 18 hours out of the day this works, but for some reason, every once in a while, the POST data will not trigger my query (this will last about an hour or so). I figured out the POST data is not being set by adding this at the end of my script:

        $query = "INSERT INTO `token_manager` (`ip_address`) VALUES('POST FAIL')";
        $connection->runQuery($query);

    So now I know for certain my logout function is being skipped, because my table fills with 'POST FAIL' rows (shown in a screenshot in the original post); if it were not being skipped, 'logOutSuccess' rows would show up instead. I know it is being skipped for two reasons: one, the die() at the end of my first function, and two, if it were a success a 'logOutSuccess' would be registered in the table.

    Any thoughts? One friend says it's a janky hosting company (hostgator.com). I personally like them because they are cheap and I'm a fan of cPanel. But if that's the case??? Thanks in advance. -J
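
    One avenue worth checking (an assumption that would fit the intermittent failures): the call uses the absolute URL http://www.mydomain.com/..., so any visitor who arrived without the www prefix is making a cross-origin request that the browser silently blocks. A sketch that sidesteps the host mismatch and surfaces failures instead of swallowing them:

        function logOut() {
            $.ajax({
                type: 'POST',
                url: '/functions.php',   // relative URL: same origin either way
                data: { log_out: true },
                success: function () { alert('done'); },
                error: function (xhr, status) { alert('logout failed: ' + status); }
            });
        }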

    Read the article

  • Partial template specialization for more than one typename

    - by Matt Joiner
    In the following code, I want functions (Ops) that return void to instead be considered to return true. The type Retval and the return value of Op are always matching. I'm not able to discriminate using the type traits shown here, and attempts to create a partial template specialization based on Retval have failed due to the presence of the other template parameters, Op and Args. How do I specialize only some parameters in a template specialization without getting errors? Is there any other way to alter behaviour based on the return type of Op?

        template <typename Retval, typename Op, typename... Args>
        Retval single_op_wrapper(
            Retval const failval,
            char const *const opname,
            Op const op,
            Cpfs &cpfs,
            Args... args)
        {
            try {
                CallContext callctx(cpfs, opname);
                Retval retval;
                if (std::is_same<bool, Retval>::value) {
                    (callctx.*op)(args...);
                    retval = true;
                } else {
                    retval = (callctx.*op)(args...);
                }
                assert(retval != failval);
                callctx.commit(cpfs);
                return retval;
            } catch (CpfsError const &exc) {
                cpfs_errno_set(exc.fserrno);
                LOGF(Info, "Failed with %s", cpfs_errno_str(exc.fserrno));
            }
            return failval;
        }
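
    A sketch of one way out (assuming C++11, reusing the question's names): move only the call-and-convert step into a helper class template, which *can* be fully specialized on the return type alone. The runtime if above breaks because both branches must compile for every Retval; the helper makes the choice at compile time instead.

        // primary template: Op's result is returned as-is
        template <typename Retval>
        struct CallAs {
            template <typename Op, typename... Args>
            static Retval call(CallContext &ctx, Op op, Args... args) {
                return (ctx.*op)(args...);
            }
        };

        // specialization for bool: a void-returning Op means "true unless it throws"
        template <>
        struct CallAs<bool> {
            template <typename Op, typename... Args>
            static bool call(CallContext &ctx, Op op, Args... args) {
                (ctx.*op)(args...);
                return true;
            }
        };

        // inside single_op_wrapper:
        //     Retval retval = CallAs<Retval>::call(callctx, op, args...);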

    Read the article

  • Cross-domain data access in JavaScript

    - by vit
    We have an ASP.NET application hosted on our network and exposed to a specific client. This client wants to be able to import data from their own server into our application. The data is retrieved with an HTTP request and is CSV-formatted. The problem is that they do not want to expose their server to our network and are requesting that the import be done on the client side (all clients are on the same network as their server). So, what needs to be done is:

    1. They request an import page from our server
    2. The client script on the page issues a request to their server to get the CSV-formatted data
    3. The data is sent back to our application

    This is not a challenge when both servers are on the same domain: a simple hidden iframe or something similar will do the trick. But here, what I'm getting is a cross-domain "access denied" error. They also refuse to change the data format to return JSON- or XML-formatted data.

    What I have tried and learned so far:

    - Hidden iframe: "access denied"
    - XMLHttpRequest: behaviour depends on the browser security settings; it may work, may work while nagging the user with security warnings, or may not work at all
    - Dynamic script tags: would have worked if they could have returned data in JSON format
    - IE client data binding: the same "access denied" error

    Is there anything else I can try before giving up and saying that it will not be possible without exposing their server to our application, changing their data format, or changing their browser security settings? (The DNS trick is not an option, by the way.)

    Read the article

  • Exporting de-aggregated data

    - by Ben
    I'm currently working on a data export feature for a survey application. We are using SQL 2008. We store data in a normalized format: QuestionId, RespondentId, Answer. We have a couple of other tables that define the question text for each QuestionId and demographics for each RespondentId.

    Currently I'm using some dynamic SQL to generate a pivot that joins the question table to the answer table and creates the export. It's working, but it seems slow, and we don't have that much data (fewer than 50k respondents).

    Right now I'm thinking: why am I "paying" to de-aggregate the data for each query? Why don't I cache that? The data being exported is based on dynamic criteria. It could be "give me respondents that completed on x date (or range)" or "people that like blue", etc. Because of that, I think I have to cache at the respondent level: find out which respondents are being exported, and then select their combined cached de-aggregated data.

    To me, the quick and dirty fix is a totally flat table: RespondentId, Question1, Question2, etc. The problem is that we have multiple clients, so that doesn't scale, and I don't want to have to maintain the flattened table as the survey changes. So I'm thinking about putting an XML column on the respondent table and caching the results of

        SELECT * FROM Data WHERE RespondentId = x FOR XML AUTO

    With that in place, I would then be able to produce my export, with filtering, using XML calls into the XML column.

    What are you doing to export aggregated data in a flattened format (CSV, Excel, etc.)? Does this approach seem OK? I worry about the cost of XML functions on larger result sets (think

        SELECT RespondentId,
               XmlCol.value('//data/question_1', 'nvarchar(50)') AS [Why is there air?],
               XmlCol.RinseAndRepeat...

    ). Is there a better technology/approach for this? Thanks!
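
    For what it's worth, a sketch of the cache-then-shred idea in T-SQL (table and column names are assumptions; note that FOR XML has to come after the WHERE clause):

        -- cache: one XML blob per respondent
        UPDATE r
        SET    CachedAnswers = (SELECT a.QuestionId, a.Answer
                                FROM   Answers a
                                WHERE  a.RespondentId = r.RespondentId
                                FOR XML PATH('answer'), ROOT('data'), TYPE)
        FROM   Respondents r;

        -- export: shred the cached XML, one column per question
        SELECT RespondentId,
               CachedAnswers.value('(/data/answer[QuestionId=1]/Answer)[1]',
                                   'nvarchar(50)') AS Question1
        FROM   Respondents;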

    Read the article

  • changing the serialization procedure for a graph of objects (.net framework)

    - by pierusch
    Hello, I'm developing a scientific application using the .NET Framework. The application depends heavily upon a large data structure (a tree-like structure) that has been serialized using a standard BinaryFormatter object. The graph structure looks like this:

        <Serializable()> Public Class BigObjet
            Inherits List(Of smallObject)
        End Class

        <Serializable()> Public Class smallObject
            Inherits List(Of otherSmallerObjects)
        End Class
        ...

    The BinaryFormatter object does a nice job, but it's not optimized at all, and the entire data structure reaches around 100 MB on my filesystem. Deserialization works too, but it's pretty slow (around 30 seconds on my quad core).

    I found a nice .dll on CodeProject (see "optimizing serialization..."), so I wrote a modified version of the classes above, overriding the default serialization/deserialization procedure, and reached very good results. The problem is this: I can't lose the data previously serialized with the old version, and I'd like to be able to use the new serialization/deserialization method. I have some ideas, but I'm pretty sure someone will be able to give me proper and better advice:

    - Use a "helper" graph of objects which takes care of the entire serialization/deserialization procedure, reading data from the old format and converting it into the classes I need. This could work, but the BinaryFormatter "needs" to know the types being serialized, so........ :(
    - Modify the "old" graph to include a modified version of the serialization procedure... so I'd be able to deserialize old files and save them with the new format...... this doesn't sound too good, IMHO.

    Well, any help will be highly appreciated :)
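
    A sketch of a one-time migration pass (hedged: SaveWithFastSerializer stands in for whatever the new CodeProject-based writer looks like) -- keep the old classes around just long enough to read each legacy file once and rewrite it in the new format:

        ' read with the old BinaryFormatter, write with the new serializer
        Using inStream As New IO.FileStream(oldPath, IO.FileMode.Open)
            Dim bf As New Runtime.Serialization.Formatters.Binary.BinaryFormatter()
            Dim graph = CType(bf.Deserialize(inStream), BigObjet)
            SaveWithFastSerializer(graph, newPath)  ' hypothetical new-format writer
        End Using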

    Read the article

  • How to deal with a flaw in System.Data.DataTableExtensions.CopyToDataTable()

    - by andy
    Hey guys, so I've come across something which is perhaps a flaw in the extension method .CopyToDataTable. This method is used by importing (in VB.NET) System.Data.DataTableExtensions and then calling the method against an IEnumerable. You would do this if you wanted to filter a DataTable using LINQ and then restore the DataTable at the end, i.e.:

        Imports System.Data.DataRowExtensions
        Imports System.Data.DataTableExtensions

        Public Class SomeClass
            Private Shared Function GetData() As DataTable
                Dim Data As DataTable
                Data = LegacyADO.NETDBCall
                Data = Data.AsEnumerable.Where(Function(dr) dr.Field(Of Integer)("SomeField") = 5).CopyToDataTable()
                Return Data
            End Function
        End Class

    In the example above, the Where filtering might return no results. If this happens, CopyToDataTable throws an exception because there are no DataRows. Why? The correct behavior should be to return a DataTable with Rows.Count = 0.

    Can anyone think of a clean workaround for this, in such a way that whoever calls CopyToDataTable doesn't have to be aware of this issue? System.Data.DataTableExtensions is a static class, so I can't override the behavior.... any ideas? Have I missed something? Cheers.

    UPDATE: I have submitted this as an issue to Connect. I would still like some suggestions, but if you agree with me, you could vote up the issue at Connect via the link above. Cheers.
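
    A sketch of one workaround: guard the empty case before calling CopyToDataTable, using Clone() because it copies the schema without the rows:

        Dim rows = Data.AsEnumerable().Where(Function(dr) dr.Field(Of Integer)("SomeField") = 5)
        If rows.Any() Then
            Data = rows.CopyToDataTable()
        Else
            Data = Data.Clone() ' same columns, zero rows
        End If

    Wrapping those lines in a shared helper (a hypothetical CopyToDataTableSafe) would keep callers unaware of the issue.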

    Read the article

  • Java: Typecasting to Generics

    - by bguiz
    I have a method that uses method-level generics to parse the values from a custom POJO, JXlistOfKeyValuePairs (which is exactly what it sounds like). The only catch is that both the keys and values in JXlistOfKeyValuePairs are Strings. This method takes, in addition to the JXlistOfKeyValuePairs instance, a Class<T> that defines which data type to convert the values to (assume that only Boolean, Integer and Float are possible). It then outputs a HashMap with the specified type for the values in its entries.

    This is the code that I have, and it is obviously broken:

        private <T extends Object> Map<String, T> fromListOfKeyValuePairs(JXlistOfKeyValuePairs jxval, Class<T> clasz) {
            Map<String, T> val = new HashMap<String, T>();
            List<Entry> jxents = jxval.getEntry();
            T value;
            String str;
            for (Entry jxent : jxents) {
                str = jxent.getValue();
                value = null;
                if (clasz.isAssignableFrom(Boolean.class)) {
                    value = (T)(Boolean.parseBoolean(str));
                } else if (clasz.isAssignableFrom(Integer.class)) {
                    value = (T)(Integer.parseInt(str));
                } else if (clasz.isAssignableFrom(Float.class)) {
                    value = (T)(Float.parseFloat(str));
                } else {
                    logger.warn("Unsupported value type encountered in key-value pairs, continuing anyway: " + clasz.getName());
                }
                val.put(jxent.getKey(), value);
            }
            return val;
        }

    This is the bit that I want to solve:

        if (clasz.isAssignableFrom(Boolean.class)) {
            value = (T)(Boolean.parseBoolean(str));
        } else if (clasz.isAssignableFrom(Integer.class)) {
            value = (T)(Integer.parseInt(str));
        }

    I get:

        Inconvertible types
        required: T
        found: Boolean

    Also, if possible, I would like to be able to do this with more elegant code, avoiding Class#isAssignableFrom. Any suggestions?

    Sample method invocation:

        Map<String, Boolean> foo = fromListOfKeyValuePairs(bar, Boolean.class);
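
    A sketch of the usual fix (nothing here beyond the standard library): Class.cast performs the conversion through the Class token, so the unchecked (T) casts -- and the inconvertible-types error on the primitive results of parseBoolean/parseInt -- go away:

        // inside the loop; valueOf returns the boxed type that cast() accepts
        if (clasz == Boolean.class) {
            value = clasz.cast(Boolean.valueOf(str));
        } else if (clasz == Integer.class) {
            value = clasz.cast(Integer.valueOf(str));
        } else if (clasz == Float.class) {
            value = clasz.cast(Float.valueOf(str));
        }

    Going further, a map from Class tokens to parser objects would remove the if/else chain entirely, at the cost of a little setup.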

    Read the article

  • Best Practice: Protecting Personally Identifiable Data in a ASP.NET / SQL Server 2008 Environment

    - by William
    Thanks to a SQL injection vulnerability found last week, some of my recommendations are being investigated at work. We recently redid an application which stores personally identifiable information whose disclosure could lead to identity theft. While we read some of the data on a regular basis, the restricted data we only need a couple of times a year, and then only two employees need it.

    I've read up on SQL Server 2008's encryption functions, but I'm not convinced that's the route I want to go. My problem ultimately boils down to the fact that we're either using symmetric keys or asymmetric keys encrypted by a symmetric key. Thus it seems like a SQL injection attack could lead to a data leak. I realize permissions should prevent that, but permissions should also have prevented the leaking in the first place.

    It seems to me the better method would be to asymmetrically encrypt the data in the web application, then store the private key offline and have a fat client that the employees can run the few times a year they need the restricted data, so the data is decrypted only on that client. This way, if the server gets compromised, we don't leak old data, although depending on what the attackers do we may leak future data. I think the big disadvantage is that this would require rewriting the web application and creating a new fat application (to pull the restricted data). Due to the recent problem, I can probably get the time allocated, so now would be the proper time to make the recommendation.

    Do you have a better suggestion? Which method would you recommend? More importantly, why?

    Read the article

  • IPad SQLite Push and Pull Data from external MS SQL Server DB

    - by MattyD
    This carries on from my previous post (http://stackoverflow.com/questions/4182664/ipad-app-pull-and-push-relational-data).

    My plan is that when the iPad application starts, I will pull data (config data, i.e. Departments, Types, etc. -- relational data that is used across the system) from a web-hosted MS SQL Server DB via a web service and populate it into a SQLite DB on the iPad. Then when I load a listing, I will pull the data over the line again via a web service and populate it into the SQLite DB on the iPad (and then just run select commands to populate the listing). My questions are:

    1. What is the most efficient way to transfer data across the line via the web? Everyone seems to do it a different way. My idea is that I will have a web service for each type of data pull (e.g. RetrieveContactListing) that will query the DB and then convert that data into "something" to send across the line. My question really is: what is the "something" it should be converted into?
    2. Everyone talks about OData services. Are they suited to applications where complex reads and writes are needed?

    I've created a simple iPhone app before that talked to a SQL Server DB (I just sent my own structured XML across the line), but now with this app the data calls are going to be a lot larger, so efficiency is key.

    Read the article

  • Load some data from database and hide it somewhere in a web page

    - by kwokwai
    Hi all, I am trying to load some data (which may be up to a few thousand words) from the database and store it somewhere in an HTML page for comparing against data input by users. I am thinking of loading the data into a textarea inside a div and hiding it:

        <div id="reference" style="display:none;">
            <textarea rows="2" cols="20" id="database">
            html, htm, php, asp, jsp, aspx, ctp, thtml, xml, xsl...
            </textarea>
        </div>

        <table border=0 width="100%">
            <tr>
                <td>Username</td>
                <td>
                    <div id="username">
                        <input type="text" name="data" id="data">
                    </div>
                </td>
            </tr>
        </table>

        <script>
        $(document).ready(function(){
            // comparing the data loaded from the database with the user's input
            if($("#data").val() == $("#database").val())
                {alert("error");}
        });
        </script>

    I am not sure if this is the best way to do it, so could you give me some advice and suggest your own methods, please?

    Read the article

  • Passing $_GET or $_POST data to PHP script that is run with wget

    - by Matt
    Hello, I have the following line of PHP code, which works great:

        exec( 'wget http://www.mydomain.com/u1.php /dev/null &' );

    u1.php performs various types of maintenance on my server, and the above command makes that happen in the background. No problems there. But I need to pass variable data to u1.php before it's executed. I'd like to pass POST data preferably, but could accommodate GET or SESSION data if POST isn't an option. Basically, the type of data being passed is user-specific and will vary depending on who is logged in to the site and triggering the above code.

    I've tried adding the GET data to the end of the URL, and that didn't work. So how else might I be able to send the data to u1.php? POST data is preferred, SESSION data would work as well (but I tried this and it didn't pick up the logged-in user's session data), and GET would be a last resort. Thanks!
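
    Two details worth noting (the second is an assumption about why the GET attempt failed): wget supports POST directly via --post-data, and an unquoted URL with a query string breaks in the shell because & sends the command to the background mid-URL. A sketch:

        // $userId stands in for whatever user-specific value needs to be passed
        $url  = 'http://www.mydomain.com/u1.php';
        $post = http_build_query(array('user_id' => $userId));
        exec('wget --post-data=' . escapeshellarg($post) . ' '
             . escapeshellarg($url) . ' -O /dev/null > /dev/null 2>&1 &');

    In u1.php the value then arrives as $_POST['user_id']. (Sessions won't carry over regardless: wget is a separate HTTP client with no session cookie.)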

    Read the article
