Search Results

Search found 837 results on 34 pages for 'structured'.

Page 26 of 34

  • Handle multiple DB updates from C# in SQL Server 2008

    - by joeriks
    I'd like to find a way to handle multiple updates to a SQL database with one single DB roundtrip. I read about table-valued parameters in SQL Server 2008 (http://www.codeproject.com/KB/database/TableValueParameters.aspx), which seem really useful. But it seems I need to create both a stored procedure and a table type to use them. Is that true? Perhaps due to security? I would like to run a text query simply like this:

        var sql = "INSERT INTO Note (UserId, note) SELECT * FROM @myDataTable";
        var myDataTable = ... some System.Data.DataTable ...
        var cmd = new System.Data.SqlClient.SqlCommand(sql, conn);
        var param = cmd.Parameters.Add("@myDataTable", System.Data.SqlDbType.Structured);
        param.Value = myDataTable;
        cmd.ExecuteNonQuery();

    So A) do I have to create both a stored procedure and a table type to use TVPs? And B) what alternative method is recommended for sending multiple updates (and inserts) to SQL Server?
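
    For what it's worth, a user-defined table type is required, but a stored procedure is not: ad-hoc parameterized SQL can consume a TVP as long as the parameter's TypeName names the type. A minimal sketch, assuming a made-up type called dbo.NoteTableType:

        -- run once on the server
        CREATE TYPE dbo.NoteTableType AS TABLE (UserId INT, Note NVARCHAR(MAX));

        // then the ad-hoc query works much as hoped
        var cmd = new System.Data.SqlClient.SqlCommand(
            "INSERT INTO Note (UserId, note) SELECT UserId, Note FROM @myDataTable", conn);
        var param = cmd.Parameters.Add("@myDataTable", System.Data.SqlDbType.Structured);
        param.TypeName = "dbo.NoteTableType"; // required for ad-hoc SQL, implicit inside a stored procedure
        param.Value = myDataTable;
        cmd.ExecuteNonQuery();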

    Read the article

  • ASP.NET, C#: timeout when trying to Transaction.Commit() to database; potential deadlock?

    - by user1843921
    I have a web page with code structured somewhat as follows:

        SqlConnection conX = new SqlConnection(blablabla);
        conX.Open();
        SqlTransaction tran = conX.BeginTransaction();
        try {
            SqlCommand cmdInsert = new SqlCommand("INSERT INTO Table1(ColX,ColY) VALUES (@x,@y)", conX);
            cmdInsert.Transaction = tran;
            cmdInsert.ExecuteNonQuery();

            SqlCommand cmdSelect = new SqlCommand("SELECT * FROM Table1", conX);
            cmdSelect.Transaction = tran;
            SqlDataReader dtr = cmdSelect.ExecuteReader();
            // read stuff from dtr
            dtr.Close();

            cmdInsert = new SqlCommand("UPDATE Table2 SET ColA=@a", conX);
            cmdInsert.Transaction = tran;
            cmdInsert.ExecuteNonQuery();

            // display MiscMessage
            tran.Commit();
            // display SuccessMessage
        } catch (Exception x) {
            tran.Rollback();
            // display x.Message
        } finally {
            conX.Close();
        }

    Everything seems to work until MiscMessage. Then, after a while (maybe 15-ish seconds?), x.Message pops up, saying: "Timeout expired. The timeout period elapsed prior to completion of the operation or the server is not responding."

    So, something wrong with my tran.Commit()? The database is not updated, so I assume the tran.Rollback() works... I have read that deadlocks can cause timeouts... is this problem caused by my SELECT statement selecting from Table1, which is being used by the first INSERT statement? If so, what should I do? If that ain't the problem, what is?
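
    A note in passing: the SELECT and the INSERT here run on the same connection and the same transaction, so they share locks and cannot deadlock each other. The usual suspect for a timeout in this pattern is a second connection (opened elsewhere in the page, or by another user) that is blocked by, or blocking, this transaction. While the page is hanging, a query along these lines (SQL Server 2005+) shows who is waiting on whom:

        -- sketch: find blocked sessions and their blockers
        SELECT session_id, blocking_session_id, wait_type, wait_time
        FROM sys.dm_exec_requests
        WHERE blocking_session_id <> 0;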

    Read the article

  • How do I pass parameters to Ruby/Python scripts from inside PHP?

    - by Roger
    Hi, everybody. I need to turn HTML into equivalent Markdown-structured text. From what I could discover, I have only two good choices:

        Python: Aaron Swartz's html2text.py
        Ruby: Singpolyma's html2markdown.rb

    As I am programming in PHP, I need to pass the HTML code in, call the Ruby/Python script, and receive the output back. I started by creating a simple test just to find out whether my server was configured to run both languages.

    PHP code:

        echo exec('./hi.rb');

    Ruby code:

        #!/usr/bin/ruby
        puts "Hello World!"

    It worked fine and I am ready to go to the next step. Unfortunately, all I know is that the function in Ruby works like this:

        HTML2Markdown.new('<h1>HTMLcode</h1>').to_s

    I don't know how to make PHP pass the string (with the HTML code) to Ruby, nor how to make the Ruby script receive the variable and pass the result back to PHP (after having parsed it into Markdown). Believe it or not: I know even less Python. A folk asked a similar question here ("how to call ruby script from php?") but with no practical information for my case. Any help would be a joy - thanks. Rogério Madureira. atipico.com.br
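
    A minimal sketch of one way to wire this up: pass the HTML as a shell argument (escaped!) and read the script's stdout. The wrapper script name and the require line are assumptions; for big documents, piping through stdin avoids the OS limit on argument length.

        // PHP side
        $html = '<h1>HTMLcode</h1>';
        $markdown = shell_exec('./html2markdown_cli.rb ' . escapeshellarg($html));
        echo $markdown;

        # Ruby side (html2markdown_cli.rb)
        #!/usr/bin/ruby
        require './html2markdown'            # assumption: Singpolyma's class lives here
        puts HTML2Markdown.new(ARGV[0]).to_s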

    Read the article

  • Theory: "Lexical Encoding"

    - by _ande_turner_
    I am using the term "Lexical Encoding" for my lack of a better one.

    A Word is arguably the fundamental unit of communication, as opposed to a Letter. Unicode tries to assign a numeric value to each Letter of all known Alphabets. What is a Letter to one language is a Glyph to another. Unicode 5.1 currently assigns more than 100,000 values to these Glyphs. Out of the approximately 180,000 Words in use in Modern English, it is said that with a vocabulary of about 2,000 Words you should be able to converse in general terms. A "Lexical Encoding" would encode each Word, not each Letter, and encapsulate them within a Sentence.

        // A simplified example of a "Lexical Encoding"
        String sentence = "How are you today?";
        int[] encoded   = { 93, 22, 14, 330, QUERY };

    In this example each Token in the String is encoded as an Integer. The Encoding Scheme here simply assigned an int value based on a generalised statistical ranking of word usage, and assigned a constant to the question mark.

    Ultimately, though, a Word has both a Spelling & Meaning. Any "Lexical Encoding" would preserve the meaning and intent of the Sentence as a whole, and not be language specific. An English sentence would be encoded into "...language-neutral atomic elements of meaning..." which could then be reconstituted into any language with a structured Syntactic Form and Grammatical Structure.

    What are other examples of "Lexical Encoding" techniques?

    If you are interested in where the word-usage statistics come from: http://www.wordcount.org

    Read the article

  • Odd 'UNION' behavior in an Oracle SQL query

    - by RenderIn
    Here's my query:

        SELECT my_view.*
        FROM my_view
        WHERE my_view.trial IN (SELECT 2 AS trial_id FROM dual
                                UNION SELECT 3 FROM dual
                                UNION SELECT 4 FROM dual)
          AND my_view.location LIKE ('123-%')

    When I execute this query it returns results which do not conform to the my_view.location LIKE ('123-%') condition. It's as if that condition is being ignored completely. I can even change it to my_view.location IS NULL and it returns the same results, despite that field being not-nullable.

    I know this query seems ridiculous with the selects from dual, but I've structured it this way to replicate a problem I have when I use a WITH clause (the results of that WITH query take the place of the selects-from-dual inline view). I can modify the query like so and it returns the expected results:

        SELECT my_view.*
        FROM my_view
        WHERE my_view.trial IN (2, 3, 4)
          AND my_view.location LIKE ('123-%')

    Unfortunately I do not know the trial values up front (they are queried for in a WITH clause), so I cannot structure my query this way. What am I doing wrong?

    I will say that the my_view view is composed of 3 other views whose results are combined with UNION ALL, and each of which retrieves some data over a DB link. Not that I believe that should matter, but in case it does.
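
    Wrong results (rather than an error) from a query mixing a UNION ALL view, DB links, and an IN subquery smells like an optimizer view-merging problem rather than a coding mistake, so one thing worth trying is forcing the pieces to be evaluated separately. Both hints below are real Oracle hints; whether they cure this particular plan is an assumption:

        WITH trials AS (
          SELECT /*+ MATERIALIZE */ trial_id
          FROM   ...                          -- however the trial ids are really derived
        )
        SELECT /*+ NO_MERGE(my_view) */ my_view.*
        FROM   my_view
        WHERE  my_view.trial IN (SELECT trial_id FROM trials)
        AND    my_view.location LIKE '123-%'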

    Read the article

  • Full complete MySQL database replication? Ideas? What do people do?

    - by mauriciopastrana
    Currently I have two Linux servers running MySQL: one sitting on a rack right next to me under a 10 Mbit/s upload pipe (main server), and another a couple of miles away on a 3 Mbit/s upload pipe (mirror). I want to be able to replicate data on both servers continuously, but have run into several roadblocks.

    One of them: under MySQL master/slave configurations, every now and then some statements drop (!), meaning some people logging on to the mirror URL don't see data that I know is on the main server, and vice versa. Let's say this happens to a meaningful block of data once every month, so I can live with it and assume it's a "lost packet" issue (i.e., god knows, but we'll compensate).

    The other most important (and annoying) recurring issue: when for some reason we do a major upload or update (or reboot) on one end and have to sever the link, LOAD DATA FROM MASTER doesn't work, and I have to manually dump on one end and upload on the other - quite a task nowadays, moving some 0.5 TB worth of data.

    Is there software for this? I know MySQL (the "corporation") offers this as a VERY expensive service (full database replication). I am just wondering what people out there do. The way it's structured, we run an automatic failover where if one server is not up, the main URL just resolves to the other server.

    Read the article

  • Fulfilling strange requirements with CSS (kind of simulating frames)

    - by Bernhard V
    Hi! I'm struggling to find a way to code a site according to our strange requirements. The site should be displayed correctly in all browsers from IE6 to Opera.

    The website is structured in three parts: a header at the top, a navigation on the left, and the rest of the screen filled with the content section.

    Here comes the kicker: each of the three sections should be scrollable separately, and no browser scrollbar should appear. The page should be displayed similarly to how it would look with frames. Of course, on a big enough screen, no scrollbars should appear at all. It doesn't matter which technique is used to display the site, although frames aren't an option and divs would be preferred.

    There are two conditions. First, the site should always fill the whole browser screen; the header and the content section should reach to the right border of the page, and the navigation as well as the content to the bottom. Second, as soon as the site is scaled down - whether due to resizing the browser window or due to a smaller resolution - a scrollbar for every single section should appear, but no "browser scrollbar" for the whole page. The header should always retain its height, and the navigation always its width.

    Do you know a way how all this can be achieved? Yours, Bernhard
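
    A sketch of the usual frameless approach: pin the three panes with absolute positioning and give each its own overflow. The 100px/200px dimensions are placeholders. One caveat: IE6 ignores paired top/bottom or left/right offsets, so it needs an explicit width/height (or an expression() hack) for the nav and content panes.

        html, body { height: 100%; margin: 0; overflow: hidden; }
        #header  { position: absolute; top: 0;     left: 0;     right: 0;  height: 100px; overflow: auto; }
        #nav     { position: absolute; top: 100px; left: 0;     bottom: 0; width: 200px;  overflow: auto; }
        #content { position: absolute; top: 100px; left: 200px; right: 0;  bottom: 0;     overflow: auto; }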

    Read the article

  • autotools: no rule to make target all

    - by Raffo
    I'm trying to port an application I'm developing to autotools. I'm not an expert in writing makefiles, and being able to use autotools is a requisite for me. The structure of the project is the following:

        ..
        ../src/Main.cpp
        ../src/foo/
        ../src/foo/x.cpp
        ../src/foo/y.cpp
        ../src/foo/A/k.cpp
        ../src/foo/A/Makefile.am
        ../src/foo/Makefile.am
        ../src/bar/
        ../src/bar/z.cpp
        ../src/bar/w.cpp
        ../src/bar/Makefile.am
        ../inc/foo/
        ../inc/bar/
        ../inc/foo/A
        ../configure.in
        ../Makefile.am

    The root folder of the project contains a "src" folder containing the main of the program and a number of subfolders containing the other sources of the program. The root also contains an "inc" folder holding the .h files, which are nothing more than the definitions of the classes in "src"; thus "inc" mirrors the structure of "src". I have written the following configure.in in the root:

        AC_INIT([PNAME], [1.0])
        AC_CONFIG_SRCDIR([src/Main.cpp])
        AC_CONFIG_HEADER([config.h])
        AC_PROG_CXX
        AC_PROG_CC
        AC_PROG_LIBTOOL
        AM_INIT_AUTOMAKE([foreign])
        AC_CONFIG_FILES([Makefile src/Makefile src/foo/Makefile src/foo/A/Makefile src/bar/Makefile])
        AC_OUTPUT

    And the following is ../Makefile.am:

        SUBDIRS = src

    Then in ../src, where the main of the project is contained:

        bin_PROGRAMS = pname
        gsi_SOURCES = Main.cpp
        AM_CPPFLAGS = -I../../inc/foo \
            -I../../inc/foo/A \
            -I../../inc/bar/
        pname_LDADD = foo/libfoo.a bar/libbar.a
        SUBDIRS = foo bar

    And in ../src/foo:

        noinst_LIBRARIES = libfoo.a
        libfoo_a_SOURCES = \
            x.cpp \
            y.cpp
        AM_CPPFLAGS = \
            -I../../inc/foo \
            -I../../inc/foo/A \
            -I../../inc/bar

    And the analogous in src/bar. The problem is that after calling automake and autoconf, "make" fails. In particular, make enters the directory src, then foo, and creates libfoo.a, but the same fails for libbar.a with the following error:

        Making all in bar
        make[3]: Entering directory `/user/Raffo/project/src/bar'
        make[3]: *** No rule to make target `all'.  Stop.

    I have read the autotools documentation, but I'm not able to find an example similar to the one I am working on. Unfortunately I can't change the directory structure, as this is a fixed requisite of the project I'm working on. I don't know if you can help me or give me any hint, but maybe you can guess the error or point me to a similarly structured example. Thank you.
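
    In case a comparison helps: "No rule to make target `all'" coming from just one subdirectory usually means the generated src/bar/Makefile is empty or stale - for instance the Makefile.am there is empty or misnamed, or automake was not re-run after it was added - so re-running autoreconf --install is worth a try. For reference, a sketch of what src/bar/Makefile.am would conventionally look like, mirroring the foo one:

        noinst_LIBRARIES = libbar.a
        libbar_a_SOURCES = \
            z.cpp \
            w.cpp
        AM_CPPFLAGS = \
            -I../../inc/foo \
            -I../../inc/bar

    Separately, the src Makefile.am pairs bin_PROGRAMS = pname with gsi_SOURCES; automake derives variable names from the program name, so it will be looking for pname_SOURCES.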

    Read the article

  • Fast, easy, and secure method to perform DB actions with GET

    - by rob - not a robber
    Hey all, sort of a methods/best-practices question here that I am sure has been addressed, yet I can't find a solution based on the vague search terms I enter.

    I have a logged-in area where an ADMIN can do a whole host of POST operations to input data relating to their profile. The data is structured pretty distinctly, and well segmented in most tables, as it relates to the ID of the admin. Now, I have one table where I dump one type of data and differentiate the records by assigning the ADMIN's unique ID to each. In other words, all ADMINs have this one type of data writing to this table; I just differentiate by the ADMIN ID on each record.

    I was planning on letting the ADMIN remove these records by clicking on a link with a query string - obviously using GET. But the query structure is in the link, so any logged-in admin could exploit the URL and delete a competitor's records. Is the only way to safely do this through POST, or should I pass through the session info that includes the password and validate it against the ADMIN ID that is requesting the delete? This is obviously much more work for me.

    As they said in the auto repair biz I used to work in... there are 3 ways to do a job: fast, good, and cheap. You can only have two at a time. Fast and cheap will not be good. Good and cheap will not have fast turnaround. Fast and good will NOT be cheap. haha. I guess that applies here... can never have fast, easy and secure all at once ;)

    Thanks in advance...
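
    For what it's worth, the standard answer is that the verb matters less than the check: whatever arrives in the request, scope the delete server-side by the admin ID already stored in the session, so a forged URL can only ever delete the caller's own rows. A sketch in PHP (PDO, with assumed table and column names):

        session_start();
        $stmt = $pdo->prepare('DELETE FROM records WHERE id = ? AND admin_id = ?');
        $stmt->execute(array($_GET['record_id'], $_SESSION['admin_id']));

    POST is still preferable for destructive actions - link prefetchers and crawlers happily follow GET links - but it is not what makes the operation safe.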

    Read the article

  • Where is the Open Source alternative to WPF?

    - by Evan Plaice
    If we've learned anything from HTML/CSS, it's that declarative languages (like XML) work best to describe user interfaces, because:

    - It's easy to build code preprocessors that can template the code effectively.
    - The code is in a well-defined, (ideally) well-structured format, so it's easy to parse.
    - The technology to effectively parse or crawl an XML-based source file already exists.
    - The UI's scripted code becomes much simpler and easier to understand.
    - It is simple enough that designers are able to design the interface themselves.
    - Programmers suck at creating UIs, so it should be made easy enough for designers.

    I recently took a look at the meat of a WPF application (i.e., the XAML) and it looks surprisingly similar to the declarative language style used in HTML.

    It's blindingly apparent to me that the current state of desktop UI development is largely fractionalized; otherwise there wouldn't be so much duplicated effort in the domain of user interfaces (i.e., GTK, XUL, Qt, WinForms, WPF, etc.). There are 45 GUI platforms for Python alone.

    It's painfully obvious to me that there should be a general-purpose, open source, standardized, platform-independent markup language for designing desktop GUIs - much like what the W3C made HTML/CSS into. WPF, or more specifically XAML, seems like a pretty likely step in the right direction. Why hasn't anyone in the open source community (AFAIK) even scratched the surface of this issue? Now that the 'browser wars' are over, should we look forward to a future of 'desktop GUI wars'?

    Note: this topic is relatively subjective in the attempt to be 'future-thinking.' I think that desktop GUI development in its current state sucks ((really) hard) and, even though WPF is still in its infancy, it presents a likely solution to the problem. Has no one in the OSS community looked into developing something similar because they don't see the value, or because it's not worth the effort?

    Read the article

  • Indexing and Searching Over Word Level Annotation Layers in Lucene

    - by dmcer
    I have a data set with multiple layers of annotation over the underlying text, such as part-of-speech tags, chunks from a shallow parser, named entities, and others from various natural language processing (NLP) tools. For a sentence like "The man went to the store", the annotations might look like:

        Word   POS  Chunk  NER
        ====   ===  =====  ========
        The    DT   NP     Person
        man    NN   NP     Person
        went   VBD  VP     -
        to     TO   PP     -
        the    DT   NP     Location
        store  NN   NP     Location

    I'd like to index a bunch of documents with annotations like these using Lucene, and then perform searches across the different layers. An example of a simple query would be to retrieve all documents where "Washington" is tagged as a person. While I'm not absolutely committed to the notation, syntactically end users might enter the query as follows:

        Query: Word=Washington,NER=Person

    I'd also like to do more complex queries involving the sequential order of annotations across different layers, e.g. find all the documents where there's a word tagged person, followed by the words "arrived at", followed by a word tagged location. Such a query might look like:

        Query: "NER=Person Word=arrived Word=at NER=Location"

    What's a good way to go about approaching this with Lucene? Is there any way to index and search over document fields that contain structured tokens?
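
    One classic trick that fits this exactly: emit one token per annotation layer for each word, all stacked at the same position (position increment 0), so cross-layer sequences become ordinary phrase or span queries. A sketch, assuming a Lucene 3.x-era TokenStream API:

        import org.apache.lucene.analysis.TokenStream;
        import org.apache.lucene.analysis.tokenattributes.CharTermAttribute;
        import org.apache.lucene.analysis.tokenattributes.PositionIncrementAttribute;

        class LayeredTokenStream extends TokenStream {
            // one row per word, e.g. {"Word=The", "POS=DT", "Chunk=NP", "NER=Person"}
            private final String[][] rows;
            private int word = 0, layer = 0;
            private final CharTermAttribute term = addAttribute(CharTermAttribute.class);
            private final PositionIncrementAttribute posIncr = addAttribute(PositionIncrementAttribute.class);

            LayeredTokenStream(String[][] rows) { this.rows = rows; }

            @Override
            public boolean incrementToken() {
                if (word >= rows.length) return false;
                clearAttributes();
                term.setEmpty().append(rows[word][layer]);
                posIncr.setPositionIncrement(layer == 0 ? 1 : 0); // stack the layers on one position
                if (++layer == rows[word].length) { layer = 0; word++; }
                return true;
            }
        }

    With tokens indexed this way, "NER=Person Word=arrived Word=at NER=Location" is a plain PhraseQuery over those four terms, and Word=Washington,NER=Person is a PhraseQuery with both terms added at the same position.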

    Read the article

  • Is there a way to touch-enable scrolling in a WPF ScrollViewer?

    - by Brian Sullivan
    I'm trying to create a form in a WPF application that will allow the user to use iPhone-like gestures to scroll through the available fields. So I've put all my form controls inside a StackPanel inside a ScrollViewer, and the scrollbar shows up as expected when there are too many elements to be shown on the screen. However, when I try to test this on my touch-enabled device, a panning gesture (placing a finger down on the surface and dragging it upward) does not move the viewable area down as I would expect. When I simply put a number of elements inside a ListView, the touch gestures work just fine. Is there any way to enable the same kind of behavior in a ScrollViewer? My window is structured like this:

        <Window x:Class="TestTouchScrolling.MainWindow"
                xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
                xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
                Title="MainWindow" Height="350" Width="525" Loaded="Window_Loaded">
            <Grid>
                <ScrollViewer Name="viewer" VerticalScrollBarVisibility="Auto">
                    <StackPanel Name="panel">
                        <StackPanel Orientation="Horizontal">
                            <Label>Label 1:</Label>
                            <TextBox Name="TextBox1"></TextBox>
                        </StackPanel>
                        <StackPanel Orientation="Horizontal">
                            <Label>Label 2:</Label>
                            <TextBox Name="TextBox2"></TextBox>
                        </StackPanel>
                        <StackPanel Orientation="Horizontal">
                            <Label>Label 3:</Label>
                            <TextBox Name="TextBox3"></TextBox>
                        </StackPanel>
                        <!-- Lots more like these -->
                    </StackPanel>
                </ScrollViewer>
            </Grid>
        </Window>
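
    WPF 4 grew a property for exactly this: ScrollViewer.PanningMode, which turns touch panning on (ListView presumably enables it in its default style, which would explain why that case already works). A sketch of the one-attribute change, untested on the OP's device:

        <ScrollViewer Name="viewer"
                      VerticalScrollBarVisibility="Auto"
                      PanningMode="VerticalOnly">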

    Read the article

  • Looking to streamline my RSS feed mashup

    - by Mark Cejas
    Hello crafty developers,

    I have aggregated RSS feeds from various sources with RSSOwl, fetching directly from the Social Mention API. The RSS feeds are categorized into the following major categories: blogs, news, Twitter, Q&A, and social networking sites. Each major category is nested with a common group of RSS feeds that represent a particular client/brand ontology. Merging these feeds into the RSSOwl reader application allows me to conduct and save refined search queries (from the aggregated data) into a single file that I can then tag and further segment for analysis. This scheme is utilized for my own research needs and has helped me considerably.

    However, I find this RSS mashup scheme kind of clumsy. It requires quite a bit of time to initially organize all of the feeds, and I would like to be able to do further natural language processing on the data, as well as eventually rank the collected list of URLs into some order of media prominence. I don't want to pay the ridiculous Radian6 web analytics fees, when my intuition is telling me that with a bit of elbow grease I can maybe leverage some available resources online to develop a functional small-scale web mining application and get some good intelligence from it. I am now starting to learn a little about computer science - my background is in physical science/statistics - so is my thinking on the right track?

    I guess I am imagining an application that allows me to query in a refined manner: a manner that lets me search for keyword combinations, applying AND/OR operators, selectively focus my queries on particular sources - like a collection of blogs or Twitter, or social networking communities - and then save the results of my queries into a structured format that can then be manipulated and explored. Am I dreaming? I just had to get all of this out. Any bit of advice and insight would be hugely appreciated.

    my best, Mark

    Read the article

  • How to create an AST with ANTLR from a hierarchical key-value syntax

    - by Brabster
    I've been looking at parsing a key-value data format with ANTLR. Pretty straightforward, but the keys represent a hierarchy. A simplified example of my input syntax:

        /a/b/c=2
        /a/b/d/e=3
        /a/b/d/f=4

    In my mind, this represents a tree structured as follows:

        (a (b (= c 2) (d (= e 3) (= f 4))))

    The nearest I can get is to use the following grammar:

        /* Parser Rules */
        start: (component NEWLINE?)* EOF -> (component)*;
        component: FORWARD_SLASH ALPHA_STRING component -> ^(ALPHA_STRING component)
                 | FORWARD_SLASH ALPHA_STRING EQUALS value -> ^(EQUALS ALPHA_STRING value);
        value: ALPHA_STRING;

        /* Lexer Rules */
        NEWLINE : '\r'? '\n';
        ALPHA_STRING : ('a'..'z'|'A'..'Z'|'0'..'9')+;
        EQUALS : '=';
        FORWARD_SLASH : '/';

    Which produces:

        (a (b (= c 2)))
        (a (b (d (= e 3))))
        (a (b (d (= f 4))))

    I'm not sure whether I'm asking too much from a generic tool such as ANTLR here, and this is as close as I can get with this approach. That is, from here I consume the parts of the tree and create the data structure I want by hand.

    So: can I produce the tree structure I want directly from a grammar? If so, how? If not, why not - is it a technical limitation in ANTLR, or is it something more CS-y to do with the type of language involved?
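
    For what it's worth, ANTLR's rewrite rules build a tree per rule invocation; folding the three per-line trees into one shared (a (b ...)) prefix is a cross-invocation transformation, which is why the grammar alone stops where it does. One option is a small post-parse merge over the CommonTree results - a sketch, assuming the ANTLR 3 runtime:

        import org.antlr.runtime.tree.CommonTree;

        static void merge(CommonTree parent, CommonTree node) {
            if (!node.getText().equals("=")) {      // never fold the (= key value) leaves together
                for (int i = 0; i < parent.getChildCount(); i++) {
                    CommonTree child = (CommonTree) parent.getChild(i);
                    if (child.getText().equals(node.getText())) {
                        for (int j = 0; j < node.getChildCount(); j++)
                            merge(child, (CommonTree) node.getChild(j));
                        return;
                    }
                }
            }
            parent.addChild(node);
        }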

    Read the article

  • What can I use to journal writes to the file system?

    - by Dmitry
    Hello, all.

    I need to track all writes to files in order to keep a synchronized version of them in another place (a server, or just another directory - not important). Assume that:

    - all files are located in the same directory;
    - it is fine to create some system files (e.g. SomeFileName.Ext~temp-data);
    - no one has concurrent access to the synced directory; nobody spoils our meta-files or changes the real files before we do the postponed writes (like commits);
    - there is no need to recover "local" changes after a crash; the system can just be rolled back to the "server" state by a simple copy from it;
    - it is important that this is transparent to use (the programmer must just call ordinary fopen(), read(), write()).

    It must be guaranteed that the copy of the files which the "server" has is consistent - that is, the whole set of files as it existed at some moment in time. It may be substantially outdated, but it must be a fair snapshot of all files at some time.

    As I understand it, I should overload the writing logic to collect data in order to send changes to the "server" - for example, by writing to a temporary File~tmp. And then I have to overload reads so the program can read the actual data of the file.

    It would be great if you could suggest an existing library (Java or C++, it is unimportant) or solution (VCS customizing?). Or give hints on how I should write it myself.

    edit: After some reading I have more precise requirements: I need a COW (copy-on-write) wrapper for fopen(), fwrite(), ..., or an interceptor (hook) for WriteFile() and the other FS API system calls. A log-structured file system in userspace would be an alternative too.
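
    On Linux, one transparent way to get the hook without touching the program is an LD_PRELOAD interposer: a shared library whose write() shadows libc's, so ordinary fopen()/fwrite() calls land in your code first. A minimal logging sketch (the COW redirection would replace the fprintf):

        /* hook.c - build: gcc -shared -fPIC hook.c -o hook.so -ldl
         * run:   LD_PRELOAD=./hook.so ./app                        */
        #define _GNU_SOURCE
        #include <dlfcn.h>
        #include <stdio.h>
        #include <unistd.h>

        static ssize_t (*real_write)(int, const void *, size_t);
        static __thread int in_hook;    /* guard against fprintf() re-entering write() */

        ssize_t write(int fd, const void *buf, size_t count) {
            if (!real_write)
                real_write = (ssize_t (*)(int, const void *, size_t)) dlsym(RTLD_NEXT, "write");
            if (!in_hook) {
                in_hook = 1;
                fprintf(stderr, "write(fd=%d, %zu bytes)\n", fd, count);  /* journal entry */
                in_hook = 0;
            }
            return real_write(fd, buf, count);
        }

    On Windows the analogous technique is API hooking of WriteFile() (e.g. via DLL injection); and FUSE is worth a look if a small userspace file system turns out to be acceptable.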

    Read the article

  • Class design when working with DataSets

    - by MC
    If you have to retrieve data from a database, bring this DataSet to the client, and then allow the user to manipulate the data in various ways before updating the database again, what is a good class design for this when the data tables will not have a 1:1 relationship with the class objects?

    Here are some options I came up with:

    1. Just manipulate the DataSet itself on the client and then send it back to the database as-is. This will work, though obviously the code will be very dirty and not well structured.

    2. Same as #1, but wrap the DataSet code in classes. What I mean is that you may have a class that takes a DataSet or a DataTable in its constructor, and then provides public methods and properties to simplify the code. Inside these methods and properties it will be reading or manipulating the DataSet. Updating the database afterwards will be easy because you already have the updated DataSet.

    3. Get rid of the DataSet entirely on the client, convert to objects, then convert back to a DataSet when you need to update the database.

    Are there any good resources where I can find information on this?
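
    To make option 2 concrete, a minimal sketch of a wrapper that fronts a DataTable (the names are illustrative):

        // assumes the table's PrimaryKey is set, so Rows.Find(id) works
        public class NotesView
        {
            private readonly DataTable table;
            public NotesView(DataTable table) { this.table = table; }

            public void Rename(int id, string name)
            {
                table.Rows.Find(id)["Name"] = name;
            }

            public int Count { get { return table.Rows.Count; } }
        }

    The appeal of this middle ground is that the DataSet's built-in change tracking still drives the eventual update, while the rest of the client code talks to a typed surface.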

    Read the article

  • Website: AJAX and Firefox problems. I don't think Firefox likes AJAX..?

    - by DJDonaL3000
    Working on an AJAX website (HTML, CSS, JavaScript, AJAX, PHP, MySQL). I have multiple JavaScript functions which take rows from MySQL, wrap them in HTML tags, and embed them in the page (the usual usage of AJAX).

    THE PROBLEM: Everything is working perfectly, except when I run the site with Firefox (for once it's not Internet Explorer causing the trouble). The site is currently in the developmental stage, so it's offline but running on localhost (WampServer, Apache, Windows XP SP3/Vista/7). All other cross-browser conflicts have been removed, and it works perfectly on all major browsers including IE, Chrome, Opera and Safari, but I get absolutely nothing from the HTTPRequest (AJAX) if the browser is Firefox. All browsers have the latest versions.

    THE CODE: I have a series of JavaScript functions, all of which are structured as follows:

        function getDatay(){
            var a = document.getElementById('item').innerHTML;
            var ajaxRequest;
            try { // browser support code
                // IE7+, Firefox, Chrome, Opera, Safari:
                ajaxRequest = new XMLHttpRequest();
            } catch (e) {
                // IE6, IE5:
                try {
                    ajaxRequest = new ActiveXObject("Msxml2.XMLHTTP");
                } catch (e) {
                    try {
                        ajaxRequest = new ActiveXObject("Microsoft.XMLHTTP");
                    } catch (e) {
                        // something went wrong
                        alert("Your browser is not compatible - Browser Incompatibility Issue.");
                        return false;
                    }
                }
            }
            // create a function that will receive data sent from the server
            ajaxRequest.onreadystatechange = function(){
                if (ajaxRequest.readyState < 4) {
                    document.getElementById('theDiv').innerHTML = 'LOADING...';
                }
                if (ajaxRequest.readyState == 4) {
                    document.getElementById('theDiv').innerHTML = ajaxRequest.responseText;
                }
            };
            // post vars to PHP script and wait for response:
            var url = "01_retrieve_data_7.php";
            url = url + "?a=" + a;
            ajaxRequest.open("POST", url, false); // must be false here to wait for ajaxRequest to complete
            ajaxRequest.send(null);
        }

    My money is on the final five lines of code being the cause of the problem. Any suggestions on how to get Firefox and AJAX working together are most welcome...
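
    The suspicion about those last lines is probably right. With async=false the request is synchronous, and Firefox historically did not fire onreadystatechange for synchronous requests, so the handler that fills theDiv never runs; the other browsers fire it anyway. A sketch of the synchronous fix - handle the response after send() instead:

        ajaxRequest.open("POST", url, false);
        ajaxRequest.send(null);
        // for a synchronous request the response is already complete here
        document.getElementById('theDiv').innerHTML = ajaxRequest.responseText;

    Better still, pass true and keep the onreadystatechange handler; synchronous XHR freezes the UI while it waits.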

    Read the article

  • What are good strategies for organizing a single-class-per-query service layer?

    - by KallDrexx
    Right now my ASP.NET MVC application is structured as Controller - Services - Repositories. The services consist of aggregate-root classes that contain methods. Each method is a specific operation that gets performed, such as retrieving a list of projects, adding a new project, or searching for a project.

    The problem with this is that my service classes are becoming really fat, with a lot of methods. As of right now I am separating methods into categories with #region tags, but this is quickly getting out of control. I can definitely see it becoming hard to determine what functionality already exists and where modifications need to go. Since the methods in the service classes are isolated and don't really interact with each other, they could be more standalone.

    After reading some articles, such as this, I am thinking of following the single-query-per-class model, as it seems like a more organized solution. Instead of trying to figure out which class and method you need to call to perform an operation, you just have to figure out the class.

    My only reservation with the single-query-per-class method is that I need some way to organize the 50+ classes I will end up with. Does anyone have any suggestions for strategies to best organize this type of pattern?
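
    One common shape for this - a sketch rather than a prescription, with illustrative names - is a tiny query interface, with the classes grouped into folders/namespaces by aggregate (e.g. Queries.Projects, Queries.Users) so the namespace does the organizing that #region used to:

        public interface IQuery<TResult>
        {
            TResult Execute();
        }

        // Queries/Projects/GetProjectsForUser.cs
        public class GetProjectsForUser : IQuery<IList<Project>>
        {
            private readonly int userId;
            public GetProjectsForUser(int userId) { this.userId = userId; }

            public IList<Project> Execute()
            {
                // the repository/EF access for this one operation lives here
                return new List<Project>(); // placeholder body
            }
        }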

    Read the article

  • The perfect web UI framework (with Microsoft stack?) - architecture question?

    - by Igorek
    I'm looking for suggestions for the following issue, and I realize there is really not going to be a perfect answer to my question.

    I have a UI built in WinForms.NET (v4.0 framework) with a WCF back end and EF4 model objects that I am looking to port to the web. The UI is not huge, is not super complex, and is structured well. But it is not a super simple system either. I am looking to pick a technology stack for the web front end that will target desktop and, partially, mobile platforms, provide a good development platform to build on, and facilitate code reuse across the UI and back-end tiers.

    I would rather avoid:

    - Custom coding of UI-centric scripts, because they are hard to debug, non-compiled, usually a maintenance nightmare, almost always start to contain business logic, and duplicate some of the logic that the back-end tiers have (especially validation).
    - Custom coding the desktop web and mobile web UIs separately (although I realize that the mobile web UI will likely contain fewer data-entry screens and more reporting screens).
    - Non-.NET technology stacks.

    I would love to:

    - Target the reporting capabilities of the system toward mobile web browsers.
    - Not have to write a single line of script (JavaScript, jQuery, etc.).
    - Utilize a good collection of controls that produces an elegant UI.
    - Use .NET for everything.

    The way I see it right now, I need to re-write this app in Silverlight, utilize a third-party UI framework like Telerik, and re-do the reports UI again for mobile platforms separately. However, I'm rather concerned about the shelf life of Silverlight and the need to deploy a different architecture to deal with mobile platforms. Is there an ASP.NET/MVC/AJAX architecture/framework/library that would allow me to get at the power of .NET without painful (IMHO) client-side scripting, while providing a decent user experience?

    Thank you

    Read the article

  • What are the responsibilities of the data layer?

    - by alimac83
    I'm working on a project where I had to add a data layer to my application. I've always thought that the data layer is purely responsible for CRUD functions, i.e. it shouldn't really contain any logic but should simply retrieve data for the business layer to manipulate. However, I'm a little confused with my project because I'm not sure whether I've structured my app correctly for this scenario.

    Basically, I'm trying to retrieve a list of products from the database that fall within a certain pricing threshold. At the moment I have a function in my data layer that basically returns all products where price > min threshold and price < max threshold. But it got me thinking that maybe this is incorrect. Should the data layer simply return a list of ALL products and then the business logic do the filtering?

    I'm pretty confused over whether the data layer should simply provide methods that allow the business layer to get raw data, or whether it should be responsible for getting filtered data too. If anyone has an article or something explaining this in detail, it'd be very helpful. Thanks
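
    For what it's worth, a parameterized filter is usually considered fair game for the data layer: the business layer owns the decision (which thresholds, and why), while the data layer owns the mechanics of fetching only the matching rows - returning ALL products just to filter in memory doesn't scale. A sketch of the distinction (LINQ-style, names assumed):

        // data layer: mechanics only - no policy about what the thresholds mean
        public IList<Product> GetProductsInPriceRange(decimal min, decimal max)
        {
            return context.Products
                          .Where(p => p.Price > min && p.Price < max)
                          .ToList();
        }

        // business layer: owns the policy
        var affordable = repository.GetProductsInPriceRange(budget.Min, budget.Max);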

    Read the article

  • Project Euler #14 and memoization in Clojure

    - by dbyrne
    As a neophyte Clojurian, it was recommended to me that I go through the Project Euler problems as a way to learn the language. It's definitely a great way to improve your skills and gain confidence. I just finished up my answer to problem #14. It works fine, but to get it running efficiently I had to implement some memoization. I couldn't use the prepackaged memoize function because of the way my code was structured, and I think it was a good experience to roll my own anyway.

    My question is whether there is a good way to encapsulate my cache within the function itself, or if I have to define an external cache like I have done. Also, any tips to make my code more idiomatic would be appreciated.

        (use 'clojure.test)

        (def mem (atom {}))

        (with-test
          (defn chain-length
            ([x] (chain-length x x 0))
            ([start-val x c]
              (if-let [e (last (find @mem x))]
                (let [ret (+ c e)]
                  (swap! mem assoc start-val ret)
                  ret)
                (if (<= x 1)
                  (let [ret (+ c 1)]
                    (swap! mem assoc start-val ret)
                    ret)
                  (if (even? x)
                    (recur start-val (/ x 2) (+ c 1))
                    (recur start-val (+ 1 (* x 3)) (+ c 1)))))))
          (is (= 10 (chain-length 13))))

        (with-test
          (defn longest-chain
            ([] (longest-chain 2 0 0))
            ([c max start-num]
              (if (>= c 1000000)
                start-num
                (let [l (chain-length c)]
                  (if (> l max)
                    (recur (+ 1 c) l c)
                    (recur (+ 1 c) max start-num))))))
          (is (= 837799 (longest-chain))))
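
    On the encapsulation question: a closure over a let-bound atom keeps the cache private to the var, with no external def. A sketch of the same computation in that style (plain recursion rather than recur, which is fine here - Collatz chains under 1,000,000 stay a few hundred steps deep):

        (def chain-length
          (let [mem (atom {})]
            (fn chain-length [x]
              (or (@mem x)
                  (let [ret (if (<= x 1)
                              1
                              (inc (chain-length (if (even? x)
                                                   (/ x 2)
                                                   (inc (* 3 x))))))]
                    (swap! mem assoc x ret)
                    ret)))))

    This version also memoizes every intermediate value on the chain, not just the starting one, which is where most of the speedup for problem #14 comes from.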

    Read the article

  • Address string split in JavaScript

    - by Arasoi
    Ok folks, I have bombed around for a few days trying to find a good solution for this one. What I have is two possible address formats:

        28 Main St Somecity, NY 12345-6789
        Main St Somecity, NY 12345-6789

    What I need to do is split both strings down into an array structured as such:

        address[0] = HouseNumber
        address[1] = Street
        address[2] = City
        address[3] = State
        address[4] = ZipCode

    My major problem is how to account for the lack of a house number without having the whole array shift the data up one:

        address[0] = Street
        address[1] = City
        address[2] = State
        address[3] = ZipCode

    [Edit] For those that are wondering, this is what I am doing at the moment (cleaner version):

        place = response.Placemark[0];
        point = new GLatLng(place.Point.coordinates[1], place.Point.coordinates[0]);
        FCmap.setCenter(point, 12);
        var a = place.address.split(',');
        var e = a[2].split(" ");
        var x = a[0].split(" ");
        var hn = x.filter(function(item, index){ return index == 0; });
        var st = x.filter(function(item, index){ return index != 0; });
        var street = '';
        st.each(function(item, index){ street += item + ' '; });
        results[0] = new Hash({
            FullAddie: place.address,
            HouseNum: hn[0],
            Dir: '',
            Street: street,
            City: a[1],
            State: e[1],
            ZipCode: e[2],
            GPoint: new GMarker(point),
            Lat: place.Point.coordinates[1],
            Lng: place.Point.coordinates[0]
        });
        // End address splitting
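
    One regex can cover both shapes if the house-number group is optional; an empty slot then keeps the array from shifting. A sketch that assumes a single-word city (multi-word cities would need a smarter split):

        function splitAddress(s) {
            var m = s.match(/^(?:(\d+)\s+)?(.+)\s+(\S+),\s*([A-Za-z]{2})\s+(\d{5}(?:-\d{4})?)$/);
            if (!m) return null;
            // m[1] is undefined when there is no house number, so fill slot 0
            // with '' and the rest of the array stays aligned
            return [m[1] || '', m[2], m[3], m[4], m[5]]; // [house, street, city, state, zip]
        }

        splitAddress("28 Main St Somecity, NY 12345-6789"); // ["28", "Main St", "Somecity", "NY", "12345-6789"]
        splitAddress("Main St Somecity, NY 12345-6789");    // ["",   "Main St", "Somecity", "NY", "12345-6789"]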

    Read the article

  • Configuring TeamCity + NUnit unit tests so files can be loaded properly

    - by Dave
    In a nutshell, I have a solution that builds fine in the IDE, and the unit tests all run fine with the NUnit GUI (via the NUnitit VS2008 plugin). However, when I execute my TeamCity build runner, all unit tests that require file access (e.g. tests that run against specific XML files) just throw System.IO.DirectoryNotFoundException. The reason for this is clear: it's looking for the supporting XML files loaded by various unit tests in the wrong folder.

    The way my unit tests are structured looks like this:

        +-- project folder
            +-- unit tests folder
                +-- test.xml
                +-- test.cs
            +-- project file.xaml
            +-- project file.xaml.cs

    All of my projects own their own UnitTests folder, which contains the .cs file and any XML files, XML schemas, etc. that are necessary to run the tests. So when I write my test.cs, I have it look for "test.xml" in the code, because they are in the same folder (actually, I do something like ....\unit tests\test.xml, but that's kind of silly).

    As I said before, the tests run great in NUnit. But that's because the unit tests are part of the project. When running the unit tests from TeamCity, I am executing them against the assemblies that get copied to the main app's output folder. These unit-test XML files should not be copied willy-nilly to the output folder just to make the tests pass.

    Can anyone suggest a better method of organizing my unit tests in each project (which are dependencies for the main app), such that I can execute the unit tests from NUnit and from the TeamCity build runner? The only other option I can come up with is to put the testing XML data in code, rather than loading it from a file. I would rather not do this.
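
    One conventional fix, sketched here: mark the test data as Content so it travels with the build output (into a UnitTests subfolder, mirroring the project), then resolve it against the test assembly's own location instead of the current directory:

        <!-- in the test project's .csproj -->
        <ItemGroup>
          <Content Include="UnitTests\test.xml">
            <CopyToOutputDirectory>PreserveNewest</CopyToOutputDirectory>
          </Content>
        </ItemGroup>

    In the test itself, something like Path.Combine(Path.GetDirectoryName(new Uri(Assembly.GetExecutingAssembly().CodeBase).LocalPath), @"UnitTests\test.xml") then works the same under the NUnit GUI and under TeamCity, because it never depends on where the runner was started from (CodeBase rather than Location, since NUnit shadow-copies assemblies by default).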

    Read the article

  • MVC3 DropDownListFor selected value does not reset if I set the Id

    - by BarryFanta
    I've read a lot of posts on here regarding dropdown selected-value issues (not showing, etc.), but mine is the opposite problem. I want the dropdown to always reset after the view is returned, once a button submits the page through the controller action. So, how I've structured it all works, but is it possible to reset the dropdown list each time? I can't find a way to do it and I've tried a lot of ways, believe me.

    My view:

        @Model.PspSourceModel.PayAccount.PaymentProviderId
        <br />
        @Html.DropDownListFor(
            x => x.PspSourceModel.PayAccount.PaymentProviderId,
            new SelectList(Model.PspSourceModel.PaymentProviders, "Value", "Text", "-- please select --"),
            "-- please select --")

    My controller:

        // I've tried forcing the selected value id here - doesn't affect the dropdown list?
        pspVM.PspSourceModel.PayAccount.PaymentProviderId = 1;
        return (View(pspVM));

    My web page shows: 1 (the id I set in the action), yet the dropdown list still shows id=6 or whatever value was chosen prior to submitting the form. From the questions and answers on SO and the wider web, I gather the dropdown is tied to the id you choose, but how do I override that to reset the dropdown to "please select" each time? Thanks in advance.
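
    This is the classic ModelState trap: after a POST, the HTML helpers read the value from ModelState (the posted form data) before they look at the model, which is why setting the property has no visible effect even while @Model... prints the new value. A sketch of the usual fix - clear the entry first:

        ModelState.Remove("PspSourceModel.PayAccount.PaymentProviderId");
        pspVM.PspSourceModel.PayAccount.PaymentProviderId = 1; // or leave it unset for "-- please select --"
        return (View(pspVM));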

    Read the article

  • Passing a JavaScript object to a web service via jQuery AJAX

    - by kralco626
    I have a web service method that returns an object:

        [WebMethod]
        public List<User> ContractorApprovals()

    I also have a web service method that accepts an object:

        [WebMethod]
        public bool SaveContractor(Object u)

    I make my web service calls via jQuery:

        function ServiceCall(method, parameters, onSucess, onFailure) {
            var parms = "{" + (($.isArray(parameters)) ? parameters.join(',') : parameters) + "}"; // to JSON
            $.ajax({
                type: "POST",
                url: "services/" + method,
                data: parms,
                contentType: "application/json; charset=utf-8",
                dataType: "json",
                success: function(msg) {
                    if (typeof onSucess == 'function' || typeof onSucess == 'object') onSucess(msg.d);
                },
                error: function(msg, err) { $("#dialog-error").dialog('open'); }
            });
        }

    I can call the first one just fine. My onSucess function gets passed a JavaScript object structured exactly like my User object on the service. However, I am now having trouble getting the object back to the server. Since I'm accepting Object as a parameter on the server side, I can't imagine there is an issue there. So I'm thinking something is wrong with the parms on the client side, but I'm not sure what...

    I am doing something to the effect of:

        ServiceCall("AuthorizationManagerWorkManagement.asmx/ContractorApprovals", "",
            function(data, args){ $("#div").data('user', data[0]); }, null);

    then:

        ServiceCall("AuthorizationManagerWorkManagement.asmx/SaveContractor",
            $("#div").data('user'),
            // These also do not work:
            //   "{'u': ' + $("#div").data("user") + '}"
            //   JSON.stringify({u: userObject})
            function(data, args){ alert(data); }, null);

    I know the first service call works; I can get the data. The second one is causing the "onFailure" method to execute rather than "onSuccess". Any ideas?
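
    Two things conspire here: ServiceCall wraps whatever it is given in an extra pair of braces, so a pre-stringified object gets mangled into invalid JSON, and an Object-typed parameter gives the ASMX JSON deserializer a loose dictionary at best rather than a User. A sketch of the straighter path - type the parameter (public bool SaveContractor(User u)) and send one JSON body whose property name matches the parameter name:

        var user = $("#div").data('user');
        $.ajax({
            type: "POST",
            url: "services/AuthorizationManagerWorkManagement.asmx/SaveContractor",
            data: JSON.stringify({ u: user }), // one property per web-method parameter
            contentType: "application/json; charset=utf-8",
            dataType: "json",
            success: function(msg) { alert(msg.d); }
        });

    (JSON.stringify needs json2.js on older browsers.)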

    Read the article
