Search Results

Search found 5492 results on 220 pages for 'git fetch'.


  • Sorting by some field and fetching whole tree from DB

    - by Niaxon
    Hello everyone, I am trying to build a file browser in tree form and have a problem sorting it. I use PHP and MySQL. I've created a mixed (nested set + adjacency) table 'element' with the following fields: element_id, left_key, right_key, level, parent_id, element_name, element_type (enum: 'folder','file'), element_size. Let's not discuss right now whether it would be better to move the information about an element (name, type, size) into another table. The function that scans a specified directory and fills the table works correctly. Note that I add elements to the tree in a specific order: folders first, and only then files. After that I can easily fetch and display the whole table on the page using a simple query:

        SELECT * FROM element WHERE 1=1 ORDER BY left_key

    With the result of that query and another function I can generate the correct HTML (<ul><li>... and so on). Now back to the question (finally, huh?). I am struggling to add sorting functionality. For example, I want to order my result by size. Here I need to keep the whole hierarchy of the tree in mind, plus the rule: folders first, files later. I believe I can do that by generating a recursive query in PHP:

        SELECT * FROM element WHERE parent_id = {$parentId}
        ORDER BY element_type, element_size  -- folders first, then by size (or name, for example)

    After that, for each result that is a folder, I would send another query to get its contents. It is also possible to fetch the whole tree ordered by left_key and then sort it in PHP as an array, but I guess that would be worse :) I wonder if there is a better and more efficient way to do such a thing?
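    One way to get this in a single query is to build a sortable path per row, so children stay grouped under their parent while folders sort before files within each level. A minimal sketch, assuming MySQL 8.0+ (for recursive CTEs) and that root elements have parent_id IS NULL; on older MySQL, the per-level query above or sorting the left_key result in PHP remains the practical route:

        WITH RECURSIVE tree AS (
            SELECT element_id, element_name, element_type, element_size, level,
                   -- '0' prefix forces folders before files; LPAD makes numeric sizes sort as text
                   CAST(CONCAT(IF(element_type = 'folder', '0', '1'),
                               LPAD(element_size, 12, '0'), element_name) AS CHAR(1000)) AS sort_path
            FROM element
            WHERE parent_id IS NULL
            UNION ALL
            SELECT e.element_id, e.element_name, e.element_type, e.element_size, e.level,
                   CONCAT(t.sort_path, '/', IF(e.element_type = 'folder', '0', '1'),
                          LPAD(e.element_size, 12, '0'), e.element_name)
            FROM element e
            JOIN tree t ON e.parent_id = t.element_id
        )
        SELECT * FROM tree ORDER BY sort_path;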


  • How do I make a function in SQL Server that accepts a column of data?

    - by brandon k
    I made the following function in SQL Server 2008 earlier this week. It takes two parameters, uses them to select a column of "detail" records, and returns them as a single varchar list of comma-separated values. Now that I think about it, I would like to take this table- and application-specific function and make it more generic. I am not well-versed in defining SQL functions, as this is my first. How can I change this function to accept a single "column" worth of data, so that I can use it in a more generic way? Instead of calling:

        SELECT ejc_concatFormDetails(formuid, categoryName)

    I would like to make it work like:

        SELECT concatColumnValues(SELECT someColumn FROM SomeTable)

    Here is my function definition:

        FUNCTION [DNet].[ejc_concatFormDetails](@formuid AS int, @category as VARCHAR(75))
        RETURNS VARCHAR(1000) AS
        BEGIN
            DECLARE @returnData VARCHAR(1000)
            DECLARE @currentData VARCHAR(75)
            DECLARE dataCursor CURSOR FAST_FORWARD FOR
                SELECT data FROM DNet.ejc_FormDetails
                WHERE formuid = @formuid AND category = @category
            SET @returnData = ''
            OPEN dataCursor
            FETCH NEXT FROM dataCursor INTO @currentData
            WHILE (@@FETCH_STATUS = 0)
            BEGIN
                SET @returnData = @returnData + ', ' + @currentData
                FETCH NEXT FROM dataCursor INTO @currentData
            END
            CLOSE dataCursor
            DEALLOCATE dataCursor
            RETURN SUBSTRING(@returnData, 3, 1000)
        END

    As you can see, I am selecting the column data within my function and then looping over the results with a cursor to build my comma-separated varchar. How can I alter this to accept a single parameter that is a result set, and then access that result set with a cursor?
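    T-SQL functions cannot take a result set as a parameter directly (short of table-valued parameters, which are typed and less ad hoc). The usual SQL Server 2008 idiom is to skip the cursor and let a correlated subquery do the concatenation. A sketch against the same table:

        -- Concatenate one "column" of values; STUFF strips the leading ', '
        SELECT STUFF((SELECT ', ' + data
                      FROM DNet.ejc_FormDetails
                      WHERE formuid = @formuid AND category = @category
                      FOR XML PATH('')), 1, 2, '');

    The same FOR XML PATH pattern can then be inlined against any table and column, which gives the generic behavior without passing a result set into a function.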


  • How to keep track of call statistics? C++

    - by tf.rz
    I'm working on a project that delivers statistics to the user. I created a class called Dog, and it has several member functions: speak, woof, run, fetch, etc. I want to have a function that spits out how many times each function has been called. I'm also interested in the constructor and destructor calls. I have a header file that declares all the functions, then a separate .cc file that implements them. My question is: is there a way to keep track of how many times each function is called? I have a function called print that will fetch the "statistics" and output them to standard output. I was considering using static integers as part of the class itself, declaring one integer per function to keep track of these things. I believe the compiler will zero-initialize them, and then I'll increment the integers in the .cc functions. I also thought about having static integers as global variables in the .cc file. Which way is easier? Or is there a better way to do this? Any help is greatly appreciated!
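    Static data members are the conventional approach here: they are shared across all instances and zero-initialized before use, so no global variables are needed. A minimal sketch (counter names are hypothetical; not thread-safe without std::atomic):

        // Dog.h
        #include <iostream>

        class Dog {
        public:
            Dog()  { ++ctorCalls; }
            ~Dog() { ++dtorCalls; }
            void fetch() { ++fetchCalls; /* ... real work ... */ }
            static void print();                        // dumps the statistics
        private:
            static int ctorCalls, dtorCalls, fetchCalls;
        };

        // Dog.cc -- each static member needs exactly one out-of-class definition
        int Dog::ctorCalls  = 0;
        int Dog::dtorCalls  = 0;
        int Dog::fetchCalls = 0;

        void Dog::print() {
            std::cout << "ctors: "   << ctorCalls
                      << " dtors: "  << dtorCalls
                      << " fetch: "  << fetchCalls << '\n';
        }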


  • Can I set NHibernate's default "OrderBy" to be "CreatedDate" not "Id"?

    - by Chris F
    This is an oddball question, I figure. Can I get NHibernate to ask SQL to sort data by CreatedDate by default, unless I set an OrderBy in my HQL or Criteria? I'm interested in knowing whether this sort can be accomplished at the DB level, to avoid bringing in LINQ. The reason is that I use GUIDs for IDs, and when I do something like this:

        Sheet sheet = sheetRepository.Get(_someGUID);
        IList<SheetLineItem> lineItems = sheet.LineItems;

    to fetch all of the line items, they come back in whatever arbitrary order SQL returns them, which I figure is by GUID. At some point I'll add ordinals to my line items, but for now I just want to use CreatedDate as the sort criterion. I don't want to be forced to do:

        IList<SheetLineItem> lineItems = sheetLineItemRepository.GetAll(_sheetGUID);

    and then write that method to sort by CreatedDate. I figure if everything were just sorted on CreatedDate by default, that would be fine, unless specifically requested otherwise.
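    NHibernate can push a default sort into the generated SQL via the order-by attribute on the collection mapping, so no LINQ or explicit OrderBy is needed for the common case. A sketch, assuming an hbm.xml bag for LineItems and a CreatedDate column on the line-item table (the column and key names are guesses):

        <bag name="LineItems" order-by="CreatedDate asc">
            <key column="SheetId" />
            <one-to-many class="SheetLineItem" />
        </bag>

    Note that order-by takes raw SQL (column names), not .NET property names, and it applies whenever the collection is loaded lazily or by id.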


  • MySQL stored procedure not found in PHP

    - by kaupov
    Hello, I am having trouble with a MySQL stored procedure that calls itself recursively, invoked from PHP (CakePHP). Calling it, I receive the following error:

        SQL Error: 1305: FUNCTION dbname.GetAdvertCounts does not exist

    The procedure itself is the following:

        delimiter //
        DROP PROCEDURE IF EXISTS GetAdvertCounts//
        CREATE PROCEDURE GetAdvertCounts(IN category_id INT)
        BEGIN
            DECLARE no_more_sub_categories, advert_count INT DEFAULT 0;
            DECLARE sub_cat_id INT;
            DECLARE curr_sub_category CURSOR FOR
                SELECT id FROM categories WHERE parent_id = category_id;
            DECLARE CONTINUE HANDLER FOR NOT FOUND SET no_more_sub_categories = 1;

            SELECT COUNT(*) INTO advert_count FROM adverts WHERE category_id = category_id;

            OPEN curr_sub_category;
            FETCH curr_sub_category INTO sub_cat_id;
            REPEAT
                SELECT advert_count + GetAdvertCounts(sub_cat_id) INTO advert_count;
                FETCH curr_sub_category INTO sub_cat_id;
            UNTIL no_more_sub_categories = 1 END REPEAT;
            CLOSE curr_sub_category;

            SELECT advert_count;
        END //
        delimiter ;

    If I remove or comment out the recursive call, the procedure works. Any idea what I'm missing here? The categories are only two levels deep.
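    The 1305 error is MySQL being literal: GetAdvertCounts is declared as a PROCEDURE, but using it inside a SELECT expression invokes it as a stored FUNCTION, which does not exist. Stored functions cannot be recursive either, so the recursive design would hit a wall anyway. Since the categories are only two levels deep, here is a sketch that avoids recursion entirely; the parameter is renamed because, inside the routine, a bare category_id in the WHERE clause resolves to the parameter rather than the column (which makes the original COUNT(*) filter always true):

        CREATE PROCEDURE GetAdvertCounts(IN p_category_id INT)
        BEGIN
            -- adverts in the category itself plus its direct sub-categories
            SELECT COUNT(*) AS advert_count
            FROM adverts a
            WHERE a.category_id = p_category_id
               OR a.category_id IN (SELECT c.id FROM categories c
                                    WHERE c.parent_id = p_category_id);
        END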


  • Using an object's date (without time) for a table header instead of an object's date and time (iPhone)

    - by billywilliamton
    I've been working on an iPhone project and have run into an issue. Currently, in the table view that displays all the objects, I use headers based on the objects' datePerformed field. The only problem is that my code apparently creates a header containing both the date and the time, so objects are grouped by date and time rather than solely by date as I intended. I'm not sure if it matters, but when an object is created I use a date picker to pick the date, not the time. I was wondering if anyone could give me any suggestions or advice. Here is the code where I set up the fetchedResultsController:

        - (NSFetchedResultsController *)fetchedResultsController {
            if (fetchedResultsController != nil) {
                return fetchedResultsController;
            }

            // Create and configure a fetch request with the Exercise entity.
            NSFetchRequest *fetchRequest = [[NSFetchRequest alloc] init];
            NSEntityDescription *entity = [NSEntityDescription entityForName:@"Exercise"
                                                      inManagedObjectContext:managedObjectContext];
            [fetchRequest setEntity:entity];

            // Create the sort descriptors array using date and name.
            NSSortDescriptor *dateDescriptor = [[NSSortDescriptor alloc] initWithKey:@"datePerformed" ascending:NO];
            NSSortDescriptor *nameDescriptor = [[NSSortDescriptor alloc] initWithKey:@"name" ascending:YES];
            NSArray *sortDescriptors = [[NSArray alloc] initWithObjects:dateDescriptor, nameDescriptor, nil];
            [fetchRequest setSortDescriptors:sortDescriptors];

            // Create and initialize the fetched results controller.
            NSFetchedResultsController *aFetchedResultsController =
                [[NSFetchedResultsController alloc] initWithFetchRequest:fetchRequest
                                                    managedObjectContext:managedObjectContext
                                                      sectionNameKeyPath:@"datePerformed"
                                                               cacheName:@"Root"];
            self.fetchedResultsController = aFetchedResultsController;
            fetchedResultsController.delegate = self;

            // Memory management calls.
            [aFetchedResultsController release];
            [fetchRequest release];
            [dateDescriptor release];
            [nameDescriptor release];
            [sortDescriptors release];

            return fetchedResultsController;
        }

    Here's where I set up the table header properties:

        - (NSString *)tableView:(UITableView *)tableView titleForHeaderInSection:(NSInteger)section {
            // Display the exercises' dates as section headings.
            return [[[fetchedResultsController sections] objectAtIndex:section] name];
        }

    Any suggestions welcome. Thanks for your time. -Billy Williamton
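    Since sectionNameKeyPath is the full datePerformed, every distinct timestamp becomes its own section. One common fix (a sketch; dayPerformed is a hypothetical method added to the Exercise class, and its ordering must stay consistent with the datePerformed sort descriptor) is to section on a derived, date-only key:

        // Exercise (category or subclass) -- date-only key for grouping
        - (NSString *)dayPerformed {
            NSDateFormatter *fmt = [[NSDateFormatter alloc] init];
            [fmt setDateFormat:@"yyyy-MM-dd"];   // drops the time component
            NSString *day = [fmt stringFromDate:self.datePerformed];
            [fmt release];
            return day;
        }

    and then pass @"dayPerformed" as the sectionNameKeyPath. Remember to change or clear the cacheName when the sectioning scheme changes.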


  • Memcached getDelayed alternative implementation

    - by iBobo
    I would like to use getDelayed from the PHP Memcached extension, but I don't think it is implemented in a convenient way. Right now you ask for some keys and then retrieve all of them with fetch() or fetchAll(). But imagine a scenario where I need to retrieve 15 keys used in different parts of the page, which I don't know in advance, though I can ask the various components to give me their lists. What I want is to give the Memcached instance this list (each component would contribute its part) and then, later, when I need the values, retrieve them from the instance, but not all at once: each component would take the one it needs. Basically, if I were to implement this, I would prohibit using getDelayed alone and instead implement a bookGet($keys) method that books keys (and actually calls getDelayed), and redefine get to handle three cases: the key is booked and already retrieved - return the value; the key is booked but not yet retrieved - force the fetch of the booked keys and return the correct value; the key is not booked - do a normal lookup. I want to know if this makes sense, your thoughts on the subject, and whether someone has already implemented this - or maybe PECL Memcached already works this way and the documentation just doesn't explain it correctly.
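    The booking idea maps naturally onto a small wrapper. A sketch (the class is hypothetical, and it batches with getMulti rather than getDelayed/fetchAll - a deliberate swap that keeps the same effect of one round trip for all booked keys while avoiding getDelayed's single pending result set):

        class BookedCache {
            private $memc;
            private $booked = array();   // keys promised by components, not yet fetched
            private $values = array();   // key => value, already materialized

            public function __construct(Memcached $memc) { $this->memc = $memc; }

            public function bookGet(array $keys) {
                $this->booked = array_merge($this->booked, $keys);  // just remember them
            }

            public function get($key) {
                if (array_key_exists($key, $this->values)) {
                    return $this->values[$key];                     // case 1: booked and retrieved
                }
                if (in_array($key, $this->booked)) {                // case 2: booked, not yet retrieved
                    $fetched = $this->memc->getMulti($this->booked);
                    $this->values += ($fetched !== false) ? $fetched : array();
                    $this->booked = array();
                    if (array_key_exists($key, $this->values)) {
                        return $this->values[$key];
                    }
                }
                return $this->memc->get($key);                      // case 3: not booked, normal lookup
            }
        }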


  • jQuery Mobile: How to run a callback-function before $.mobile.changePage?

    - by Jugi84
    I'm setting up a mobile app with jQuery Mobile and PhoneGap. What I want is that when I click the link with id "news_link", jQuery fetches data from the database and puts it into jQM's collapsible set. I use $.mobile.changePage inside callback functions because I want to wait until the data has been fetched and put inside the container before the code changes the page. Here's the code:

        $(function() {
            $('#news_link').click(function() {
                loadNewsFromDB(function() {
                    $.mobile.changePage($("#news"), { transition: "slideup" });
                });
            });
        });

    loadNewsFromDB is a PhoneGap SQLite database function that fetches the data from the database via further callback functions. The contents are put into the desired container with the .html() function:

        function loadNewsFromDB() {
            // some code here
            theDB.transaction(function(tx) {
                tx.executeSql(sqlStr, [], onLoadNewsSuccess, onLoadNewsError);
            }, onTxError, onTxSuccess);
        }

        function onLoadNewsSuccess(tx, results) {
            // some code here
            $("#newsSet").html(htmlCodeAndContentToPut);
        }

    The HTML uses jQM's basic UI modules:

        <!-- some html code here -->
        <div data-role="page" id="news">
            <div data-role="content">
                <div id="newsSet" data-role="collapsible-set" data-mini="true">

    From debugging I know that the code works very well until it arrives at the $.mobile.changePage call; somehow it doesn't change the page. I've checked that the function works well on other occasions, but not inside these callbacks. I would be very grateful if someone could give me a hand: am I not seeing something obviously wrong, or is it something more specific? What other workarounds are there for fetching the data from SQLite, waiting for the data to be fetched, putting the data into the container, and then changing the page?


  • IDENTITY_INSERT ON inside of cursor does not allow inserted id

    - by Mac
    I am trying to set some IDs for a bunch of rows in a database where the id column is an identity. I've created a cursor to loop through the rows and update the IDs with incrementing negative numbers (-1, -2, -3, etc.). When I updated just one row with IDENTITY_INSERT turned on it worked fine, but as soon as I try to use it in a cursor, it throws the following error:

        Msg 8102, Level 16, State 1, Line 22
        Cannot update identity column 'myRowID'.

        DECLARE @MinId INT;
        SET @MinId = (SELECT MIN(myRowId) FROM myTable) - 1;

        DECLARE myCursor CURSOR FOR
            SELECT myRowId FROM dbo.myTable WHERE myRowId > 17095
        OPEN myCursor

        DECLARE @myRowId INT
        FETCH NEXT FROM myCursor INTO @myRowId
        WHILE (@@FETCH_STATUS <> -1)
        BEGIN
            SET IDENTITY_INSERT dbo.myTable ON;
            --UPDATE dbo.myTable
            --SET myRowId = @MinId
            --WHERE myRowId = @myRowId;
            PRINT (N'ID: ' + CAST(@myRowId AS VARCHAR(10)) + N' NewID: ' + CAST(@MinId AS VARCHAR(4)));
            SET @MinId = @MinId - 1;
            FETCH NEXT FROM myCursor INTO @myRowId
        END

        CLOSE myCursor
        DEALLOCATE myCursor
        GO
        SET IDENTITY_INSERT dbo.myTable OFF;
        GO

    Does anyone know what I'm doing wrong?
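    Msg 8102 is not really about the cursor: IDENTITY_INSERT only permits explicit values on INSERT, and an identity column can never be the target of an UPDATE, inside a cursor or not. The usual workaround is to re-insert the rows with the desired explicit IDs and delete the originals, which needs no cursor at all. A sketch, with col1 and col2 standing in for the table's real data columns (and assuming nothing references myRowId via foreign keys):

        SET IDENTITY_INSERT dbo.myTable ON;

        -- recreate each row under a new negative id (-1, -2, -3, ...);
        -- adjust the offset if numbering must continue below the current minimum
        INSERT INTO dbo.myTable (myRowId, col1, col2)
        SELECT -ROW_NUMBER() OVER (ORDER BY myRowId), col1, col2
        FROM dbo.myTable
        WHERE myRowId > 17095;

        DELETE FROM dbo.myTable WHERE myRowId > 17095;

        SET IDENTITY_INSERT dbo.myTable OFF;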


  • Which DVCS would work best on Windows for my scenario?

    - by PoorLuzer
    At work I use ClearCase and SourceSafe, but I have found some time to code for myself en route, thanks to a disposable laptop. I wish I had a lightweight VCS on that system with which I could make changes to my code during the commute and then push/grab them from my Linux systems. I use git on my home system, but I can't really get it working on Windows, and I don't want all that cygwin hackery. If it does not run natively on Windows, it just won't do. What have you tried on your Windows systems? Something that YOU actually use. The big player at the moment seems to be Mercurial? What would be best for a one (or maybe two) man team? I just need to maintain: versioned copies of source code, with checking in and out as unobtrusive as possible. I am looking for a multiple-undo kind of feature (like that in an Emacs buffer), but persistent. I really like the way git keeps track of lines moving between files in a source code set. I should be able to move parts or subtrees of the source tree (each subtree is a module/plugin of the main software I am building) to an archival system, either completely or partially, and restore them from the archive as and when required, and the system should track changes to this tree as well. I actually want to experiment with my code as much as possible without manually keeping track of what I modified and what I need to undo once I try out some idea, so that I am back where I want to continue from. Notes: a similar topic came up a year ago: http://stackoverflow.com/questions/4670/dvcs-choices-whats-good-for-windows. I hope things have changed, and I really want people to share their own real-life experiences - not something they recommend without using it, or merely think will work.


  • Hibernate not loading associated object

    - by Noor
    Hi, I am trying to load a Hibernate object, ForumMessage, but it contains another object, Users, and the Users object is not being loaded. My ForumMessage mapping file:

        <?xml version="1.0"?>
        <!DOCTYPE hibernate-mapping PUBLIC "-//Hibernate/Hibernate Mapping DTD 3.0//EN"
                "http://hibernate.sourceforge.net/hibernate-mapping-3.0.dtd">
        <!-- Generated Jan 4, 2011 10:10:29 AM by Hibernate Tools 3.4.0.Beta1 -->
        <hibernate-mapping>
            <class name="com.BiddingSystem.Models.ForumMessage" table="FORUMMESSAGE">
                <id name="ForumMessageId" type="long">
                    <column name="FORUMMESSAGEID" />
                    <generator class="native" />
                </id>
                <property name="ForumMessage" type="java.lang.String">
                    <column name="FORUMMESSAGE" />
                </property>
                <many-to-one name="User" class="com.BiddingSystem.Models.Users" fetch="join">
                    <column name="UserId" />
                </many-to-one>
                <property name="DatePosted" type="java.util.Date">
                    <column name="DATEPOSTED" />
                </property>
                <many-to-one name="Topic" class="com.BiddingSystem.Models.ForumTopic" fetch="join">
                    <column name="TopicId" />
                </many-to-one>
            </class>
        </hibernate-mapping>

    and I am using the following code:

        Session session = gileadHibernateUtil.getSessionFactory().openSession();
        SQL = "from ForumMessage";
        System.out.println(SQL);
        Query query = session.createQuery(SQL);
        System.out.println(query.list().size());
        return new LinkedList<ForumMessage>(query.list());
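    One thing worth knowing here: the fetch="join" setting in the mapping only applies when loading by identifier (session.get/load); HQL queries ignore it, so the associations come back as uninitialized proxies. The join has to be requested in the query string itself, e.g. (a sketch of the HQL only):

        from ForumMessage m left join fetch m.User left join fetch m.Topic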


  • Loading a remote page into the DOM with JavaScript

    - by scoobydoo
    I am trying to write a web widget that will allow users to display customized information (from my website) in their own web pages. The mechanism I want to use for the widget is JavaScript. Basically, I want the end user to copy something like this into their HTML page to get my widget displayed:

        <script type="text/javascript">
            /* javascript here to fetch page from remote url and insert into DOM */
        </script>

    I have two questions. 1. How do I write JavaScript code to fetch the page from the remote URL? Ideally this would be plain JavaScript (i.e. not using jQuery etc., since I don't want to force the user to load third-party scripts that may conflict with other scripts on their page). 2. The page I am fetching contains inline JavaScript, which gets executed in a body.onLoad event, as well as other functions used in response to user actions. My sub-questions are: i) will the body.onLoad event be triggered for the retrieved document? ii) If the retrieved page is dumped directly into the DOM, the document will contain two <body> sections, which is no longer valid (X)HTML; however, I need the body.onLoad event to be triggered for the page to be set up correctly, and I also need the other functions in the retrieved page so it can respond to user interaction. Any suggestions/tips on how I can solve these problems?
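    Two facts shape the answer: script elements injected via innerHTML are not executed, and the host page's body.onLoad has usually already fired (and will not fire again) by the time the fragment arrives, so the widget must expose an explicit init function rather than rely on either. A sketch, assuming the server returns an HTML fragment (not a full page with its own <body>) from a hypothetical endpoint, and that same-origin or CORS rules permit the request (cross-domain widgets often fall back to document.write or JSONP instead):

        (function () {
            var xhr = new XMLHttpRequest();
            xhr.open('GET', 'http://example.com/widget-fragment', true);  // hypothetical URL
            xhr.onreadystatechange = function () {
                if (xhr.readyState === 4 && xhr.status === 200) {
                    var host = document.getElementById('my-widget');      // placeholder div in the host page
                    host.innerHTML = xhr.responseText;                    // inline <script> here will NOT run
                    if (window.myWidgetInit) {
                        myWidgetInit(host);   // hypothetical: replaces the fragment's body.onLoad setup
                    }
                }
            };
            xhr.send(null);
        })();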


  • I need help with a SQL query: fetching an entry, its most recent revision, and its fields

    - by Tigger ate my dad
    Hi there, I'm building a CMS for my own needs and have finished planning my database layout. Basically, I am abstracting all possible data models into "sections" and keeping all entries in one table. Database diagram: I have yet to be allowed to post images, so here is a link to a diagram of my database. Entries (section_entries) are children of their section (sections). I save all edits to entries as new revisions (section_entry_revisions), and I also track revisions of the sections themselves (section_revisions), in order to match the values of an entry revision to the section fields that existed when that entry revision was made. Section revisions can have a number of fields (section_revision_fields) that define the attributes of entries in the section. There is a many-to-many relationship between the fields (section_revision_fields) and the entry revisions (section_entry_revisions) that stores the values of the attributes defined by the section revision. Feel free to ask questions if the diagram is confusing. Now, this is the most complex SQL I've ever worked with, and the task of fetching my data is a little daunting. What I want help with is fetching an entry when the only known variables are section_id and section_entry_id. The query should fetch the most recent revision of that entry, plus the section-revision row corresponding to section_revision_id in the section_entry_revisions table. It should also fetch the values of the fields in that section revision. I was hoping for a query result with as many rows as there are fields in the section; each row would contain the information of the entry and the section, plus the information for one of the fields (i.e. each row corresponding to a field and its value). I tried to explain this the best I could; again, feel free to ask questions if my description is somehow lacking. I hope someone is up for the challenge. Best regards. :-)
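    Without the diagram the exact keys are guesswork, but the shape of the query is fairly standard: pin down the latest entry revision in a subquery, then fan out to one row per field through the value table. A sketch with hypothetical join and column names (the many-to-many value table is assumed to be called section_entry_revision_values):

        SELECT e.id    AS entry_id,
               er.id   AS entry_revision_id,
               sr.id   AS section_revision_id,
               f.name  AS field_name,
               v.value AS field_value
        FROM section_entries e
        JOIN section_entry_revisions er ON er.section_entry_id = e.id
        JOIN section_revisions sr       ON sr.id = er.section_revision_id
        JOIN section_revision_fields f  ON f.section_revision_id = sr.id
        LEFT JOIN section_entry_revision_values v
               ON v.field_id = f.id AND v.entry_revision_id = er.id
        WHERE e.section_id = ?
          AND e.id = ?
          AND er.id = (SELECT MAX(er2.id)               -- latest revision of this entry
                       FROM section_entry_revisions er2
                       WHERE er2.section_entry_id = e.id);

    The LEFT JOIN keeps fields that have no stored value yet; swap MAX(id) for MAX(created_at) if revision order is tracked by timestamp instead.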


  • Maven exec bash script and save output as property

    - by djechlin
    I'm wondering if there is a Maven plugin that runs a bash script and saves its output into a property. My actual use case is to get the git source version. I found one plugin available online, but it didn't look well tested, and it occurred to me that a plugin as simple as the one in the title of this post is all I need. The plugin would look something like:

        <plugin>
            <artifactId>maven-run-script-plugin</artifactId>   <!-- hypothetical -->
            <phase>process-resources</phase>  <!-- not sure which phase is most intelligent -->
            <configuration>
                <script>git rev-parse HEAD</script>  <!-- must run from the build directory -->
                <targetProperty>properties.gitVersion</targetProperty>
            </configuration>
        </plugin>

    Of course, it is necessary to make sure this happens before the property is needed; in my case I want to use the property to process a source file.
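    One real option in this space (a sketch; the version and phase shown are assumptions) is to run a line of Groovy via the GMaven plugin and poke the result into a project property, which avoids shelling out to bash entirely:

        <plugin>
            <groupId>org.codehaus.gmaven</groupId>
            <artifactId>gmaven-plugin</artifactId>
            <version>1.3</version>
            <executions>
                <execution>
                    <phase>initialize</phase>
                    <goals><goal>execute</goal></goals>
                    <configuration>
                        <source>
                            def rev = "git rev-parse HEAD".execute(null, project.basedir).text.trim()
                            project.properties.setProperty('gitVersion', rev)
                        </source>
                    </configuration>
                </execution>
            </executions>
        </plugin>

    After this runs, ${gitVersion} is available to later phases (e.g. resource filtering). The buildnumber-maven-plugin covers a similar need for SCM revisions.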


  • num_rows is 0 when it should be >0 in PHP mysqli code

    - by jpporterVA
    My num_rows is coming back as 0, and I've tried calling it several ways, but I'm stuck. Here is my code:

        $conn = new mysqli($dbserver, "dbuser", "dbpass", $dbname);

        // get the data
        $sql = 'SELECT AT.activityName, AT.createdOn
                FROM userActivity UA, users U, activityType AT
                WHERE U.userId = UA.userId AND AT.activityType = UA.activityType AND U.username = ?
                ORDER BY AT.createdOn';
        $stmt = $conn->stmt_init();
        $stmt->prepare($sql);
        $stmt->bind_param('s', $requestedUsername);
        $stmt->bind_result($activityName, $createdOn);
        $stmt->execute();

        // display the data
        $numrows = $stmt->num_rows;
        $result = array("user activity report for: " . $requestedUsername . " with " . $numrows . " rows:");
        $result[] = "Created On --- Activity Name";
        while ($stmt->fetch()) {
            $msg = " " . $createdOn . " --- " . $activityName . " ";
            $result[] = $msg;
        }
        $stmt->close();

    There are multiple rows found, and the fetch loop processes them just fine. Any suggestions on what will enable me to get the number of rows returned by the query? Suggestions are much appreciated. Thanks in advance.
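    For what it's worth, mysqli_stmt::num_rows is documented to be meaningful only after the result set has been buffered client-side; until then it reports 0, even though fetch() still streams the rows. A sketch of the likely fix, slotting in between execute() and the count in the code above:

        $stmt->execute();
        $stmt->store_result();        // buffer the full result set client-side
        $numrows = $stmt->num_rows;   // now reflects the real row count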


  • Hibernate many-to-many relationship

    - by Capitan
    I have two mapped types, related many-to-many:

        @Entity
        @Table(name = "students")
        public class Student {
            ...
            @ManyToMany(fetch = FetchType.EAGER)
            @JoinTable(
                name = "students2courses",
                joinColumns = { @JoinColumn(name = "student_id", referencedColumnName = "_id") },
                inverseJoinColumns = { @JoinColumn(name = "course_id", referencedColumnName = "_id") })
            public Set<Course> getCourses() {
                return courses;
            }
            public void setCourses(Set<Course> courses) {
                this.courses = courses;
            }
            ...
        }

        @Entity
        @Table(name = "courses")
        public class Course {
            ...
            @ManyToMany(fetch = FetchType.EAGER, mappedBy = "courses")
            public Set<Student> getStudents() {
                return students;
            }
            public void setStudents(Set<Student> students) {
                this.students = students;
            }
            ...
        }

    But when I update or delete a Course entity, no records are created or deleted in the students2courses table (with the Student entity, updating and deleting work as expected). So I wrote an abstract class HibObject:

        public abstract class HibObject {
            public String getRemoveMTMQuery() {
                return null;
            }
        }

    which is inherited by Student and Course. In the DAO I added this code (to the delete() method):

        String query = obj.getRemoveMTMQuery();
        if (query != null) {
            session.createSQLQuery(query).executeUpdate();
        }

    and I overrode the getRemoveMTMQuery() method for Course:

        @Override
        @Transient
        public String getRemoveMTMQuery() {
            return "delete from students2courses where course_id = " + id + ";";
        }

    Now it works, but I think it's bad code. Is there a better way to solve this problem?
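    The underlying rule: in a bidirectional many-to-many, only the owning side (here Student, which declares the @JoinTable) is consulted when Hibernate writes the join table; the mappedBy side is ignored. So changes made only through Course.getStudents() never touch students2courses. A sketch of the usual fix, assuming the mapping above, which makes the raw SQL unnecessary:

        // detach the course from every student via the OWNING side first
        for (Student s : new HashSet<Student>(course.getStudents())) {
            s.getCourses().remove(course);   // this is what actually deletes join rows
        }
        session.delete(course);              // now safe: no orphaned students2courses rows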


  • Is fetching data from database a get-method thing?

    - by theva
    I have a small class that I call Viewer. This class is supposed to view the proper layout of each page, or something like that. I have a method called getFirstPage; when called, the user of this method gets the setting for which page is currently set as the first page. I have some code here. I think it works, but I am not really sure that I have done it the right way:

        class Viewer {
            private $db;
            private $user;
            private $firstPage;

            function __construct($db, $user) {
                $this->db = $db;
                if (isset($user)) {
                    $this->user = $user;
                } else {
                    $this->user = 'default';
                }
            }

            function getFirstPage() {
                // note: uses $this->db / $this->user, and no quotes around the :user placeholder
                $std = $this->db->prepare("SELECT firstPage FROM settings WHERE user = :user");
                $std->execute(array(':user' => $this->user));
                $result = $std->fetch();
                $this->firstPage = $result['firstPage'];
                return $this->firstPage;
            }
        }

    My get method fetches the setting from the database (so far so good?). The problem is that I then have to use this get method to set the private variable firstPage. It seems like I should have a set method to do this, but I cannot really have a set method that just fetches a setting from the database, right? Because the user of this object should be able to assume that there is already a setting defined in the object. How should I do this?
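    A common answer to this design question is lazy initialization: the getter both fetches and caches, so there is no public setter at all, and repeat calls skip the database. A sketch of just the getter:

        function getFirstPage() {
            if ($this->firstPage === null) {   // first call: load and cache
                $std = $this->db->prepare("SELECT firstPage FROM settings WHERE user = :user");
                $std->execute(array(':user' => $this->user));
                $row = $std->fetch();
                $this->firstPage = $row['firstPage'];
            }
            return $this->firstPage;           // later calls: served from the object
        }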


  • No result when Rally.data.WsapiDataStore lacks permissions

    - by user1195996
    I'm calling Ext.create('Rally.data.WsapiDataStore', params) and looking for results with the load event. I'm requesting a number of objects across programs that the user may or may not have read permission for. This works fine for queries where the user has permissions. But in the case where the user does not have permission, and presumably gets zero results back, the load event does not seem to fire at all. I would expect it to fire with the unsuccessful flag, or else to return with empty results. Since I don't know that the request has failed, my program waits and waits. How can I tell that a request has failed because of security? By the way, looking at the network stats, I believe all my requests get a "200 OK" status back. Here is the method I use to create the various data stores:

        _createDataStore: function(params) {
            this.openRequests++;
            var createParams = {
                model: params.type,
                autoLoad: true,
                // So I can later determine which query type it is, and which program
                requestType: params.requestType == undefined ? params.type : params.requestType,
                program: this.program,
                listeners: {
                    load: this._onDataLoaded,
                    scope: this
                },
                filters: params.filters,
                pageSize: params.pageSize,
                fetch: params.fetch,
                context: {
                    project: this.project,
                    projectScopeUp: false,
                    projectScopeDown: true
                },
                pageSize: 1 // We only need the count (note: this duplicate key silently overrides the pageSize set above)
            };
            console.log('_createDataStore', this.program, createParams.requestType);
            Ext.create('Rally.data.WsapiDataStore', createParams);
        },

    And here is the _onDataLoaded method:

        _onDataLoaded: function(store, data, successB) {
            console.log('_onDataLoaded', this.program, successB);
            ...

    I only see this function called for those queries for which the account has permissions.


  • How should I solve this MySQL problem (PHP)? (Beginner)

    - by Camran
    I have several tables in a MySQL database. I have a classifieds website, and at the bottom I display the user's last visited classifieds. I do this by storing the IDs of the ads in an array in a cookie. Now, my DB is made up roughly like this:

        Main table:  // global information; these fields are filled in every record, never blank
            id
            price
            category
            seller

        Item table:  // descriptive info about what's for sale
            id
            ad_id (FK)  // same as id in the main table
            color
            size
            mileage
            etc.

    My problem is that I need to know what category the ad is in, in order to query MySQL for the right information. So I need two variables, but the cookie only stores one (id). Of course, I could make two queries: the first just matches the id against the main table and fetches the category, then the second fetches all other info from the right table. Here is an example, if the category were vehicles:

        SELECT * FROM main_table, vehicles_table
        WHERE main_table.id = $id_from_cookie AND main_table.id = vehicles_table.ad_id

    As you can see, I need the category to know which table to check, right? But I think there must be a smarter way, like fetching everything in one single query using only one variable (the id from the cookie). How should I do this? Understand? Let me know if you need more input. Thanks
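    One way to keep this to a single query on just the cookie id is to LEFT JOIN every category table at once; only the ad's real category table matches, the others come back as NULLs, and the category column tells you which block of columns to read. A sketch, with furniture_table and the per-category columns as hypothetical examples:

        SELECT m.*, v.mileage, v.color AS vehicle_color, f.material
        FROM main_table m
        LEFT JOIN vehicles_table  v ON v.ad_id = m.id
        LEFT JOIN furniture_table f ON f.ad_id = m.id
        WHERE m.id = ?;  -- the id from the cookie

    This stays reasonable for a handful of category tables; with many categories, the two-query version (category lookup, then one targeted join) is often the cleaner choice.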


  • Passing a list of items from the controller/model to a variable in JavaScript - autocomplete

    - by newbie_developer
    I have a method in NamesModel which fetches all the names and returns them as a list:

        public static List<NamesModel> GetAllNames()
        {
            List<NamesModel> names = new List<NamesModel>();
            //
            // code to fetch records
            //
            return names;
        }

    In my controller:

        public ActionResult Index()
        {
            NamesModel model = new NamesModel();
            model.GetAllNames();
            return View(model);
        }

    In the view, I've got a textbox:

        @Html.TextBox("search-name")

    Now in my JavaScript I want to fetch all the names into a variable, either from the model (from the method) or from the controller, for example:

        <script type="text/javascript">
            $(function () {
                var names = ...........
                $(document).ready(function () {
                    $('#search-name').autocomplete({
                        source: names
                    });
                });
            });
        </script>

    If I hardcode the names it works, but I want to use the names stored in the DB. Is that possible? Hardcoding example:

        var names = ["abc", "xyz"];
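    One common approach is to serialize the list into the page when the view renders. A sketch, assuming the names are passed as a List<string> (note that GetAllNames is static, so the model.GetAllNames() call above discards its result; a Name property on NamesModel is assumed here):

        // Controller (requires using System.Linq;)
        public ActionResult Index()
        {
            List<string> names = NamesModel.GetAllNames().Select(n => n.Name).ToList();
            return View(names);
        }

        @* View, typed to the list *@
        @model List<string>
        <script type="text/javascript">
            $(function () {
                var names = @Html.Raw(Json.Encode(Model));  // renders e.g. ["abc","xyz"]
                $('#search-name').autocomplete({ source: names });
            });
        </script>

    For large lists, the usual alternative is a JsonResult action used as autocomplete's remote source, so the browser fetches matches on demand.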


  • OIM 11g notification framework

    - by Rajesh G Kumar
    OIM 11g introduces an improved, template-based notification framework. The new release removes the limitation of sending only text-based (out-of-the-box) emails and adds HTML support. It provides built-in templates for events like 'Reset Password', 'Create User Self Service', 'User Deleted', etc., and new APIs to support custom templates for sending notifications out of OIM. The framework is based on events, notification templates, and template resolvers, defined as follows:

    Ø Events are defined as XML files and imported into the MDS database in order to make a notification event available for use.

    Ø Notification templates are created using the OIM advanced administration console. A template contains the text and the substitution 'variables' that will be replaced with the data provided by the template resolver. Templates support internationalization and can be defined as HTML or as simple text.

    Ø A template resolver is a Java class responsible for providing attributes and data at design time and at runtime. It must be deployed following the OIM plug-in framework. At design time, resolver data lets the end user design the notification template with the available entity variables; at runtime, the resolver provides the values that replace the designed variables in the message shown to recipients.

    Steps to define custom notifications in OIM 11g:

    1. Define the notification event
    2. Create the custom template resolver class
    3. Create the template with the notification contents to be sent to recipients
    4. Create event triggering spots in OIM

    1. Notification Event metadata

    The notification event is defined as an XML file which needs to be imported into the MDS database. An event file must be compliant with the schema defined by the notification engine, NotificationEvent.xsd. The event file contains basic information about the event. XSD location in the MDS database: "/metadata/iam-features-notification/NotificationEvent.xsd". The schema file can be viewed by exporting it from MDS using the weblogicExportMetadata.sh script. Sample notification event metadata definition:

        1: <?xml version="1.0" encoding="UTF-8"?>
        2: <Events xmlns:xsi=http://www.w3.org/2001/XMLSchema-instance xsi:noNamespaceSchemaLocation="../../../metadata/NotificationEvent.xsd">
        3: <EventType name="Sample Notification">
        4: <StaticData>
        5: <Attribute DataType="X2-Entity" EntityName="User" Name="Granted User"/>
        6: </StaticData>
        7: <Resolver class="com.iam.oim.demo.notification.DemoNotificationResolver">
        8: <Param DataType="91-Entity" EntityName="Resource" Name="ResourceInfo"/>
        9: </Resolver>
        10: </EventType>
        11: </Events>

    Line# Description:

    1. XML file notation tag.
    2. Events is the root tag.
    3. The EventType tag declares a unique event name which will be available for template design.
    4. The StaticData element lists parameters that are not data dependent. In other words, this element defines the static data to be displayed when the notification is configured. An example of static data is the User entity, which does not depend on any other data and has the same set of attributes for all event instances and notification templates. The available attributes are used as substitution tokens in the template.
    5. The Attribute tag is a child of StaticData, declaring the entity and its data type with a unique reference name. The User entity is the most commonly used entity as StaticData.
    6. StaticData closing tag.
    7. The Resolver tag defines the resolver class. A resolver class must be defined for each notification. It defines which parameters are available on the notification creation screen and how those parameters are replaced when the notification is sent. The resolver class resolves the data dynamically at runtime and displays the attributes in the UI.
    8. The Param DataType element lists parameters that are data dependent. An example of a data-dependent (dynamic) entity is a resource object, which the user can select at runtime. A notification template is configured for the resource object; corresponding to the resource object field, a lookup is displayed in the UI. When a user selects the event, a call goes to the resolver class to fetch the fields displayed in the Available Data list, from which the user can select the attribute to be used in the template. Param is a child tag declaring the entity and its data type with a unique reference name.
    9. Resolver closing tag.
    10. EventType closing tag.
    11. Events closing tag.

    Note: DataType needs to be declared as "X2-Entity" for the User entity and "91-Entity" for Resource or Organization entities. The dynamic entities supported for lookup are user, resource, and organization. Once the notification event metadata is defined, it needs to be imported into the MDS database. The fully qualified resolver class name must be defined in the XML, but the class does not need to be loaded into OIM yet (it can be loaded later).

    2. Coding the notification resolver

    Every event owner has to provide a resolver class which resolves the data dynamically at runtime. A custom resolver class must implement the interface oracle.iam.notification.impl.NotificationEventResolver and override its two methods with the actual implementation:

    1. public List<NotificationAttribute> getAvailableData(String eventType, Map<String, Object> params);

    This API returns the list of available data variables. These variables are shown in the UI while creating/modifying templates, letting the user embed them as tokens in the template's messages; the tokens are replaced by values supplied by the resolver at runtime. The parameter "eventType" specifies the event name for which the template is to be read. The parameter "params" is a map containing the entity name and the corresponding value for which available data is to be fetched. Sample code snippet:

        List<NotificationAttribute> list = new ArrayList<NotificationAttribute>();
        long objKey = (Long) params.get("resource");
        // Form field details based on resource object key
        HashMap<String, Object> formFieldDetail = getObjectFormName(objKey);
        for (Iterator<?> itrd = formFieldDetail.entrySet().iterator(); itrd.hasNext(); ) {
            NotificationAttribute availableData = new NotificationAttribute();
            Map.Entry formDetailEntrySet = (Entry<?, ?>) itrd.next();
            String fieldLabel = (String) formDetailEntrySet.getValue();
            availableData.setName(fieldLabel);
            list.add(availableData);
        }
        return list;

    2. public HashMap<String, Object> getReplacedData(String eventType, Map<String, Object> params);

    This API returns the resolved values of the variables present on the template at runtime, when the notification is being sent. The parameter "eventType" specifies the event name for which the template is to be read. The parameter "params" is a map with the base values, such as usr_key and obj_key, required by the resolver implementation to resolve the rest of the variables in the template. Sample code snippet:

        HashMap<String, Object> resolvedData = new HashMap<String, Object>();
        String firstName = getUserFirstname(params.get("usr_key"));
        resolvedData.put("fname", firstName);
        String lastName = getUserLastName(params.get("usr_key"));
        resolvedData.put("lname", lastName);
        resolvedData.put("count", "1 million");
        return resolvedData;

    This code must be deployed per the OIM plug-in framework. The XML file defining the plug-in is as below:

        <?xml version="1.0" encoding="UTF-8"?>
        <oimplugins xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
            <plugins pluginpoint="oracle.iam.notification.impl.NotificationEventResolver">
                <plugin pluginclass="com.iam.oim.demo.notification.DemoNotificationResolver" version="1.0" name="Sample Notification Resolver"/>
            </plugins>
        </oimplugins>

    3. Defining the template

    To create a notification template: Log in to the Oracle Identity Administration. Click the System Management tab and then click the Notification tab. From the Actions list on the left pane, select Create. On the Create page, enter values for the following fields under the Template Information section: Template Name: Demo template; Description Text: Demo template.

    Under the Event Details section, perform the following. From the Available Event list, select the event for which the notification template is to be created. Depending on your selection, other fields are displayed in the Event Details section. Note that the Sample Notification event created in the previous step is used as the notification event. The contents of the Available Data drop-down are based on the event XML's StaticData tag; the drop-down lists all the attributes of the entities defined in that tag. Once you select an element in the drop-down, it shows up in the Selected Data text field; copy it and paste it into the message subject or message body, prefixed with the $ symbol. For example, if the list has an attribute First_Name, the message body refers to it as $First_Name, which the resolver parses and replaces with the actual value at runtime.

    In the Resource field, select a resource from the lookup. This is the dynamic data defined by the Param DataType element in the XML definition. Based on the selected resource, the resolver's getAvailableData method is called to fetch the resource object attribute details, if the method is overridden with the required implementation. In this scenario, Map<String, Object> params is populated with the object key as value and "resource" as key; this is the only input provided to the resolver at design time. You need to implement the further logic to fetch the object attribute details that populate the Available Data list. The list strings must not contain spaces; if an object attribute name has a space, implement logic to replace the space with '_' before populating the list (e.g. "First Name" becomes "First_Name"), since spaces are not supported when the token is parsed and replaced at runtime. Note that Available Data and Selected Data are used only in the definition of substitution tokens; they do not define the final data sent in the notification. OIM invokes the resolver class to get the data and make the substitutions.

    Under the Locale Information section, enter values in the following fields. To specify a form of encoding, select either UTF-8 or ASCII. In the Message Subject field, enter a subject for the notification. From the Type options, select the data type in which you want to send the message: HTML or Text/Plain. In the Short Message field, enter a gist of the message in very few words. In the Long Message field, enter the message that will be sent as the notification, containing the available-data tokens to be replaced by the resolver at runtime. After you have entered the required values in all the fields, click Save. A message is displayed confirming the creation of the notification template. Click OK.

    4. Triggering the event

    A notification event can be triggered from different places in OIM. The logic behind the triggering must be coded and plugged into OIM. Examples of triggering points for notifications: event handlers (post-process notifications for specific data updates on OIM users), process tasks (to notify users that a provisioning task was executed by OIM), and scheduled tasks (to notify something related to the task). The scheduled job below has two parameters: Template Name defines the notification template to be sent, and User Login defines the user record that will provide the data to be sent in the notification. Sample code snippet:

        public void execute(String templateName, String userId) {
            try {
                NotificationService notService = Platform.getService(NotificationService.class);
                NotificationEvent eventToSend = this.createNotificationEvent(templateName, userId);
                notService.notify(eventToSend);
            } catch (Exception e) {
                e.printStackTrace();
            }
        }

        private NotificationEvent createNotificationEvent(String poTemplateName, String poUserId) {
            NotificationEvent event = new NotificationEvent();
            String[] receiverUserIds = { poUserId };
            event.setUserIds(receiverUserIds);
            event.setTemplateName(poTemplateName);
            event.setSender(null);
            HashMap<String, Object> templateParams = new HashMap<String, Object>();
            templateParams.put("USER_LOGIN", poUserId);
            event.setParams(templateParams);
            return event;
        }

        public HashMap getAttributes() {
            return null;
        }

        public void setAttributes() {}


  • Continuous Integration for SQL Server Part II – Integration Testing

    - by Ben Rees
    My previous post, on setting up Continuous Integration for SQL Server databases using GitHub, Bamboo and Red Gate's tools, covered the first two parts of a simple Database Continuous Delivery process: putting your database into a source control system, and running a continuous integration process each time changes are checked in. However, there is of course a lot more to Continuous Delivery than that. Specifically, in addition to the above: putting some actual integration tests into the CI process (otherwise they don't really do much, do they!?); deploying the database changes with a managed, automated approach; and monitoring what you've just put live, to make sure you haven't broken anything. This post will detail how to set up a very simple pipeline for implementing the first of these (continuous integration testing). NB: A lot of the setup in this post is built on top of the configuration from before, so it might be difficult to implement this post without running through part I first. There will then be a third post on automated database deployment, followed by a final post dealing with the last item - monitoring changes on the live system.

    In the previous post, I used a mixture of Red Gate products and other third-party software - GitHub and Atlassian Bamboo specifically. This was partly because I believe most people work in a heterogeneous environment, using software from different vendors to suit their purposes, and I wanted to show how this could work for this process. For example, you could easily substitute Atlassian's BitBucket or Stash for GitHub, depending on your needs, or use an alternative CI server such as TeamCity, TFS or Jenkins. In this post, however, I'll be using mostly Red Gate products (other than tSQLt). I would do this firstly because I work for Red Gate; however, I also think that in the area of Database Delivery processes, nobody else has the offerings to implement this process fully - so I didn't have any choice!

    Background on Continuous Delivery

    For me, a great source of information on what makes a proper Continuous Delivery process is the Jez Humble and David Farley classic: Continuous Delivery - Reliable Software Releases through Build, Test, and Deployment Automation. This book is not, of course, primarily about databases, and the process I outline here and in the previous article is a gross simplification of what Jez and David describe (not least because it's that much harder for databases!). However, a lot of the principles they describe can be equally applied to database development and, I would argue, should be. As I say, though, what I describe here is a very simple version of what would be required for a full production process. A couple of useful resources on handling some of these complexities can be found in the following two references: Refactoring Databases - Evolutionary Database Design, by Scott J Ambler and Pramod J. Sadalage; and Versioning Databases - Branching and Merging, by Scott Allen. In particular, I don't deal at all with the issues of multiple branches and the merging of those branches, an issue made particularly acute by the use of GitHub. The other point worth making is that, in the words of Martin Fowler: "Continuous Delivery is about keeping your application in a state where it is always able to deploy into production." I.e. we are not talking about continuously delivering updates to the production database every time someone checks in an amendment to a stored procedure. That is possible (and what Martin calls Continuous Deployment); however, again, that's more than I describe in this article. And I doubt I need to remind DBAs or developers to Proceed with Caution!

    Integration Testing

    Back to something practical. The next stage, building on our setup from the previous article, is to add some integration tests to the process. As I say, the CI process, though interesting, isn't enormously useful without some sort of test process running. For this we'll use the tSQLt framework, an open source framework designed specifically for running SQL Server tests. tSQLt is part of Red Gate's SQL Test, found at http://www.red-gate.com/products/sql-development/sql-test/, or it can be downloaded separately from www.tsqlt.org - though I'll provide a step-by-step guide below for setting it up.

    Getting tSQLt set up via SQL Test

    Go to http://www.red-gate.com/products/sql-development/sql-test/ and click on the blue Download button to download the Red Gate SQL Test product, if not already installed. Follow the install process for SQL Test to install the SQL Server Management Studio (SSMS) plugin on your machine, if not already installed. Open SSMS; you should now see SQL Test under the Tools menu. Clicking this link gives you the basic SQL Test dialogue. Though we've installed the SQL Test product, we haven't yet installed the tSQLt test framework on any particular database. To do this, add the RedGateApp database using this dialogue, by clicking on the "+ Add Database to SQL Test…" link, selecting the RedGateApp database and clicking the Add Database link. In the next screen, SQL Test describes what will be installed on the database for the tSQLt framework. Also in this dialogue, uncheck the "Add SQL Cop tests" option. SQL Cop is a great set of pre-defined tests that work within the tSQLt framework to check the general health of your SQL Server database; however, we won't be using them in this particular simple example. Once you've clicked the OK button, the changes described in the dialogue are made to your database. We've now installed the framework; however, we haven't actually created any tests, so that will be the next step. But before we proceed, we've made an update to our database, so we should again check this in to source control, adding comments as required. It's also worth a quick check that the build still runs with the new additions (and a quick check of the RedGateAppCI database shows that the changes have been made).

    Creating and Testing a Unit Test

    There are, of course, a lot of very interesting unit tests that you could and should set up for a database. The great thing about the tSQLt framework is that you can write these in SQL. The example I'm going to use here is pretty Mickey Mouse: our database table is going to include some email addresses as reference data, and I want to check whether these are all in a correct email format. Nothing clever, but it illustrates the process and hopefully shows the method by which more interesting tests could be set up.

    Adding Reference Data to our Database

    To start, I want to add some reference data to my database, and have this source controlled (as well as the schema). First of all I need to add some data to my solitary table; this can be done a number of ways, but I'll do it in SSMS for simplicity. I then add some reference data to my table. Currently this reference data just exists in the database. For proper integration testing, it needs to form part of the source-controlled version of the database, and so needs to be added to the Git repository. This can be done via SQL Source Control, though first a primary key needs to be added to the table. Right-click the table, select Design, then right-click on the first "id" row and click "Set Primary Key". NB: once this change is made, click Save to save the change to the table. Then, to source control this reference data, right-click on the table (dbo.Email) and select the static-data linking option. In the next screen, link the data in the Email table by selecting it from the list and clicking "save and close". At this point we should re-commit the changes (both the addition of the primary key, and the data) to the Git repo. NB: From here on, the GitHub side of things is the same each time: whenever a change is made in SQL Source Control and committed to your local folder, you then need to sync it in the GitHub Windows client (as this is where the build server, Bamboo, takes it from). An interesting point to note: when these changes are committed in SQL Source Control (right-click the database and select "Commit Changes to Source Control…"), the display gives a warning about possibly needing a migration script for the "Add Primary Key" step of the changes. This isn't actually necessary in this case, but the mechanism would allow you to create override scripts to replace the default change scripts created by the SQL Compare engine (which runs underneath SQL Source Control). Ignoring this message (!), we add a comment and commit the changes to Git. I then sync these, run a build (or the build gets run automatically), and check that the data is being deployed over to the target RedGateAppCI database.

    Creating and Running the Test

    As I mentioned, the test I'm going to use here is a very simple one: are the email addresses in my reference table valid? This isn't, of course, a full test of email validation (I expect the email addresses I've chosen here aren't really those of the Fab Four), but just a very basic check of the format used. I've taken the relevant SQL from this Stack Overflow article. In SSMS, select "SQL Test" from the Tools menu, then click on + New Test. In the next screen, give your new test a name, and also enter a name in the Test Class box (test classes are schemas that help you keep things organised). Also check that the database in which the test is going to be created is correct - RedGateApp in this example. Click "Create Test". After closing a couple of subsequent dialogues, you'll see a dummy script for the test, which needs filling in. We now need to define the SQL for our test. As mentioned before, tSQLt allows you to write your unit tests in T-SQL; the code I'm going to use here is below. This needs to be copied and pasted into the query window, to replace the default given by tSQLt:

        -- Basic email check test
        ALTER PROCEDURE [MyChecks].[test Check Email Addresses]
        AS
        BEGIN
            SET NOCOUNT ON

            DECLARE @Output VARCHAR(MAX)
            SET @Output = ''

            SELECT @Output = @Output + Email + CHAR(13) + CHAR(10)
            FROM dbo.Email
            WHERE Email NOT LIKE '%_@__%.__%'

            IF @Output > ''
            BEGIN
                SET @Output = CHAR(13) + CHAR(10) + @Output
                EXEC tSQLt.Fail @Output
            END
        END;

    Once this script is entered, hit execute to add the stored procedure to the database. Before committing the test to source control, it's worth just checking that it works! For a positive test, click on "SQL Test" from the Tools menu, then click Run Tests. You should see a green tick to indicate success. But of course, what we also need to do is test that this is actually doing something, by showing a failed test. Edit one of the email addresses in your table to an incorrect format, then re-run the same SQL Test: great - we now know that our test is really doing something! You'll also see a useful error message at the bottom of SSMS. (Leave the email address as invalid for now, for the next steps.) The next stage is to check this new test into source control again, by right-clicking on the database and checking in the changes with a commit message (and not forgetting to sync in the GitHub client).

    Checking that the Tests are Running as Integration Tests

    After the changes above are made, and after a build has run on Bamboo (manual or automatic), looking at the stored procedures for RedGateAppCI shows that the stored procedure for the new test has been moved over to the database. However, this is not exactly what we were after. We didn't want to just copy objects from one database to another, but to actually run the tests as part of the build/integration test process - i.e. we're continuously checking any changes we make (in this case, to the reference data emails) to ensure we're not breaking a test that we've set up. The behaviour we want to see is that, if we check in static data that is incorrect (as we did above) and we have the tSQLt test set up, then our build in Bamboo should fail. However, re-running the build shows a successful build - sadly! To make sure the tSQLt tests are run as part of the integration test, we need to amend a switch in the Red Gate CI config file. First, navigate to the file sqlCI.targets in your working folder, make the following change, save the document, then commit and sync the change in the GitHub client:

        <!-- tSQLt tests -->
        <!-- Optional -->
        <!-- To run tSQLt tests in source control for the database, enter true. -->
        <enableTsqlt>true</enableTsqlt>

    Now, if we re-run the build in Bamboo (NB: I've moved to a new server here, hence the different address and build number) - superb, a broken build!! The error message isn't great here, so to get more detailed info, click on the full build log link on the build page (below the fold). The interesting part of the log is towards the bottom:

        21-Jun-2013 11:35:19 Build FAILED.
        21-Jun-2013 11:35:19 "C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj" (default target) (1) ->
        21-Jun-2013 11:35:19 (sqlCI target) ->
        21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: RedGate.Deploy.SqlServerDbPackage.Shared.Exceptions.InvalidSqlException: Test Case Summary: 1 test case(s) executed, 0 succeeded, 1 failed, 0 errored. [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]
        21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: [MyChecks].[test Check Email Addresses] failed: [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]
        21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: ringo.starr@beatles [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]
        21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]
        21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: +----------------------+ [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]
        21-Jun-2013 11:35:19 EXEC : sqlCI error occurred: |Test Execution Summary| [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]

    As a final check, we should make sure that, if we now fix this error, the build succeeds. So in SSMS, I'm going to correct the invalid email address, then check this change into SQL Source Control (with a comment), commit to GitHub, and re-run the build. This should have fixed the build: it worked!

    Summary

    This has been a very quick run through the implementation of CI for databases, including tSQLt tests to check whether your database updates are working. The next post in this series will focus on automated deployment: we've tested our database changes; how can we now deploy them to target sites?

    Read the article

  • CodePlex Daily Summary for Friday, May 28, 2010

CodePlex Daily Summary for Friday, May 28, 2010

New Projects
Bang: Bang
Box Office: Event Management for Community Theater Groups: Box Office is an event management web application to help theater groups manage & promote their shows. Manage performance schedules, sell tickets, ...
CellsOnWeb: The space for the cells of the Microsoft Academic Program in Argentina.
CRM 4.0 Plugin Queue Item Counter: This is a CRM 4.0 plugin to count queue items in each folder and display the number at the end of the name. For example, if the queue name is "Tes...
Date Calculator: Date Calculator is a small desktop utility developed using Windows Forms .NET technology. This utility is analogous to the "Date calculation" modul...
Enterprise Library Investigate: Enterprise Library Investigate Project
eProject Management: A web-based application that supports managing and monitoring project progress for business organisations.
Fiddler TreeView Panel Extension: Extension for Fiddler, to display the session information in a TreeView panel instead of the default ListBox, so it groups the information logicall...
Git Source Control Provider: Git Source Control Provider is a Visual Studio plug-in that integrates Git with Visual Studio.
InspurProjects: Project on Inspur Co.
Kryptonite: The Kryptonite project aims to improve development of websites based on the Kentico CMS.
MLang .NET Wrapper: Detect the encoding of a text without a BOM (Byte Order Mark) and choose the best Encoding for persistence or network transport of text.
Mondaze: Proof of concept using Windows Azure.
MultipointControls: A collection of controls that apply the Windows Multipoint Mouse SDK, which enables apps to have multiple mice interact simultan...
Mundo De Bloques: "Mundo de bloques" makes it easier for analysts to find the shortest way between two states in a problem using a heuristic function for Artificial...
MyRPGtests: Just some tests :)
OffInvoice Add-in for MS Office 2010: The project is based on the ability to extend functionality in the Microsoft Office 2010 suite.
OpenGraph .NET: A C# client for the Facebook Graph API. Supports desktop, web, ASP.NET MVC, and Silverlight connections and real-time updates. PLEASE NOTE: I dis...
Portable Extensible Metadata (PEM) Data Annotation Generator: This project intends to help developers who use PEM - Portable Extensible Metadata for Entity Framework by generating Data Annotation information fro...
Production and sale of plastic window systems: Automation for a company producing window designs: production and sale of plastic window systems, management of sales contracts and their execution, print ...
Renjian Storm (Renjian Image Viewer Uploader): Renjian Image Viewer Uploader
Shark Web Intelligence CMS: Shark Web Intelligence Inc. Content Management System.
Shuffleboard Game for Windows Phone 7: This is a sample Shuffleboard game written in Silverlight for Windows Phone 7. It demonstrates physics, procedural animation, perspective transform...
Silverlight Property Grid: Visual Studio-style PropertyGrid for Silverlight.
SvnToTfs: Simple tool that migrates every Subversion revision to Team Foundation Server 2010. It is developed in C# with a WPF front-end.
Tamias: Basic Cms Mvc Contrib Portable Area: The goal of this project is to have an easy-to-integrate basic CMS for ASP.NET MVC applications based on MVC Contrib Portable Areas.
TwitBy: TwitBy is a Twitter client for anyone who uses Twitter. It's easy to use and all of the major features are there. More features to come. H...
Under Construction: A simple site that can be used as a splash for sites being upgraded or developed.
UO Editor: The Owner & Organisation Editor makes it easy to view and edit the names of the registered owner and registered organization for your Windows OS. N...
webform2010: this is the test project
Wireless Network: ss
WiX Toolset: The Windows Installer XML (WiX) is a toolset that builds Windows installation packages from XML source code. The toolset supports a command line en...
Xna.Extend: A collection of easy-to-use Xna components for aiding a game programmer in developing the next big thing. I plan on using the components from this...

New Releases
A Guide to Parallel Programming: Drop 4 - Guide Preface, Chapters 1 - 5, and code: This is Drop 4 with Guide Preface, Chapters 1 - 5, and References, and the accompanying code samples. This drop requires Visual Studio 2010 Beta 2 ...
Ajax Toolkit for ASP.NET MVC: MAT 1.1: MAT 1.1
Community Forums NNTP bridge: Community Forums NNTP Bridge V09: Release of the Community Forums NNTP Bridge to access the social and answers MS forums with a single, open source NNTP bridge. This release solves ...
Community Forums NNTP bridge: Community Forums NNTP Bridge V10: Release of the Community Forums NNTP Bridge to access the social and answers MS forums with a single, open source NNTP bridge. This release has add...
Community Forums NNTP bridge: Community Forums NNTP Bridge V11: Release of the Community Forums NNTP Bridge to access the social and answers MS forums with a single, open source NNTP bridge. This release has add...
CSS 360 Planetary Calendar: Beta Release: Beta Release Version: 0.2. Description: This is the beta release de...
Date Calculator: DateCalculator v1.0: This is the first release and as far as I know this is a stable version.
eComic: eComic 2010.0.0.4: Version 2010.0.0.4 Change Log: Fixed issues in the "Full Screen Control Panel" causing it to lack translucence; Added loupe magnification control ...
Expression Encoder Batch Processor: Runtime Application v0.2: New in this version: Added more error handling if files do not exist. Added button/feature to quit after the current encoding job. Added code to handl...
Fiddler TreeView Panel Extension: FiddlerTreeViewPanel 0.7: Initial compiled version of the assembly, ready to use. Please refer to http://fiddlertreeviewpanel.codeplex.com/ for instructions and installation.
Gardens Point LEX: Gardens Point LEX v1.1.4: The main distribution is a zip file. This contains the binary executable, documentation, source code and the examples. Changes: Version 1.1.4 corre...
Gardens Point Parser Generator: Gardens Point Parser Generator v1.4.1: Version 1.4.1 differs from version 1.4.0 only in containing a corrected version of a previously undocumented feature which allows the generation of...
IsWiX: IsWiX 1.0.264.0: Build 1.0.264.0 - built against Fireworks 1.0.264.0. Adds support for autogenerating the SourceDir preprocessor variable and gives the user the choice t...
Matrix: Matrix 0.5.2: Updated license
Mesopotamia Experiment: Mesopotamia 1.2.90: Release Notes - Upgraded to Microsoft Robotics Developer Studio 2008 R3. Bug Fixes - Fix to keep any sole organisms that penetrate to the next fitne...
Microsoft Crm 4.0 Filtered Lookup: Microsoft Crm 4.0 Filtered Lookup: How to use: Allow passing custom querystring values: Create a DWORD registry key named [DisableParameterFilter] under [HKEY_LOCAL_MACHINE\SOFTWAR...
MSBuild Extension Pack: May 2010: The MSBuild Extension Pack May 2010 release provides a collection of over 340 MSBuild tasks. A high level summary of what the tasks currently cover...
MultiPoint Vote: MultiPointVote v.1: This accepts user inputs: the number of participants, a poll/survey title and the list of options. A text file containing the items listed line per line...
Mundo De Bloques: Mundo de Bloques, Release 1: "Mundo de bloques" makes it easier for analysts to find the shortest way between two states in a problem using a heuristic function for Artificial...
OffInvoice Add-in for MS Office 2010: OffInvoice for Office 2010 V1.0 Installer: Add-in for MS Word 2010 or MS Excel 2010 to allow the management (issuing, visualization and reception) of electronic invoices, based on the XML fo...
OpenGraph .NET: 0.9.1 Beta: This is the first public release of OpenGraph .NET.
patterns & practices: Composite WPF and Silverlight: Prism v2.2 - May 2010 Release: Composite Application Guidance for WPF and Silverlight - May 2010 Release (Prism V2.2). The Composite Application Guidance for WPF and Silverlight ...
Portable Extensible Metadata (PEM) Data Annotation Generator: Release 49376: First release.
Production and sale of plastic window systems: Yanuary 2009: NOTE: Before loading the program, make sure you have installed MySQL and created the database stored in the source code (see below). Where Is The Source?...
PROGRAMMABLE SOFTWARE DEVELOPMENT ENVIRONMENT: PROGRAMMABLE SOFTWARE DEVELOPMENT ENVIRONMENT--3.2: The current version of the Programmable Software Development Environment has the capability of reading an optional text file in each source develop...
Rapidshare Episode Downloader: RED 0.8.6: - Fixed Edit form to actually save the data - Added Bypass Validation to enable future episodes - Added Search parameter to Edit form - Added refr...
Renjian Storm (Renjian Image Viewer Uploader): Renjian Storm 0.6: Renjian Storm v0.6, stable release
sELedit: sELedit v1.1b: + Fixed: when exporting and importing items to text files, there was a bug with "NULL" bytes in the Unicode string
Shake - C# Make: Shake v0.1.21: Changes: FileTask CopyDir method modified, see documentation
SharePoint Labs: SPLab7001A-ENU-Level100: This SharePoint Lab will teach how to analyze and audit WSP files. WSP files are somewhere in a no man's land between ITPro...
SharePoint Rsync List: 1.0.0.3: Fix SPContext dispose bug in menu; try to run jobs only on the central admin server; mark a single file failure if a file is not copied; don't delete destinat...
Shuffleboard Game for Windows Phone 7: Shuffleboard 1.0.0.1: Source code, solution files, and assets.
Software Is Hardwork: Sw. Is Hw. Lib. 3.0.0.x+04: Sw. Is Hw. Lib. 3.0.0.x+04
SoulHackers Demon Unite (Chinese version): WPFClient pre alpha: Can unite 2, 3 or more demons; can un-unite 1 demon into 2 demons (no triple un-unite yet).
Team Deploy: Team Deploy 2010 R1: This is the initial release of Team Deploy 2010 for TFS Team Build 2010. All features from Team Build 2.x are functional in this version. Comple...
Under Construction: Under Construction: All files required to show the under construction page. The page will pull through the domain name that the site is being run on; this allows you to use...
Unit Driven: Version 0.0.5: Tests nested by namespace parts. Run buttons properly disabled based on currently running tests. Timeouts for async tests enabled.
UO Editor: UO Editor v1.0: Initial Release
VCC: Latest build, v2.1.30527.0: Automatic drop of latest build
Web Service Software Factory Contrib: Import WSDL 2010: Generate Service Contract models from existing WSDL documents for Web Service Software Factory 2010. Usage: Install the vsix and right-click on a S...

Most Popular Projects
Rawr
WBFS Manager
AJAX Control Toolkit
Microsoft SQL Server Product Samples: Database
Silverlight Toolkit
Windows Presentation Foundation (WPF)
patterns & practices – Enterprise Library
Microsoft SQL Server Community & Samples
PHPExcel
ASP.NET

Most Active Projects
AStar.net
patterns & practices – Enterprise Library
GMap.NET - Great Maps for Windows Forms & Presentation
SqlServerExtensions
BlogEngine.NET
Rawr
patterns & practices: Windows Azure Security Guidance
CodeReview
Customer Portal Accelerator for Microsoft Dynamics CRM
Ionics Isapi Rewrite Filter

    Read the article

  • Asynchronous Streaming in ASP.NET WebApi

    - by andresv
Hi everyone, if you use the cool MVC4 WebApi you might find yourself in a common situation where you need to return a rather large amount of data (most probably from a database) and you want to accomplish two things:

1. Use streaming, so the client fetches the data as needed, which directly correlates to more fetching on the server side (from our database, for example) without consuming large amounts of memory.
2. Leverage the new MVC4 WebApi and the .NET 4.5 async/await asynchronous execution model to free ASP.NET thread-pool threads (if possible).

Now, #1 and #2 are not directly related to each other, and we could implement our code fulfilling one or the other, or both. The main point of #1 is that we want our method to return a stream to the caller immediately, with that client-side stream backed by a server-side stream that gets written (and its related database fetches performed) only when needed. This requires some form of "state machine" that keeps running on the server and "knows" what to fetch next into the output stream when the client asks for more content. This technique is generally called a "continuation", and it is nothing new in .NET; in fact, the IEnumerable<> interface and the "yield return" keyword do exactly that. So our first impulse might be to write our WebApi method more or less like this:

    public IEnumerable<Metadata> Get([FromUri] int accountId)
    {
        // Execute the command and get a reader
        using (var reader = GetMetadataListReader(accountId))
        {
            // Read rows and hand each mapped record back lazily
            while (reader.Read())
            {
                yield return MapRecord(reader);
            }
        }
    }

While the above method works, it unfortunately doesn't accomplish our objective of returning immediately to the caller, because the MVC WebApi infrastructure doesn't yet recognize our intentions: when it finds an IEnumerable return value, it enumerates it fully before returning anything to the client. To prove the point, here is a test method that calls this API:

    [TestMethod]
    public void StreamedDownload()
    {
        var baseUrl = @"http://localhost:57771/api/metadata/1";
        var client = new HttpClient();
        var sw = Stopwatch.StartNew();
        var stream = client.GetStreamAsync(baseUrl).Result;
        sw.Stop();
        Debug.WriteLine("Elapsed time Call: {0}ms", sw.ElapsedMilliseconds);
    }

I would expect the line "var stream = client.GetStreamAsync(baseUrl).Result" to return immediately, without server-side fetching of all the data in the database reader, but that is not what happens. To make the behaviour more evident, insert a wait (like Thread.Sleep(1000);) inside the "while" loop, and you will see that the client call (GetStreamAsync) does not return control until after n seconds (n being the number of reader records fetched).

Ok, we know this doesn't work, so the question becomes: is there a way to do it? Fortunately, yes! And it's not very difficult, although a little more convoluted than our simple IEnumerable return value. Maybe in the future this scenario will be automatically detected and supported in MVC/WebApi. The solution to our needs is a very handy class named PushStreamContent; our method signature then changes to accommodate it, returning an HttpResponseMessage instead of the IEnumerable<> we used before. The final code looks something like this:

    public HttpResponseMessage Get([FromUri] int accountId)
    {
        HttpResponseMessage response = Request.CreateResponse();

        // Create push content with a delegate that will get called when it is
        // time to write out the response.
        response.Content = new PushStreamContent(
            async (outputStream, httpContent, transportContext) =>
            {
                try
                {
                    // Execute the command and get a reader
                    using (var reader = GetMetadataListReader(accountId))
                    {
                        // Read rows asynchronously, serialize each record and
                        // write it to the output stream asynchronously
                        while (await reader.ReadAsync())
                        {
                            var rec = MapRecord(reader);
                            var str = await JsonConvert.SerializeObjectAsync(rec);
                            var buffer = UTF8Encoding.UTF8.GetBytes(str);

                            // Write out data to output stream
                            await outputStream.WriteAsync(buffer, 0, buffer.Length);
                        }
                    }
                }
                catch (HttpException ex)
                {
                    if (ex.ErrorCode == -2147023667) // The remote host closed the connection.
                    {
                        return;
                    }
                }
                finally
                {
                    // Close output stream as we are done
                    outputStream.Close();
                }
            });

        return response;
    }

As an extra bonus, all the classes involved already support the async/await asynchronous execution model, so taking advantage of it was very easy. Please note that the PushStreamContent class receives a lambda (specifically an Action) in its constructor, and we decorated our anonymous method with the async keyword (a not very well known but quite handy technique) so we can await the I/O-intensive calls we execute: reading from the database reader, serializing our entity, and finally writing to the output stream. If we execute the test again, we will immediately notice that the GetStreamAsync line returns right away, and the rest of the server code executes only as the client reads through the obtained stream; therefore we get low memory usage and far greater scalability for our beloved application when serving big chunks of data. Enjoy!

Andrés.
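To see the streaming from the consumer's end, here is a minimal client-side sketch, assuming the same local URL as the test above. The key detail is HttpCompletionOption.ResponseHeadersRead, which makes GetAsync return as soon as the response headers arrive instead of buffering the whole body:

    using System;
    using System.Net.Http;
    using System.Threading.Tasks;

    class StreamingClientSketch
    {
        static async Task ConsumeAsync()
        {
            using (var client = new HttpClient())
            // ResponseHeadersRead: hand control back once headers arrive,
            // instead of buffering the entire response body first
            using (var response = await client.GetAsync(
                "http://localhost:57771/api/metadata/1",
                HttpCompletionOption.ResponseHeadersRead))
            using (var stream = await response.Content.ReadAsStreamAsync())
            {
                var buffer = new byte[4096];
                int read;
                // Each ReadAsync completes as soon as some bytes are available,
                // so chunks arrive while the server is still fetching rows
                while ((read = await stream.ReadAsync(buffer, 0, buffer.Length)) > 0)
                {
                    Console.WriteLine("Received a chunk of {0} bytes", read);
                }
            }
        }

        static void Main()
        {
            // .NET 4.5 era: no async Main, so block on the task here
            ConsumeAsync().Wait();
        }
    }

If you put a small delay inside the server's write loop, you should see the byte counts trickle in rather than arrive all at once.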

    Read the article

  • Overcoming the 1024 character limit with setx

    - by Madhur Ahuja
I am trying to set environment variables using the setx command, as follows:

    setx PATH "f:\common tools\git\bin;f:\common tools\python\app;f:\common tools\python\app\scripts;f:\common tools\ruby\bin;f:\masm32\bin;F:\Borland\BCC55\Bin;%PATH%"

However, I get the following error if the value is more than 1024 characters long:

    WARNING: The data being saved is truncated to 1024 characters.
    SUCCESS: Specified value was saved.

Some of the paths at the end are not saved in the variable, presumably due to the character limit the warning describes.
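A workaround worth sketching (an assumption, not an official fix): the 1024-character cap is in the setx tool itself rather than in the registry value, so the .NET Environment API can write the longer value. For example, in C#:

    using System;

    class SetLongPath
    {
        static void Main()
        {
            // Hypothetical set of directories to prepend; substitute your own
            var additions = @"f:\common tools\git\bin;f:\common tools\python\app";

            // Read the current *user* PATH only. Expanding %PATH% inside setx
            // merges the machine PATH into the user value; reading the user
            // target directly avoids that duplication.
            var current = Environment.GetEnvironmentVariable(
                "PATH", EnvironmentVariableTarget.User) ?? "";

            // No 1024-character truncation here, unlike setx
            Environment.SetEnvironmentVariable(
                "PATH", additions + ";" + current,
                EnvironmentVariableTarget.User);
        }
    }

Already-open console windows won't see the change; newly started ones should.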

    Read the article
